CN106354271A - Method and terminal for processing voice message - Google Patents
Method and terminal for processing voice message
- Publication number
- CN106354271A CN106354271A CN201611059167.0A CN201611059167A CN106354271A CN 106354271 A CN106354271 A CN 106354271A CN 201611059167 A CN201611059167 A CN 201611059167A CN 106354271 A CN106354271 A CN 106354271A
- Authority
- CN
- China
- Prior art keywords
- voice
- user
- word message
- identified
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/10—Transforming into visible information
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiment of the invention discloses a method and terminal for processing a voice message. The method comprises the steps of: if a voice message is received, conducting voiceprint recognition on the voice message; determining whether the voice message contains an illegal user's voice; if the voice message contains an illegal user's voice, identifying the illegal user's voice in the voice message; converting the voice message into a corresponding text message and marking the text message corresponding to the illegal user's voice; and displaying the converted text message on the current display interface, wherein the user can edit the marked text message. The method and terminal can help the user delete the text of the illegal user, thereby improving the user experience.
Description
Technical field
The present invention relates to the field of electronic technology, and more particularly to a voice information processing method and terminal.
Background technology
At present, when a mobile terminal is used for voice input that is converted to text, a very common scenario is that while the user is speaking, other people near the terminal are also speaking. As a result, not only is the user's speech converted to text, but the speech of non-users is also converted to text and entered into the terminal's interface. The conventional approach to this situation is to delete, one by one, the pieces of text converted from the non-users' speech. This way of operating is tedious and inefficient, and the user experience is poor.
Content of the invention
In view of this, embodiments of the present invention provide a voice information processing method and a terminal.
In a first aspect, an embodiment of the present invention provides a voice information processing method, the method including:
if voice information is received, performing voiceprint recognition on the voice information;
judging whether an illegal user's voice exists in the voice information;
if an illegal user's voice exists in the voice information, identifying the illegal user's voice in the voice information;
converting the voice information into corresponding text information and marking the text information corresponding to the illegal user's voice;
displaying the converted text information on the current display interface, where the marked text information is available for the user to edit.
In another aspect, an embodiment of the present invention provides a terminal, the terminal including:
a first recognition unit, configured to perform voiceprint recognition on voice information if the voice information is received;
a first judging unit, configured to judge whether an illegal user's voice exists in the voice information;
a second recognition unit, configured to identify the illegal user's voice in the voice information if an illegal user's voice exists in the voice information;
a conversion and marking unit, configured to convert the voice information into corresponding text information and mark the text information corresponding to the illegal user's voice;
a display unit, configured to display the converted text information on the current display interface, where the marked text information is available for the user to edit.
In the above embodiments of the present invention, the voice information is converted into corresponding text information and the text information corresponding to the illegal user's voice is marked, which helps the user distinguish the illegal user's text. In addition, the converted text information is displayed on the current display interface and the marked text information can be edited by the user, which helps the user edit out the illegal user's text and improves the user experience.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a voice information processing method according to an embodiment of the present invention.
Fig. 2 is another schematic flowchart of a voice information processing method according to an embodiment of the present invention.
Fig. 3 is another schematic flowchart of a voice information processing method according to an embodiment of the present invention.
Fig. 4 is another schematic flowchart of a voice information processing method according to an embodiment of the present invention.
Figs. 4a-4c are schematic diagrams of the interface demonstration process, on a terminal, of a voice information processing method according to an embodiment of the present invention.
Fig. 5 is another schematic flowchart of a voice information processing method according to an embodiment of the present invention.
Figs. 5a-5c are schematic diagrams of the interface demonstration process, on a terminal, of a voice information processing method according to an embodiment of the present invention.
Fig. 6 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 7 is another schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 8 is another schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 9 is another schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 10 is another schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 11 is another schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of another embodiment of a terminal according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terms used in this specification of the present invention are merely for the purpose of describing particular embodiments and are not intended to limit the present invention. As used in the specification of the present invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Fig. 1 is a schematic flowchart of a voice information processing method according to Embodiment 1 of the present invention. The method includes steps s101 to s105.
S101: if voice information is received, perform voiceprint recognition on the voice information.
Specifically, in this embodiment of the present invention, the voice information is received through the microphone of the terminal. The voice information is the speech normally input by the user; depending on the environment the user is in, it may contain only the user's voice, or other illegal users' voices may be mixed into the user's input. Voiceprint recognition is performed on the voice information to judge whether an illegal user's voice exists in the input. The voiceprint recognition mainly compares the voiceprint features of the speech, and the voiceprint features mainly include the spectrum, cepstrum, formants, pitch, reflection coefficients and so on.
S102: judge whether an illegal user's voice exists in the voice information.
S103: if an illegal user's voice exists in the voice information, identify the illegal user's voice in the voice information.
Specifically, in this embodiment of the present invention, the illegal user's voice in the voice information may be identified in two ways. When voiceprint recognition is performed on a segment of user-input voice information, the different voiceprint features within that segment may be compared, and the speech whose voiceprint features differ the most is taken as the illegal user's voice; alternatively, a specific voiceprint feature may be set in the terminal in advance, and when voiceprint recognition is performed on a segment of user-input voice information, any speech whose voiceprint differs from the pre-set specific voiceprint feature is taken as the illegal user's voice, as illustrated by the sketch below.
S104: convert the voice information into corresponding text information and mark the text information corresponding to the illegal user's voice.
Specifically, in this embodiment of the present invention, the mark may be represented by a letter. For example, if there is text corresponding to one segment of illegal-user speech, the text corresponding to that segment is marked as a; if there is text corresponding to multiple segments of illegal-user speech, the text corresponding to each segment is marked as a, b, c and so on, as sketched below.
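A minimal sketch of the letter marking just described, assuming the converted text arrives as ordered segments already labelled legal or illegal by the voiceprint step; the data structure and field names are illustrative, not taken from the patent.

```python
from string import ascii_lowercase

def mark_segments(segments):
    """segments: list of (text, is_illegal) tuples in utterance order.
    Illegal-user segments receive the marks a, b, c, ... in turn
    (more than 26 illegal segments is not handled in this toy sketch)."""
    labels = iter(ascii_lowercase)
    return [{"text": text,
             "illegal": is_illegal,
             "mark": next(labels) if is_illegal else None}
            for text, is_illegal in segments]

# e.g. mark_segments([("meet at noon", False), ("turn left here", True)])
# -> the second segment carries mark "a" and can later be edited as a unit.
```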
S105: display the converted text information on the current display interface, where the marked text information is available for the user to edit.
Specifically, in this embodiment of the present invention, the editing operation is a deletion operation. The text information displayed on the current display interface is the complete text information, that is, it includes the marked text information. The marked text information is displayed on the current display interface in a state different from that of the unmarked text information; the display state may be indicated by font colour or underlining. For example, the font colour of the marked text may be red while the unmarked text is shown in another colour, or the marked text may be underlined, so that different pieces of text are distinguished by different display states. More specifically, when the user touches the marked text information, a deletion operation on the marked text information is generated.
In the above embodiment of the present invention, the voice information is converted into corresponding text information and the text information corresponding to the illegal user's voice is marked, which helps the user distinguish the illegal user's text. In addition, the converted text information is displayed on the current display interface and the marked text information can be edited by the user, which helps the user edit out the illegal user's text and improves the user experience.
Further, in this embodiment of the present invention, referring to Fig. 2, which is a schematic sub-flowchart of step s104: as shown in Fig. 2, step s104, converting the voice information into corresponding text information and marking the text information corresponding to the illegal user's voice, specifically includes steps s201 to s207.
S201: read the text information corresponding to the illegal user's voice.
S202: judge whether the text information corresponding to the illegal user's voice is continuous.
S203: if the text information corresponding to the illegal user's voice is continuous, form the corresponding text information into a word field.
S204: mark the word field.
S205: find the marked word fields corresponding to the illegal user's voice.
S206: associate the marked word fields to form an association relationship.
S207: if an editing operation on a marked word field is received, perform the editing operation on all marked word fields according to the association relationship. Specifically, in this embodiment of the present invention, the editing operation is a deletion operation: if a deletion operation on a marked word field is received, all marked word fields are deleted according to the association relationship.
In the above embodiment of the present invention, the word fields corresponding to each illegal user's voice can be associated, so that when the user chooses an editing operation, all the word fields corresponding to the associated illegal-user speech can be edited at once, as sketched below.
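A minimal sketch of the association relationship of steps s205-s207, reusing the marked-segment structure from the previous sketch; the function names and the choice to represent the association as a list of indices are assumptions made for illustration.

```python
def build_association(marked_segments):
    """Steps s205-s206: collect every marked (illegal-user) word field into
    one association so that a single edit can reach all of them."""
    return [i for i, seg in enumerate(marked_segments) if seg["mark"]]

def delete_associated(marked_segments, association):
    """Step s207 with a delete operation: one user action removes every
    associated word field and returns the remaining legal text."""
    keep = [seg["text"] for i, seg in enumerate(marked_segments)
            if i not in set(association)]
    return " ".join(keep)
```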
Further, in this embodiment of the present invention, referring to Fig. 3, which is a schematic sub-flowchart of step s105: as shown in Fig. 3, step s105, displaying the converted text information on the current display interface with the marked text information available for the user to edit, specifically includes steps s301 to s302.
S301: if a long-press operation by the user on the marked text information is received, trigger the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction. Specifically, in this embodiment of the present invention, the editing window is a deletion window and the editing instruction is a deletion instruction.
S302: if a pressing operation by the user on the marked text information is received, trigger the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Further, referring to Fig. 4, which is a schematic sub-flowchart of step s301 in the above method: as shown in Fig. 4, step s301 specifically includes steps s401 to s403.
S401: if a long-press operation by the user on the marked text information is received, start timing.
Specifically, in this embodiment of the present invention, the terminal is provided with a timing function; when the user's long-press operation is detected, timing starts.
S402: judge whether the timed duration reaches a preset time.
Specifically, in this embodiment of the present invention, the preset time may be 1 minute, 2 minutes or another preset value, and may also be a time defined by the user.
S403: if the timed duration reaches the preset time, trigger the terminal to display a corresponding deletion window on the current display interface for the user to input an editing instruction.
Specifically, the editing instruction is a deletion instruction. As shown in Figs. 4a and 4b, a text display interface h is shown on the terminal display screen; the text corresponding to the illegal user's voice is marked with a, and the marked text in the display interface h is shown underlined. When the long press 1 lasts until the preset time is reached, a deletion window j is displayed on the terminal display screen for the user to input a deletion instruction, so that the text marked a is deleted. A sketch of this timing check follows.
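A minimal sketch of the timing check of steps s401-s403, written as plain logic rather than against any particular mobile framework; the callback name, the polling style and the 2-second value used here are assumptions (the description itself allows a preset or user-defined duration).

```python
import time

PRESET_TIME = 2.0  # seconds; illustrative value, the preset duration is configurable

class LongPressDetector:
    """s401: start timing on press; s402: compare against the preset time;
    s403: show the deletion window once the preset time is reached."""
    def __init__(self, show_delete_window):
        self._show_delete_window = show_delete_window  # assumed UI callback
        self._started = None

    def press_down(self):
        self._started = time.monotonic()               # s401: start timing

    def poll(self):
        """Called periodically (or from a timer) while the press is held."""
        if self._started is None:
            return
        if time.monotonic() - self._started >= PRESET_TIME:   # s402
            self._show_delete_window()                         # s403
            self._started = None

    def press_up(self):
        self._started = None                            # released too early
```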
Further, referring to Fig. 5, which is a schematic sub-flowchart of step s302 in the above method: as shown in Fig. 5, step s302 specifically includes steps s501 to s503.
S501: if a pressing operation by the user on the marked text information is received, detect the pressure value of the pressing operation.
S502: judge whether the pressure value reaches the pre-set pressure range.
S503: if the pressure value reaches the pre-set pressure range, trigger the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Specifically, as shown in Figs. 5a and 5b, a text display interface h is shown on the terminal display screen; the text corresponding to the illegal user's voice is marked with a, and the marked text in the display interface h is shown underlined. When the pressure of the pressing operation 2 reaches the pre-set pressure range, a deletion window j is displayed on the terminal display screen for the user to input a deletion instruction, so that the text marked a is deleted. A sketch of this pressure check follows.
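A minimal sketch of the pressure check of steps s501-s503; the numeric range, the normalised pressure scale and the callback name are assumptions, since the description only requires the detected pressure to be compared against a pre-set value.

```python
# Illustrative pressure gate for steps s501-s503. Real touch stacks usually
# report a normalised pressure value; the range below is an assumption.
PRESSURE_RANGE = (0.6, 1.0)

def on_press(pressure, show_delete_window):
    """s501: pressure detected; s502: compare with the pre-set range;
    s503: trigger the deletion window when the check passes."""
    low, high = PRESSURE_RANGE
    if low <= pressure <= high:
        show_delete_window()
        return True
    return False
```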
As shown in Fig. 6, corresponding to the above voice information processing method, an embodiment of the present invention further provides a terminal. The terminal 100 includes: a first recognition unit 10, a first judging unit 20, a second recognition unit 30, a conversion and marking unit 40 and a display unit 50.
The first recognition unit 10 is configured to perform voiceprint recognition on voice information input by the user if the voice information is received. Specifically, in this embodiment of the present invention, the voice information is received through the microphone of the terminal and is the speech normally input by the user; depending on the environment the user is in, it may contain only the user's voice, or other illegal users' voices may be mixed into the user's input. Voiceprint recognition is performed on the voice information to judge whether an illegal user's voice exists in the input; the voiceprint recognition mainly compares the voiceprint features of the speech, and the voiceprint features mainly include the spectrum, cepstrum, formants, pitch, reflection coefficients and so on.
The first judging unit 20 is configured to judge whether an illegal user's voice exists in the voice information.
The second recognition unit 30 is configured to identify the illegal user's voice in the voice information if an illegal user's voice exists in the voice information. Specifically, in this embodiment of the present invention, the illegal user's voice may be identified by comparing the different voiceprint features within a segment of user-input voice information and taking the speech whose voiceprint features differ the most as the illegal user's voice, or a specific voiceprint feature may be set in the terminal in advance, and when voiceprint recognition is performed on a segment of user-input voice information, any speech whose voiceprint differs from the pre-set specific voiceprint feature is taken as the illegal user's voice.
The conversion and marking unit 40 is configured to convert the voice information into corresponding text information and mark the text information corresponding to the illegal user's voice. Specifically, in this embodiment of the present invention, the mark may be represented by a letter. For example, if there is text corresponding to one segment of illegal-user speech, the text corresponding to that segment is marked as a; if there is text corresponding to multiple segments of illegal-user speech, the text corresponding to each segment is marked as a, b, c and so on.
The display unit 50 is configured to display the converted text information on the current display interface, where the marked text information is available for the user to edit. Specifically, in this embodiment of the present invention, the editing operation is a deletion operation. The text information displayed on the current display interface is the complete text information, that is, it includes the marked text information. The marked text information is displayed in a state different from that of the unmarked text information; the display state may be indicated by font colour or underlining. For example, the font colour of the marked text may be red while the unmarked text is shown in another colour, or the marked text may be underlined, so that different pieces of text are distinguished by different display states. More specifically, when the user touches the marked text information, a deletion operation on the marked text information is generated; the touch may be a long-press operation or a pressing operation.
In the above embodiment of the present invention, the voice information is converted into corresponding text information and the text information corresponding to the illegal user's voice is marked, which helps the user distinguish the illegal user's text. In addition, the converted text information is displayed on the current display interface and the marked text information can be edited by the user, which helps the user edit out the illegal user's text and improves the user experience.
Specifically, as shown in Fig. 7, the conversion and marking unit 40 specifically includes:
a reading unit 401, configured to read the text information corresponding to the illegal user's voice;
a second judging unit 402, configured to judge whether the text information corresponding to the illegal user's voice is continuous;
a forming unit 403, configured to form the corresponding text information into a word field if the text information corresponding to the illegal user's voice is continuous;
a marking unit 404, configured to mark the word field.
Specifically, as shown in Fig. 8, the conversion and marking unit 40 further includes:
a searching unit 405, configured to find the marked word fields corresponding to the illegal user's voice;
an associating unit 406, configured to associate the marked word fields and form an association relationship;
an executing unit 407, configured to perform, if an editing operation on a marked word field is received, the editing operation on all marked word fields according to the association relationship.
Specifically, as shown in Fig. 9, the display unit 50 specifically includes:
a first triggering unit 501, configured to trigger, if a long-press operation by the user on the marked text information is received, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction;
a second triggering unit 502, configured to trigger, if a pressing operation by the user on the marked text information is received, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Specifically, as shown in Fig. 10, the first triggering unit 501 specifically includes:
a timing unit 5011, configured to start timing if a long-press operation by the user on the marked text information is received; specifically, in this embodiment of the present invention, the terminal is provided with a timing function, and timing starts when the user's long-press operation is detected;
a third judging unit 5012, configured to judge whether the timed duration reaches a preset time; specifically, in this embodiment of the present invention, the preset time may be 1 minute, 2 minutes or another preset value, and may also be a time defined by the user;
a third triggering unit 5013, configured to trigger, if the timed duration reaches the preset time, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction. Specifically, the editing window is a deletion window and the editing instruction is a deletion instruction. As shown in Figs. 4a and 4b, a text display interface h is shown on the terminal display screen; the text corresponding to the illegal user's voice is marked with a, and the marked text in the display interface h is shown underlined. When the long press 1 lasts until the preset time is reached, a deletion window j is displayed on the terminal display screen for the user to input a deletion instruction, so that the text marked a is deleted.
Specifically, as shown in Fig. 11, the second triggering unit 502 specifically includes:
a detecting unit 5021, configured to detect the pressure value of a pressing operation if the pressing operation by the user on the marked text information is received;
a fourth judging unit 5022, configured to judge whether the pressure value reaches the pre-set pressure range;
a fourth triggering unit 5023, configured to trigger, if the pressure value reaches the pre-set pressure range, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Specifically, as shown in Figs. 5a and 5b, a text display interface h is shown on the terminal display screen; the text corresponding to the illegal user's voice is marked with a, and the marked text in the display interface h is shown underlined. When the pressure of the pressing operation 2 reaches the pre-set pressure range, a deletion window j is displayed on the terminal display screen for the user to input a deletion instruction, so that the text marked a is deleted.
Fig. 12 is a schematic structural diagram of another embodiment of the terminal of the present invention. As shown in Fig. 12, the terminal may include: an input device 99, an output device 88, a transceiver 77, a memory 66 and a processor 55, where:
the input device 99 is configured to receive input data from an external access control device. In specific implementations, the input device 99 in this embodiment of the present invention may include a keyboard, a mouse, a photoelectric input device, a sound input device, a touch input device, a scanner and the like;
the output device 88 is configured to output data of the access control device externally. In specific implementations, the output device 88 in this embodiment of the present invention may include a display, a loudspeaker, a printer and the like;
the transceiver 77 is configured to send data to other devices or receive data from other devices over a communication link. In specific implementations, the transceiver 77 in this embodiment of the present invention may include transceiving components such as a radio-frequency antenna;
the memory 66 is configured to store program data with various functions. In this embodiment of the present invention, the data stored in the memory 66 includes program data that can be called and run. In specific implementations, the memory 66 may be a system memory, for example volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or a combination of both; the memory 66 may also be an external memory outside the system, such as a magnetic disk, an optical disc or a magnetic tape.
The processor 55 is configured to call the program data stored in the memory 66 and perform the following operations:
if voice information is received, performing voiceprint recognition on the voice information; judging whether an illegal user's voice exists in the voice information; if an illegal user's voice exists in the voice information, identifying the illegal user's voice in the voice information; converting the voice information into corresponding text information and marking the text information corresponding to the illegal user's voice; displaying the converted text information on the current display interface, where the marked text information is available for the user to edit.
Further, the processor 55 also performs the following operations:
reading the text information corresponding to the illegal user's voice; judging whether the text information corresponding to the illegal user's voice is continuous; if the text information corresponding to the illegal user's voice is continuous, forming the corresponding text information into a word field; marking the word field.
Further, the processor 55 also performs the following operations:
finding the marked word fields corresponding to the illegal user's voice; associating the marked word fields and forming an association relationship; if an editing operation on a marked word field is received, performing the editing operation on all marked word fields according to the association relationship.
Further, the processor 55 also performs the following operations:
if a long-press operation by the user on the marked text information is received, or if a pressing operation by the user on the marked text information is received, triggering the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Further, the processor 55 also performs the following operations:
if a long-press operation by the user on the marked text information is received, starting timing; judging whether the timed duration reaches a preset time; if the timed duration reaches the preset time, triggering the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Further, the processor 55 also performs the following operations:
if a pressing operation by the user on the marked text information is received, detecting the pressure value of the pressing operation; judging whether the pressure value reaches the pre-set pressure range; if the pressure value reaches the pre-set pressure range, triggering the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
The units in all embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (central processing unit), or by an ASIC (application-specific integrated circuit).
The steps in the methods of the embodiments of the present invention may be reordered, combined or deleted according to actual needs. The units in the terminal of the embodiments of the present invention may be combined, divided or deleted according to actual needs.
A person of ordinary skill in the art may understand that all or some of the processes of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the embodiments of the above methods may be included. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM) or the like.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person familiar with the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A voice information processing method, characterized in that the method comprises:
if voice information is received, performing voiceprint recognition on the voice information;
judging whether an illegal user's voice exists in the voice information;
if an illegal user's voice exists in the voice information, identifying the illegal user's voice in the voice information;
converting the voice information into corresponding text information and marking the text information corresponding to the illegal user's voice;
displaying the converted text information on the current display interface, wherein the marked text information is available for the user to edit.
2. The method according to claim 1, characterized in that converting the voice information into corresponding text information and marking the text information corresponding to the illegal user's voice specifically comprises:
reading the text information corresponding to the illegal user's voice;
judging whether the text information corresponding to the illegal user's voice is continuous;
if the text information corresponding to the illegal user's voice is continuous, forming the corresponding text information into a word field;
marking the word field.
3. The method according to claim 2, characterized in that the method further comprises:
finding the marked word fields corresponding to the illegal user's voice;
associating the marked word fields and forming an association relationship;
if an editing operation on a marked word field is received, performing the editing operation on all marked word fields according to the association relationship.
4. The method according to claim 1, characterized in that displaying the converted text information on the current display interface, wherein the marked text information is available for the user to edit, specifically comprises:
if a long-press operation by the user on the marked text information is received, or if a pressing operation by the user on the marked text information is received, triggering the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
5. The method according to claim 4, characterized in that:
triggering, if a long-press operation by the user on the marked text information is received, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction specifically comprises:
if a long-press operation by the user on the marked text information is received, starting timing;
judging whether the timed duration reaches a preset time;
if the timed duration reaches the preset time, triggering the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction;
and triggering, if a pressing operation by the user on the marked text information is received, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction specifically comprises:
if a pressing operation by the user on the marked text information is received, detecting the pressure value of the pressing operation;
judging whether the pressure value reaches the pre-set pressure range;
if the pressure value reaches the pre-set pressure range, triggering the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
6. A terminal, characterized in that the terminal comprises:
a first recognition unit, configured to perform voiceprint recognition on voice information if the voice information is received;
a first judging unit, configured to judge whether an illegal user's voice exists in the voice information;
a second recognition unit, configured to identify the illegal user's voice in the voice information if an illegal user's voice exists in the voice information;
a conversion and marking unit, configured to convert the voice information into corresponding text information and mark the text information corresponding to the illegal user's voice;
a display unit, configured to display the converted text information on the current display interface, wherein the marked text information is available for the user to edit.
7. The terminal according to claim 6, characterized in that the conversion and marking unit specifically comprises:
a reading unit, configured to read the text information corresponding to the illegal user's voice;
a second judging unit, configured to judge whether the text information corresponding to the illegal user's voice is continuous;
a forming unit, configured to form the corresponding text information into a word field if the text information corresponding to the illegal user's voice is continuous;
a marking unit, configured to mark the word field.
8. The terminal according to claim 7, characterized in that the terminal further comprises:
a searching unit, configured to find the marked word fields corresponding to the illegal user's voice;
an associating unit, configured to associate the marked word fields and form an association relationship;
an executing unit, configured to perform, if an editing operation on a marked word field is received, the editing operation on all marked word fields according to the association relationship.
9. The terminal according to claim 6, characterized in that the display unit specifically comprises:
a first triggering unit, configured to trigger, if a long-press operation by the user on the marked text information is received, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction;
a second triggering unit, configured to trigger, if a pressing operation by the user on the marked text information is received, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
10. The terminal according to claim 9, characterized in that:
the first triggering unit specifically comprises:
a timing unit, configured to start timing if a long-press operation by the user on the marked text information is received;
a third judging unit, configured to judge whether the timed duration reaches a preset time;
a third triggering unit, configured to trigger, if the timed duration reaches the preset time, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction;
and the second triggering unit specifically comprises:
a detecting unit, configured to detect the pressure value of a pressing operation if the pressing operation by the user on the marked text information is received;
a fourth judging unit, configured to judge whether the pressure value reaches the pre-set pressure range;
a fourth triggering unit, configured to trigger, if the pressure value reaches the pre-set pressure range, the terminal to display a corresponding editing window on the current display interface for the user to input an editing instruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611059167.0A CN106354271A (en) | 2016-11-23 | 2016-11-23 | Method and terminal for processing voice message |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611059167.0A CN106354271A (en) | 2016-11-23 | 2016-11-23 | Method and terminal for processing voice message |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106354271A true CN106354271A (en) | 2017-01-25 |
Family
ID=57862749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611059167.0A Pending CN106354271A (en) | 2016-11-23 | 2016-11-23 | Method and terminal for processing voice message |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106354271A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010060850A (en) * | 2008-09-04 | 2010-03-18 | Nec Corp | Minute preparation support device, minute preparation support method, program for supporting minute preparation and minute preparation support system |
CN102522084A (en) * | 2011-12-22 | 2012-06-27 | 广东威创视讯科技股份有限公司 | Method and system for converting voice data into text files |
CN103369122A (en) * | 2012-03-31 | 2013-10-23 | 盛乐信息技术(上海)有限公司 | Voice input method and system |
CN105094833A (en) * | 2015-08-03 | 2015-11-25 | 联想(北京)有限公司 | Data Processing method and system |
CN105488227A (en) * | 2015-12-29 | 2016-04-13 | 惠州Tcl移动通信有限公司 | Electronic device and method for processing audio file based on voiceprint features through same |
CN105975569A (en) * | 2016-05-03 | 2016-09-28 | 深圳市金立通信设备有限公司 | Voice processing method and terminal |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223709A (en) * | 2019-05-31 | 2019-09-10 | 维沃移动通信有限公司 | A kind of recording frequency spectrum display methods and terminal device |
CN110223709B (en) * | 2019-05-31 | 2021-08-27 | 维沃移动通信有限公司 | Recorded audio spectrum display method and terminal equipment |
CN112115686A (en) * | 2019-06-21 | 2020-12-22 | 珠海金山办公软件有限公司 | Document editing method and device, computer storage medium and terminal |
CN112115686B (en) * | 2019-06-21 | 2024-05-07 | 珠海金山办公软件有限公司 | Method and device for editing document, computer storage medium and terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5685702B2 (en) | Speech recognition result management apparatus and speech recognition result display method | |
CN100521708C (en) | Voice recognition and voice tag recoding and regulating method of mobile information terminal | |
CN107331400A (en) | A kind of Application on Voiceprint Recognition performance improvement method, device, terminal and storage medium | |
KR20190061706A (en) | Voice recognition system and method for analyzing plural intention command | |
CN104142879B (en) | A kind of audio loudness reminding method, device and user terminal | |
CN101931701A (en) | Method, system and mobile terminal for prompting contact information in communication process | |
CN104050966A (en) | Voice interaction method of terminal equipment and terminal equipment employing voice interaction method | |
CN109274831A (en) | A kind of audio communication method, device, equipment and readable storage medium storing program for executing | |
TWI509432B (en) | Electronic device and language analysis method thereof | |
CN107346229A (en) | Pronunciation inputting method and device, computer installation and readable storage medium storing program for executing | |
KR20190029237A (en) | Apparatus for interpreting and method thereof | |
US20040176139A1 (en) | Method and wireless communication device using voice recognition for entering text characters | |
CN108595412A (en) | Correction processing method and device, computer equipment and readable medium | |
KR20150094419A (en) | Apparatus and method for providing call record | |
CN101370053B (en) | Mobile terminal and method for operating mobile terminal by headphone | |
KR20140003035A (en) | Control method for terminal using context-aware and terminal thereof | |
CN103064828B (en) | A kind of method and device operating text | |
CN106354271A (en) | Method and terminal for processing voice message | |
CN105848031A (en) | Earphone sound channel adjusting method and device | |
CN107465818A (en) | The method and terminal of a kind of virtual incoming call | |
CN106708632A (en) | Information editing method and information editing device | |
CN106529638A (en) | Information processing method and device | |
JP2010197669A (en) | Portable terminal, editing guiding program, and editing device | |
CN109065017B (en) | Voice data generation method and related device | |
KR20170010978A (en) | Method and apparatus for preventing voice phishing using pattern analysis of communication content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20170125 |
|
WD01 | Invention patent application deemed withdrawn after publication |