CN105448300A - Method and device for calling - Google Patents

Method and device for calling

Info

Publication number
CN105448300A
CN105448300A
Authority
CN
China
Prior art keywords
voice signal
speech model
preset
voice
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510770291.7A
Other languages
Chinese (zh)
Inventor
高毅
王洪强
葛云源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510770291.7A priority Critical patent/CN105448300A/en
Publication of CN105448300A publication Critical patent/CN105448300A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 — Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 — Changing voice quality, e.g. pitch or formants

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

The invention relates to a method and a device for calling. The method comprises: acquiring a first voice signal of the local party during a call with the other party; transforming the first voice signal by using a preset speech model to obtain a second voice signal; and transmitting the second voice signal to the other party. Because the voice signal transmitted during the call is the signal obtained after voice transformation, the other party hears the transformed voice. The call effect desired by the user is therefore achieved, the personalized needs of the caller for the call voice are satisfied, and the user experience is enhanced.

Description

Method and device for calling
Technical field
The present disclosure relates to the field of communications, and in particular to a method and a device for calling.
Background art
With the popularity of terminal devices used for communication, making calls with a terminal device has become a common mode of communication.
In the related art, a call is a communication technique in which acoustic energy and electrical energy are converted into each other and a voice signal is carried over an electrical medium. When the caller speaks into the transmitter of a terminal device, the vibration of the vocal cords excites the air and forms sound waves. The sound waves act on the transmitter, which produces a voice signal reflecting the speaker's real voice. The voice signal is delivered to the receiver of the other party's telephone, and the receiver converts the voice signal back into sound waves, which travel through the air to the listener's ear. Therefore, after the call is connected, what the other party hears is always the real voice of the speaker.
However, transmitting only the voice signal that reflects the caller's real voice cannot satisfy the caller's personalized needs for the call voice.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a method and a device for calling.
According to a first aspect of the embodiments of the present disclosure, a method for calling is provided, comprising: during a call with the other party, acquiring a first voice signal of the local party; performing voice transformation on the first voice signal by using a preset speech model to obtain a second voice signal; and transmitting the second voice signal to the other party.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because the first voice signal of the local party is acquired during the call, transformed into the second voice signal by using the preset speech model, and the second voice signal is transmitted to the other party, the voice signal transmitted during the call is the signal obtained after voice transformation, and what the other party hears is the transformed voice. The call effect desired by the user is thus achieved, the caller's personalized needs for the call voice are satisfied, and the user experience is enhanced.
In a first possible implementation of the first aspect of the embodiments of the present disclosure, the preset speech model comprises a preset sensitive word and a special-effect voice corresponding to the preset sensitive word. Performing voice transformation on the first voice signal by using the preset speech model to obtain the second voice signal comprises: when it is detected that the first voice signal contains the preset sensitive word, transforming the preset sensitive word contained in the first voice signal into the special-effect voice corresponding to the preset sensitive word, to obtain a second voice signal containing the special-effect voice.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because the preset sensitive word contained in the first voice signal can be transformed into the corresponding special-effect voice, the other party can hear special-effect voice that differs from the real voice, the caller's personalized needs for special-effect voice are satisfied, and the user experience is enhanced.
In a second possible implementation of the first aspect of the embodiments of the present disclosure, the preset speech model comprises a speech model for adjusting sound characteristics. Performing voice transformation on the first voice signal by using the preset speech model to obtain the second voice signal comprises: adjusting the sound characteristics of the first voice signal by using the speech model for adjusting sound characteristics, to obtain a second voice signal with adjusted sound characteristics.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because the sound characteristics of the first voice signal can be adjusted by using the speech model for adjusting sound characteristics, the other party can hear sound characteristics that differ from the real voice, the caller's personalized needs regarding the sound are satisfied, and the user experience is enhanced.
In a third possible implementation of the first aspect of the embodiments of the present disclosure, the method further comprises: providing a plurality of speech models; receiving a selection instruction that selects one or more speech models from the plurality of speech models; and setting, according to the selection instruction, the selected one or more speech models as the preset speech model.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because a plurality of speech models is provided and the user can input a selection instruction that selects one or more of them to be set as the preset speech model, the user can flexibly choose the desired voice transformation according to his or her own preferences and needs, so that the other party hears the intended sound effect, and the user experience is better.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, setting the selected speech model as the preset speech model according to the selection instruction comprises: setting, as the preset speech model, the speech model selected by the selection instruction received most recently before the current time.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because the speech model selected by the most recently received selection instruction is set as the preset speech model, a user who has selected an unsatisfactory speech model can simply select another one, and the previously selected, unsatisfactory speech model is discarded automatically. This both saves computing resources of the terminal device and spares the user extra operations such as cancelling a selection, so the execution efficiency is higher.
With reference to the third possible implementation of the first aspect, in a fifth possible implementation, setting the selected plurality of speech models as the preset speech model according to the selection instruction comprises: setting the plurality of speech models selected by the selection instruction as preset speech models that respectively perform voice transformation on the first voice signal in sequence.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because the plurality of speech models selected by the selection instruction is set as preset speech models that respectively perform voice transformation on the first voice signal in sequence, the user can transform his or her own voice in several respects as needed, which better meets the user's requirements.
With reference to the third possible implementation of the first aspect, in a sixth possible implementation, providing the plurality of speech models comprises: displaying a voice-processing button in the call interface and, when a click instruction on the voice-processing button is received, popping up a dialog box containing options for the plurality of speech models. Receiving the selection instruction that selects one or more speech models from the plurality of speech models comprises: receiving, through the dialog box, a selection instruction that selects a speech model from the plurality of speech model options.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because a voice-processing button is displayed in the call interface, the user can select a speech model through the dialog box containing the speech model options; this display mode is intuitive and more convenient for the user to operate.
In a seventh possible implementation of the first aspect of the embodiments of the present disclosure, acquiring the first voice signal of the local party during the call with the other party comprises: during the call with the other party, acquiring, at intervals of a preset duration, the first voice signal of the local party within the preset duration. Performing voice transformation on the first voice signal by using the preset speech model to obtain the second voice signal comprises: performing voice transformation on the first voice signal within the preset duration by using the preset speech model, to obtain the second voice signal.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: because the first voice signal of the local party within the preset duration is acquired at intervals of the preset duration and transformed by the preset speech model to obtain the second voice signal, the voice transformation of the first voice signal of a previous time interval can be performed while the first voice signal of the current time interval is being received. This improves the execution efficiency of the voice transformation, reduces the delay in outputting the second voice signal, and yields a continuous call effect.
According to a second aspect of the embodiments of the present disclosure, a device for calling is provided, comprising: an acquisition module configured to acquire a first voice signal of the local party during a call with the other party; a conversion module configured to perform voice transformation on the first voice signal acquired by the acquisition module by using a preset speech model, to obtain a second voice signal; and a transmission module configured to transmit the second voice signal obtained by the conversion module to the other party.
In a first possible implementation of the second aspect of the embodiments of the present disclosure, the preset speech model comprises a preset sensitive word and a special-effect voice corresponding to the preset sensitive word. The conversion module is configured to, when it is detected that the first voice signal contains the preset sensitive word, transform the preset sensitive word contained in the first voice signal into the special-effect voice corresponding to the preset sensitive word, to obtain a second voice signal containing the special-effect voice.
In a second possible implementation of the second aspect of the embodiments of the present disclosure, the preset speech model comprises a speech model for adjusting sound characteristics. The conversion module is configured to adjust the sound characteristics of the first voice signal by using the speech model for adjusting sound characteristics, to obtain a second voice signal with adjusted sound characteristics.
In a third possible implementation of the second aspect of the embodiments of the present disclosure, the device further comprises: a model providing module configured to provide a plurality of speech models; a selection receiving module configured to receive a selection instruction that selects one or more speech models from the plurality of speech models provided by the model providing module; and a model presetting module configured to set, according to the selection instruction received by the selection receiving module, the selected one or more speech models as the preset speech model.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the model presetting module is configured to set, as the preset speech model, the speech model selected by the selection instruction received most recently before the current time.
With reference to the third possible implementation of the second aspect, in a fifth possible implementation, the model presetting module is configured to set the plurality of speech models selected by the selection instruction as preset speech models that respectively perform voice transformation on the first voice signal in sequence.
With reference to the third possible implementation of the second aspect, in a sixth possible implementation, the model providing module comprises: a button display submodule configured to display a voice-processing button in the call interface; and an option popup submodule configured to pop up a dialog box containing options for the plurality of speech models when a click instruction on the voice-processing button displayed by the button display submodule is received. The selection receiving module is configured to receive, through the dialog box, a selection instruction that selects a speech model from the plurality of speech model options.
In a seventh possible implementation of the second aspect of the embodiments of the present disclosure, the acquisition module is configured to acquire, during the call with the other party and at intervals of a preset duration, the first voice signal of the local party within the preset duration. The conversion module is configured to perform voice transformation on the first voice signal acquired by the acquisition module within the preset duration by using the preset speech model, to obtain the second voice signal.
According to a third aspect of the embodiments of the present disclosure, a device for calling is provided, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to: acquire a first voice signal of the local party during a call with the other party; perform voice transformation on the first voice signal by using a preset speech model to obtain a second voice signal; and transmit the second voice signal to the other party.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a schematic structural diagram of an implementation environment according to an exemplary embodiment.
Fig. 2 is a flowchart of a method for calling according to an exemplary embodiment.
Fig. 3 is a flowchart of a method for calling according to another exemplary embodiment.
Fig. 4 is a flowchart of a method for calling according to another exemplary embodiment.
Fig. 5 is a schematic diagram of a call interface according to an exemplary embodiment.
Fig. 6 is a flowchart of a method for calling according to another exemplary embodiment.
Fig. 7 is a flowchart of a method for calling according to another exemplary embodiment.
Fig. 8 is a block diagram of a device for calling according to an exemplary embodiment.
Fig. 9 is a block diagram of a device for calling according to another exemplary embodiment.
Fig. 10 is a block diagram of a device for calling according to another exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, and examples of them are shown in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. On the contrary, they are merely examples of devices and methods consistent with some aspects of the present disclosure as set forth in detail in the appended claims.
Fig. 1 is a schematic structural diagram of an implementation environment according to an exemplary embodiment. As shown in Fig. 1, this implementation environment may comprise at least two terminal devices, where terminal device 110 can receive incoming calls from terminal device 120, and terminal device 120 can likewise receive incoming calls from terminal device 110. The terminal devices may be, for example, wired telephones, wireless telephones, smart phones, tablet computers, and other terminal devices.
Fig. 2 is a flowchart of a method for calling according to an exemplary embodiment. This embodiment is described by taking as an example the application of the method to terminal device 110 or terminal device 120 shown in Fig. 1. The method may comprise:
In step 210, during a call with the other party, a first voice signal of the local party is acquired.
In step 220, voice transformation is performed on the first voice signal by using a preset speech model, to obtain a second voice signal.
It should be noted that the embodiments of the present disclosure do not limit the implementation of the preset speech model.
In one possible implementation, the preset speech model may comprise a preset sensitive word and a special-effect voice corresponding to the preset sensitive word. In this implementation, when it is detected that the first voice signal contains the preset sensitive word, the preset sensitive word contained in the first voice signal can be transformed into the special-effect voice corresponding to the preset sensitive word, to obtain a second voice signal containing the special-effect voice.
For example, when the sensitive word "sleepy" is detected in the first voice signal of the local party, a corresponding yawning sound can be output. For another example, when the sensitive word "happy" is detected in the first voice signal of the local party, a corresponding laugh can be output.
The preset sensitive word can be stored in the speech model in speech form or in text form. When the preset sensitive word is stored in speech form, whether the preset sensitive word is present can be detected by matching the first voice signal against the speech corresponding to the preset sensitive word. When the preset sensitive word is stored in text form, speech recognition can be performed on the first voice signal to obtain text, and whether the preset sensitive word is present can be detected by matching the recognized text against the text corresponding to the preset sensitive word.
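The text-form matching described above can be illustrated with a short sketch. This is only an illustration under assumptions, not the patented implementation: it presumes a hypothetical speech-recognition step that returns recognized words together with their sample offsets, and it splices a pre-recorded effect clip in place of each matched sensitive word.

```python
# Minimal sketch of sensitive-word replacement (assumed helper names).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RecognizedWord:
    text: str          # recognized word, e.g. "sleepy"
    start: int         # first sample index of the word in the signal
    end: int           # one past the last sample index of the word

def apply_sensitive_word_model(
    first_signal: List[float],
    words: List[RecognizedWord],          # output of an ASR step (assumed), in time order
    effects: Dict[str, List[float]],      # preset sensitive word -> special-effect audio clip
) -> List[float]:
    """Build the second voice signal by replacing matched words with effect clips."""
    second_signal: List[float] = []
    cursor = 0
    for word in words:
        if word.text in effects:
            second_signal.extend(first_signal[cursor:word.start])  # keep untouched audio
            second_signal.extend(effects[word.text])               # splice in the effect
            cursor = word.end                                      # skip the original word
    second_signal.extend(first_signal[cursor:])
    return second_signal

# Example: the word "sleepy" is replaced by a yawning sound clip.
# second = apply_sensitive_word_model(samples, asr_words, {"sleepy": yawn_clip})
```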
In another possible implementation, the preset speech model comprises a speech model for adjusting sound characteristics. The sound characteristics may include the loudness, pitch, frequency, amplitude, and the like of the sound. In this implementation, the speech model for adjusting sound characteristics can be used to adjust the sound characteristics of the first voice signal, to obtain a second voice signal with adjusted sound characteristics.
For example, the speech model for adjusting sound characteristics can be used to adjust the sound intensity, pitch, frequency, amplitude, and so on of the first voice signal of the local party.
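As a rough sketch of such an adjustment (the function and parameter names are assumptions, and the naive resampling used here for the pitch change also alters the duration, unlike a production pitch shifter):

```python
# Sketch of adjusting loudness and pitch of the first voice signal.
import numpy as np

def adjust_sound_characteristics(first_signal: np.ndarray,
                                 gain: float = 1.0,
                                 pitch_factor: float = 1.0) -> np.ndarray:
    """Return a second voice signal with scaled loudness and shifted pitch."""
    louder = np.clip(first_signal * gain, -1.0, 1.0)       # adjust loudness/amplitude
    n = len(louder)
    # Naive resample: pitch_factor > 1 raises pitch (signal plays back faster),
    # pitch_factor < 1 lowers it; duration changes as a side effect.
    new_len = int(n / pitch_factor)
    old_idx = np.linspace(0, n - 1, num=new_len)
    second_signal = np.interp(old_idx, np.arange(n), louder)
    return second_signal

# e.g. a deeper, quieter voice: adjust_sound_characteristics(x, gain=0.8, pitch_factor=0.85)
```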
In yet another possible implementation, the preset speech model may comprise a male-voice speech model and/or a female-voice speech model, and so on. In this implementation, the user can transform his or her own voice into a more masculine voice or a softer, more feminine voice, achieving a better call effect.
In yet another possible implementation, the preset speech model may be a speech model implemented by a voice transformation algorithm based on a harmonic sinusoidal model.
In step 230, the second voice signal is transmitted to the other party.
It should be noted that the embodiments of the present disclosure can be applied to transforming the voice at the calling end when the user actively places a call, and can also be applied to transforming the voice at the called end when the user passively receives a call. Therefore, the local party in the embodiments of the present disclosure may refer to the calling end that places the call, or to the called end that receives the call. Correspondingly, the other party may refer to the called end that receives the call, or to the calling end that places the call.
In summary, because the voice signal transmitted during the call is the signal obtained after voice transformation, what the other party hears is the transformed voice; the call effect desired by the user is achieved, the caller's personalized needs for the call voice are satisfied, and the user experience is enhanced.
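Steps 210 to 230 can be summarized in a brief sketch; the capture and transmission hooks below are assumptions that stand in for whatever the terminal platform actually provides.

```python
# End-to-end sketch of acquire -> transform -> transmit (assumed hooks).
from typing import Callable, Sequence

VoiceSignal = Sequence[float]
SpeechModel = Callable[[VoiceSignal], VoiceSignal]

def handle_call_audio(capture_local_voice: Callable[[], VoiceSignal],
                      send_to_peer: Callable[[VoiceSignal], None],
                      preset_model: SpeechModel) -> None:
    first_signal = capture_local_voice()        # step 210: acquire the local party's voice
    second_signal = preset_model(first_signal)  # step 220: voice transformation
    send_to_peer(second_signal)                 # step 230: transmit to the other party
```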
Fig. 3 is a flowchart of a method for calling according to another exemplary embodiment. This embodiment is described by taking as an example the application of the method to terminal device 110 or terminal device 120 shown in Fig. 1. The method may comprise:
In step 300, a plurality of speech models is provided.
For example, the plurality of speech models may be stored locally on the terminal device on which the method of the embodiments of the present disclosure runs. When the terminal device enters the call stage, the plurality of speech models can be read from local storage and presented, for instance by voice prompt or on-screen display, so that the user can input a selection instruction, whereupon the terminal device receives the selection instruction by which the user selects one or more speech models from the plurality of speech models.
In step 301, a selection instruction that selects one or more speech models from the plurality of speech models is received.
For example, for a plurality of speech models presented by voice prompt, the user can perform a selection operation and thereby input the selection instruction in the manner indicated by the voice prompt, for instance by pressing a button specified in the prompt.
For another example, for a plurality of speech models presented on screen, the user can perform a selection operation and thereby input the selection instruction in the manner indicated by the display; on a touch screen, the selection instruction can be input by directly pressing a virtual button, or it can be input by pressing a designated physical button.
In step 302, the selected one or more speech models are set as the preset speech model according to the selection instruction.
It should be noted that one or more preset speech models may be used to perform voice transformation on the first voice signal in the embodiments of the present disclosure. When the preset speech model consists of a plurality of speech models, they can each perform voice transformation on the first voice signal in sequence.
In one possible implementation, the speech model selected by the selection instruction received most recently before the current time can be set as the preset speech model. In this implementation, because the speech model selected by the most recently received selection instruction is set as the preset speech model, a user who has selected an unsatisfactory speech model can simply select another one, and the previously selected, unsatisfactory speech model is discarded automatically. This both saves computing resources of the terminal device and spares the user extra operations such as cancelling a selection, so the execution efficiency is higher.
In another possible implementation, the plurality of speech models selected by the selection instruction can be set as preset speech models that respectively perform voice transformation on the first voice signal in sequence. In this implementation, the user can transform his or her own voice in several respects as needed, which better meets the user's requirements.
For example, the preset sensitive word and its corresponding special-effect voice selected by the selection instruction, together with the speech model for adjusting sound characteristics, can be set as preset speech models that perform voice transformation on the first voice signal in sequence. Thus, when it is detected that the first voice signal contains the preset sensitive word, the preset sensitive word contained in the first voice signal is first transformed into the corresponding special-effect voice, yielding a voice signal containing the special-effect voice; afterwards, the speech model for adjusting sound characteristics is used to adjust the sound characteristics of that voice signal, finally obtaining the second voice signal with adjusted sound characteristics.
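A small sketch of this sequential application is given below; the model names and their order are illustrative only.

```python
# Applying several selected speech models in turn (assumed model callables).
from functools import reduce
from typing import Callable, List, Sequence

VoiceSignal = Sequence[float]
SpeechModel = Callable[[VoiceSignal], VoiceSignal]

def transform_with_models(first_signal: VoiceSignal,
                          preset_models: List[SpeechModel]) -> VoiceSignal:
    """Apply each preset speech model in sequence to obtain the second voice signal."""
    return reduce(lambda signal, model: model(signal), preset_models, first_signal)

# e.g. second = transform_with_models(first, [sensitive_word_model, sound_feature_model])
```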
For another example, the method may also cancel the setting of the preset speech model when a designated button is triggered. In one possible implementation, each speech model stored locally on the terminal device can have a "use" attribute. In that case, setting the selected one or more speech models as the preset speech model according to the selection instruction can be implemented by updating the value of the use attribute of the speech model selected by the selection instruction to valid, while the use attribute of any speech model that is not set as a preset speech model remains invalid. After the first voice signal of the local party is acquired, the speech models whose use attribute is valid are looked up, and the speech models found are used to perform voice transformation on the first voice signal. In this implementation, when the designated button is triggered, the use attribute corresponding to the speech model selected by the previous selection instruction is updated to invalid, and if the user later wants to perform voice transformation again, a preset speech model has to be reselected. The designated button may be, for example, a physical button on the terminal device to which the method of the embodiments of the present disclosure is applied, or a virtual button on the touch display interface of that terminal device. In this implementation, because the user can cancel the setting of the preset speech model, the user can reselect the desired speech model or abandon the voice transformation altogether; the user's freedom of choice is greater, the operation is more flexible, and the method is convenient to use.
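The bookkeeping of such a use attribute might look like the following sketch, in which the class and method names are assumptions and only the most recently selected model is kept valid.

```python
# Sketch of a speech-model registry with a per-model "use" attribute.
from typing import Dict, Optional

class SpeechModelRegistry:
    def __init__(self, model_names):
        # False means the model is not set as the preset speech model.
        self._use_attribute: Dict[str, bool] = {name: False for name in model_names}

    def select(self, name: str) -> None:
        """Selection instruction: only the most recent choice stays valid."""
        for key in self._use_attribute:
            self._use_attribute[key] = False
        self._use_attribute[name] = True

    def cancel(self) -> None:
        """Designated button triggered: cancel the preset speech model."""
        for key in self._use_attribute:
            self._use_attribute[key] = False

    def preset_model(self) -> Optional[str]:
        """Return the model whose use attribute is valid, if any."""
        for name, valid in self._use_attribute.items():
            if valid:
                return name
        return None
```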
In step 310, during a call with the other party, a first voice signal of the local party is acquired.
In step 320, voice transformation is performed on the first voice signal by using the preset speech model, to obtain a second voice signal.
In step 330, the second voice signal is transmitted to the other party.
In this embodiment, because a plurality of speech models is provided and the user can input a selection instruction that selects one or more of them to be set as the preset speech model, the user can flexibly choose the desired voice transformation according to his or her own preferences and needs, so that the other party hears the intended sound effect, and the user experience is better.
It should be noted that the method provided by the embodiments of the present disclosure can be applied to wired or wireless telephones without a display screen, and can also be applied to any terminal device with a display screen, such as a smart phone or a tablet computer. Below, the method provided by the embodiments of the present disclosure is described in detail by taking its application to a terminal device equipped with a display screen as an example.
Fig. 4 is a flowchart of a method for calling according to another exemplary embodiment. This embodiment is described by taking as an example the application of the method to terminal device 110 or terminal device 120 shown in Fig. 1. The method may comprise:
In step 400, a voice-processing button is displayed in the call interface.
For example, voice-processing button 501 can be displayed in the call interface shown in Fig. 5.
In step 401, when a click instruction on the voice-processing button is received, a dialog box containing the plurality of speech model options is popped up.
For example, as shown in Fig. 5, when a click instruction on voice-processing button 501 is received, dialog box 502 containing a plurality of speech model options can be popped up.
In step 402, a selection instruction that selects a speech model from the plurality of speech model options is received through the dialog box.
In step 403, according to the selection instruction, the speech model selected by the selection instruction received most recently before the current time is set as the preset speech model. For example, the use attribute of the speech model selected by the selection instruction can be updated to valid.
In step 404, when a click instruction on the voice-processing button is received again after step 401, the setting of the preset speech model is cancelled. For example, any use attribute whose value is valid can be updated to invalid.
For example, after the click instruction on the voice-processing button is received in step 401, the voice-processing button can be rendered in a selected state; when a click instruction on the selected voice-processing button is received again, the button can be rendered in an unselected state and the setting of the preset speech model is cancelled accordingly, which is equivalent to the user having no preset speech model. In this implementation, because two successive clicks on the same voice-processing button switch between popping up the speech model options dialog box and cancelling the preset speech model, the call interface is more concise, the operation is more convenient for the user, and waste of terminal device resources is avoided.
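A toy sketch of this toggle behaviour is shown below; the two callbacks are assumptions that an actual terminal would wire to its own UI framework.

```python
# Sketch of the voice-processing button toggling between steps 401 and 404.
class VoiceProcessingButton:
    def __init__(self, show_model_dialog, cancel_preset_model):
        self._selected = False
        self._show_model_dialog = show_model_dialog
        self._cancel_preset_model = cancel_preset_model

    def on_click(self) -> None:
        if not self._selected:
            self._show_model_dialog()    # step 401: pop up the dialog with model options
            self._selected = True
        else:
            self._cancel_preset_model()  # step 404: cancel the preset speech model
            self._selected = False
```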
In step 410, during a call with the other party, a first voice signal of the local party is acquired.
In step 420, voice transformation is performed on the first voice signal by using the preset speech model, to obtain a second voice signal.
In step 430, the second voice signal is transmitted to the other party.
In this embodiment, because a voice-processing button is displayed in the call interface, the user can select a speech model through the dialog box containing the plurality of speech model options; this display mode is intuitive and more convenient for the user to operate.
Fig. 6 is a flowchart of a method for calling according to another exemplary embodiment. This embodiment is described by taking as an example the application of the method to terminal device 110 or terminal device 120 shown in Fig. 1. The method may comprise:
In step 610, during a call with the other party, the first voice signal of the local party within a preset duration is acquired at intervals of the preset duration.
For example, every 2 seconds, the voice signal input by the local party during those 2 seconds can be acquired. For instance, starting at the 3rd second of the call, the voice signal input by the local party during the 1st and 2nd seconds of the call is acquired; starting at the 5th second of the call, the voice signal input by the local party during the 3rd and 4th seconds is acquired; and so on, until all of the voice signal input by the local party before the end of the call has been acquired.
It should be noted that the embodiments of the present disclosure do not limit the length of the preset duration. It will be understood, however, that the preset duration should not be too long, so as not to cause excessive delay in outputting the second voice signal.
In step 620, voice transformation is performed on the first voice signal within the preset duration by using the preset speech model, to obtain a second voice signal.
For example, voice transformation can be performed separately on the first voice signals of different preset durations, in the order in which they were acquired: at intervals of the preset duration, the voice transformation of the first voice signal acquired during that preset duration is performed. For instance, starting at the 3rd second of the call, the preset speech model is used to transform the voice signal input by the local party during the 1st and 2nd seconds of the call, obtaining one part of the second voice signal; starting at the 5th second of the call, the voice signal input during the 3rd and 4th seconds is transformed, obtaining the next part of the second voice signal; and so on, until all of the voice signal input by the local party before the end of the call has been transformed.
For another example, when performing voice transformation on the first voice signals of different preset durations, the voice transformation of the next acquired first voice signal can also be performed immediately after the transformation of the previously acquired first voice signal is finished.
In step 630, the second voice signal is transmitted to the other party.
For example, each part of the second voice signal can be transmitted to the other party as soon as it is obtained by the transformation; alternatively, if the previous part of the second voice signal has not yet been transmitted, the newly obtained part can be stored locally for the time being and transmitted after the transmission of the previous part is completed, so that the second voice signal output to the other party is continuous.
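A simplified sketch of such a chunked pipeline is given below; the capture, transform, and send callables are assumptions, and the thread is just one possible way to overlap acquisition of the current interval with processing of the previous one.

```python
# Sketch of interval-based (chunked) voice transformation during a call.
import queue
import threading
from typing import Callable, Optional, Sequence

VoiceSignal = Sequence[float]

def run_chunked_call(capture_chunk: Callable[[float], VoiceSignal],   # records one preset-duration chunk
                     transform: Callable[[VoiceSignal], VoiceSignal], # the preset speech model
                     send_to_peer: Callable[[VoiceSignal], None],
                     call_active: Callable[[], bool],
                     preset_duration_s: float = 2.0) -> None:
    raw_chunks: "queue.Queue[Optional[VoiceSignal]]" = queue.Queue()

    def recorder() -> None:
        # Keep acquiring the local party's first voice signal, one interval at a time.
        while call_active():
            raw_chunks.put(capture_chunk(preset_duration_s))
        raw_chunks.put(None)   # sentinel: end of conversation

    threading.Thread(target=recorder, daemon=True).start()

    while True:
        first_chunk = raw_chunks.get()
        if first_chunk is None:
            break
        # While the recorder captures the next interval, the previous interval is
        # transformed and sent, so the second voice signal stays continuous.
        second_chunk = transform(first_chunk)
        send_to_peer(second_chunk)
```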
In this embodiment, because the first voice signal of the local party within the preset duration is acquired at intervals of the preset duration during the call and transformed by the preset speech model to obtain the second voice signal, the voice transformation of the first voice signal of a previous time interval can be performed while the first voice signal of the current time interval is being received. This improves the execution efficiency of the voice transformation, reduces the delay in outputting the second voice signal, and yields a continuous call effect.
Below, another possible implementation of the embodiments of the present disclosure is described in detail with reference to the embodiments above.
Fig. 7 is a flowchart of a method for calling according to another exemplary embodiment. This embodiment is described by taking as an example the application of the method to terminal device 110 or terminal device 120 shown in Fig. 1. The method may comprise:
In step 700, a voice-processing button is displayed in the call interface.
In step 701, when a click instruction on the voice-processing button is received, a dialog box containing a plurality of speech model options is popped up.
In step 702, a selection instruction that selects a speech model from the plurality of speech model options is received through the dialog box.
In step 703, according to the selection instruction, the speech model selected by the selection instruction received most recently before the current time is set as the preset speech model.
In step 704, when a click instruction on the voice-processing button is received again after step 701, the setting of the preset speech model is cancelled.
In step 710, during a call with the other party, the first voice signal of the local party within a preset duration is acquired at intervals of the preset duration.
In step 720, voice transformation is performed on the first voice signal within the preset duration by using the preset speech model, to obtain a second voice signal.
In step 730, the second voice signal is transmitted to the other party.
In this embodiment, because a plurality of speech models is provided through the call interface for the user to select the preset speech model, and the first voice signal of the local party within a preset duration is acquired at intervals of the preset duration during the call with the other party, the method is not only convenient to operate but also executes the voice transformation efficiently, so the call effect is good.
Fig. 8 is a block diagram of a device for calling according to an exemplary embodiment. This embodiment is described by taking as an example the configuration of the device in terminal device 110 or terminal device 120 shown in Fig. 1. The device may comprise an acquisition module 810, a conversion module 820, and a transmission module 830.
The acquisition module 810 can be configured to acquire a first voice signal of the local party during a call with the other party.
The conversion module 820 can be configured to perform voice transformation on the first voice signal acquired by the acquisition module 810 by using a preset speech model, to obtain a second voice signal.
The transmission module 830 can be configured to transmit the second voice signal obtained by the conversion module 820 to the other party.
In one possible implementation, the preset speech model can comprise a preset sensitive word and a special-effect voice corresponding to the preset sensitive word. The conversion module 820 can be configured to, when it is detected that the first voice signal contains the preset sensitive word, transform the preset sensitive word contained in the first voice signal acquired by the acquisition module 810 into the special-effect voice corresponding to the preset sensitive word, to obtain a second voice signal containing the special-effect voice.
In another possible implementation, the preset speech model comprises a speech model for adjusting sound characteristics. The conversion module 820 can be configured to adjust, by using the speech model for adjusting sound characteristics, the sound characteristics of the first voice signal acquired by the acquisition module 810, to obtain a second voice signal with adjusted sound characteristics.
Because the acquisition module 810 acquires the first voice signal of the local party during the call with the other party, the conversion module 820 performs voice transformation on the first voice signal by using the preset speech model to obtain the second voice signal, and the transmission module 830 transmits the second voice signal to the other party, the voice signal transmitted during the call is the signal obtained after voice transformation, and what the other party hears is the transformed voice. The call effect desired by the user is thus achieved, the caller's personalized needs for the call voice are satisfied, and the user experience is enhanced.
Fig. 9 is a block diagram of a device for calling according to another exemplary embodiment. This embodiment is described by taking as an example the configuration of the device in terminal device 110 or terminal device 120 shown in Fig. 1. As shown in Fig. 9, the device may further comprise a model providing module 840, a selection receiving module 850, and a model presetting module 860.
The model providing module 840 can be configured to provide a plurality of speech models.
The selection receiving module 850 can be configured to receive a selection instruction that selects one or more speech models from the plurality of speech models provided by the model providing module 840.
The model presetting module 860 can be configured to set, according to the selection instruction received by the selection receiving module 850, the selected one or more speech models as the preset speech model.
In one possible implementation, the model presetting module 860 can be configured to set, as the preset speech model, the speech model selected by the selection instruction received by the selection receiving module 850 most recently before the current time.
In another possible implementation, the model presetting module 860 can be configured to set the plurality of speech models selected by the selection instruction as preset speech models that respectively perform voice transformation on the first voice signal in sequence.
In one possible implementation, as shown in Fig. 9, the model providing module 840 can comprise a button display submodule 841 and an option popup submodule 842.
The button display submodule 841 can be configured to display a voice-processing button in the call interface.
The option popup submodule 842 can be configured to pop up a dialog box containing the plurality of speech model options when a click instruction on the voice-processing button displayed by the button display submodule 841 is received.
In this implementation, the selection receiving module 850 can be configured to receive, through the dialog box, a selection instruction that selects a speech model from the plurality of speech model options.
In one possible implementation, the acquisition module 810 can be configured to acquire, during the call with the other party and at intervals of a preset duration, the first voice signal of the local party within the preset duration. The conversion module 820 can be configured to perform voice transformation on the first voice signal acquired by the acquisition module 810 within the preset duration by using the preset speech model, to obtain the second voice signal.
In one possible implementation, the speech model can comprise a male-voice speech model and/or a female-voice speech model.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 10 is a block diagram of a device 1000 for calling according to another exemplary embodiment. For example, the device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 10, the device 1000 may comprise one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, and a communication component 1016.
The processing component 1002 generally controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1002 may comprise one or more processors 1020 to execute instructions, so as to complete all or part of the steps of the above method for calling. In addition, the processing component 1002 may comprise one or more modules to facilitate interaction between the processing component 1002 and the other components. For example, the processing component 1002 may comprise a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support the operation of the device 1000. Examples of such data include instructions for any application or method operating on the device 1000, contact data, phonebook data, messages, pictures, video, and so on. The memory 1004 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 1006 provides power to the various components of the device 1000. The power component 1006 may comprise a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 comprises a screen that provides an output interface between the device 1000 and the user. In some embodiments, the screen may comprise a liquid crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1008 comprises a front camera and/or a rear camera. When the device 1000 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 comprises a microphone (MIC), which is configured to receive external audio signals when the device 1000 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 1004 or sent via the communication component 1016. In some embodiments, the audio component 1010 also comprises a loudspeaker for outputting audio signals.
The I/O interface 1012 provides an interface between the processing component 1002 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and so on. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1014 comprises one or more sensors for providing status assessments of various aspects of the device 1000. For example, the sensor component 1014 can detect the open/closed state of the device 1000 and the relative positioning of components, such as the display and the keypad of the device 1000; the sensor component 1014 can also detect a change in position of the device 1000 or of a component of the device 1000, the presence or absence of contact between the user and the device 1000, the orientation or acceleration/deceleration of the device 1000, and a change in temperature of the device 1000. The sensor component 1014 may comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1014 may also comprise an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1014 may also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the device 1000 and other equipment. The device 1000 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1016 also comprises a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1000 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method for calling.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, for example the memory 1004 comprising instructions, which can be executed by the processor 1020 of the device 1000 to perform the above method for calling. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will easily conceive of other embodiments of the present disclosure after considering the specification and practicing the disclosure. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common general knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (17)

1. A method for calling, characterized in that it comprises:
during a call with the other party, acquiring a first voice signal of the local party;
performing voice transformation on the first voice signal by using a preset speech model, to obtain a second voice signal; and
transmitting the second voice signal to the other party.
2. The method according to claim 1, characterized in that the preset speech model comprises: a preset sensitive word and a special-effect voice corresponding to the preset sensitive word;
and performing voice transformation on the first voice signal by using the preset speech model, to obtain the second voice signal, comprises:
when it is detected that the first voice signal contains the preset sensitive word, transforming the preset sensitive word contained in the first voice signal into the special-effect voice corresponding to the preset sensitive word, to obtain a second voice signal containing the special-effect voice.
3. The method according to claim 1, characterized in that the preset speech model comprises: a speech model for adjusting sound characteristics;
and performing voice transformation on the first voice signal by using the preset speech model, to obtain the second voice signal, comprises:
adjusting the sound characteristics of the first voice signal by using the speech model for adjusting sound characteristics, to obtain a second voice signal with adjusted sound characteristics.
4. The method according to claim 1, characterized in that the method further comprises:
providing a plurality of speech models;
receiving a selection instruction that selects one or more speech models from the plurality of speech models; and
setting, according to the selection instruction, the selected one or more speech models as the preset speech model.
5. The method according to claim 4, characterized in that setting, according to the selection instruction, the selected speech model as the preset speech model comprises:
setting, as the preset speech model, the speech model selected by the selection instruction received most recently before the current time.
6. The method according to claim 4, characterized in that setting, according to the selection instruction, the selected plurality of speech models as the preset speech model comprises:
setting the plurality of speech models selected by the selection instruction as preset speech models that respectively perform voice transformation on the first voice signal in sequence.
7. The method according to claim 4, characterized in that providing the plurality of speech models comprises: displaying a voice-processing button in the call interface and, when a click instruction on the voice-processing button is received, popping up a dialog box containing options for the plurality of speech models;
and receiving the selection instruction that selects one or more speech models from the plurality of speech models comprises: receiving, through the dialog box, a selection instruction that selects a speech model from the plurality of speech model options.
8. The method according to claim 1, characterized in that acquiring the first voice signal of the local party during the call with the other party comprises:
during the call with the other party, acquiring, at intervals of a preset duration, the first voice signal of the local party within the preset duration;
and performing voice transformation on the first voice signal by using the preset speech model, to obtain the second voice signal, comprises:
performing voice transformation on the first voice signal within the preset duration by using the preset speech model, to obtain the second voice signal.
9. the device for conversing, is characterized in that, comprising:
Acquisition module, is configured to when conversing with the other side, obtains first voice signal of we;
Conversion module, is configured to utilize preset speech model, carries out phonetic modification, obtain the second voice signal to the first voice signal that described acquisition module obtains;
Transport module, is configured to transmit to the other side the second voice signal that described conversion module obtains.
10. The device according to claim 9, wherein the preset speech model comprises preset sensitive vocabulary and a special-effect voice corresponding to the preset sensitive vocabulary;
and wherein the conversion module is configured to, when it is detected that the first voice signal contains the preset sensitive vocabulary, transform the preset sensitive vocabulary contained in the first voice signal into the special-effect voice corresponding to the preset sensitive vocabulary, so as to obtain a second voice signal containing the special-effect voice.
11. The device according to claim 9, wherein the preset speech model comprises a speech model for adjusting sound characteristics;
and wherein the conversion module is configured to adjust the sound characteristics of the first voice signal by using the speech model for adjusting sound characteristics, to obtain a second voice signal with the adjusted sound characteristics.
12. The device according to claim 9, further comprising:
a model providing module configured to provide a plurality of speech models;
a selection receiving module configured to receive a selection instruction for choosing one or more speech models from the plurality of speech models provided by the model providing module; and
a model presetting module configured to set the chosen one or more speech models as the preset speech model according to the selection instruction received by the selection receiving module.
13. The device according to claim 12, wherein the model presetting module is configured to set the speech model chosen by the most recently received selection instruction as the preset speech model.
14. The device according to claim 12, wherein the model presetting module is configured to set the plurality of speech models chosen by the selection instruction as preset speech models that respectively perform voice transformation on the first voice signal in turn.
15. The device according to claim 12, wherein the model providing module comprises:
a button display submodule configured to display a voice processing button in the call interface; and
an option pop-up submodule configured to pop up a dialog box containing the plurality of speech model options when a click instruction on the voice processing button displayed by the button display submodule is received;
wherein the selection receiving module is configured to receive, through the dialog box, a selection instruction for choosing a speech model from the plurality of speech model options.
16. The device according to claim 9, wherein the acquisition module is configured to obtain, at intervals of a preset duration during the call with the other party, the first voice signal of the local party within the preset duration;
and wherein the conversion module is configured to perform voice transformation, by using the preset speech model, on the first voice signal obtained by the acquisition module within the preset duration, to obtain the second voice signal.
17. A device for calling, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain a first voice signal of the local party during a call with the other party;
perform voice transformation on the first voice signal by using a preset speech model, to obtain a second voice signal; and
transmit the second voice signal to the other party.
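
As an illustration of the sensitive-vocabulary substitution in claims 2 and 10, the following Kotlin sketch replaces each recognized sensitive word with a 1 kHz beep. It is only a sketch under stated assumptions: the `Word` alignment type, the choice of a beep as the special-effect voice, and the float sample-buffer representation are all assumptions, since the claims do not prescribe a recognizer, effect, or audio format.

```kotlin
import kotlin.math.PI
import kotlin.math.sin

// Hypothetical alignment of a recognized word to a span of samples in the first voice signal.
data class Word(val text: String, val startSample: Int, val endSample: Int)

// One possible "special-effect voice": a 1 kHz beep of the requested length.
fun beep(lengthSamples: Int, sampleRate: Int = 8000): FloatArray =
    FloatArray(lengthSamples) { i -> 0.5f * sin(2.0 * PI * 1000.0 * i / sampleRate).toFloat() }

// Replace every preset sensitive word in the first voice signal with its special-effect
// sound, yielding the second voice signal that would be transmitted to the other party.
fun maskSensitiveWords(
    firstSignal: FloatArray,
    recognizedWords: List<Word>,
    sensitiveVocabulary: Set<String>
): FloatArray {
    val secondSignal = firstSignal.copyOf()
    for (word in recognizedWords) {
        if (word.text.lowercase() !in sensitiveVocabulary) continue
        val start = word.startSample.coerceIn(0, secondSignal.size)
        val end = word.endSample.coerceIn(start, secondSignal.size)
        beep(end - start).copyInto(secondSignal, destinationOffset = start)
    }
    return secondSignal
}
```

The beep is simply one familiar masking effect (compare the cited non-patent references on masking profanity); any other stored special-effect voice could be written into the same sample range.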
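Claims 3 and 11 cover adjusting sound characteristics of the first voice signal. One crude way to picture a single characteristic (pitch) is linear-interpolation resampling, sketched below; real voice changers typically use more elaborate techniques such as PSOLA or phase vocoders, and the claims do not specify any particular method, so this is illustrative only.

```kotlin
// Naive pitch adjustment by linear-interpolation resampling.
// factor > 1.0 raises the pitch (and shortens the signal); factor < 1.0 lowers it.
fun adjustPitch(firstSignal: FloatArray, factor: Double): FloatArray {
    require(factor > 0.0) { "pitch factor must be positive" }
    val outLength = (firstSignal.size / factor).toInt()
    return FloatArray(outLength) { i ->
        val pos = i * factor
        val idx = pos.toInt().coerceIn(0, firstSignal.size - 1)
        val frac = (pos - idx).toFloat()
        val a = firstSignal[idx]
        val b = firstSignal[(idx + 1).coerceAtMost(firstSignal.size - 1)]
        a + (b - a) * frac  // linear interpolation between neighbouring samples
    }
}
```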
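Claims 4, 5, 7, 12, 13 and 15 describe offering several speech models in the call interface and keeping, as the preset speech model, whichever one the latest selection instruction chose. A minimal holder for that "most recent selection wins" rule might look like the sketch below; the class name, the string model identifiers, and the method names are assumptions, not terms from the patent.

```kotlin
// A tiny model-selection holder: the preset speech model is whichever model the most
// recently received selection instruction chose (claims 5/13).
class SpeechModelSelector(private val availableModels: List<String>) {
    var presetModel: String? = null
        private set

    // Called each time a selection instruction arrives, e.g. from the dialog box that
    // pops up after the voice-processing button in the call interface is tapped.
    fun onSelectionInstruction(chosenModel: String) {
        require(chosenModel in availableModels) { "unknown speech model: $chosenModel" }
        presetModel = chosenModel   // the latest selection always wins
    }
}
```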
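Finally, claims 6, 8, 14 and 16 describe capturing the local party's voice in windows of a preset duration and running each window through the chosen speech models in turn before transmission. The loop below sketches that flow under assumed interfaces; `VoiceSource`, `SpeechModel`, `Uplink` and `runCallLoop` are hypothetical names standing in for the microphone, the preset speech models and the call uplink, none of which the patent names.

```kotlin
// Hypothetical stand-ins for the microphone, a speech model and the call uplink.
fun interface VoiceSource { fun read(samples: Int): FloatArray? }   // returns null when the call ends
fun interface SpeechModel { fun transform(signal: FloatArray): FloatArray }
fun interface Uplink { fun send(signal: FloatArray) }

// Every presetDurationMs, take the first voice signal captured within that window,
// run it through the chosen speech models in turn, and transmit the resulting
// second voice signal to the other party.
fun runCallLoop(
    source: VoiceSource,
    chosenModels: List<SpeechModel>,
    uplink: Uplink,
    presetDurationMs: Int,
    sampleRate: Int = 8000
) {
    val samplesPerWindow = sampleRate * presetDurationMs / 1000
    while (true) {
        val firstSignal = source.read(samplesPerWindow) ?: break
        // Chain the chosen models: each model's output feeds the next one.
        val secondSignal = chosenModels.fold(firstSignal) { signal, model -> model.transform(signal) }
        uplink.send(secondSignal)
    }
}
```

Processing in fixed windows keeps latency bounded to roughly one preset duration, which is why the claims tie acquisition and transformation to the same window.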
CN201510770291.7A 2015-11-12 2015-11-12 Method and device for calling Pending CN105448300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510770291.7A CN105448300A (en) 2015-11-12 2015-11-12 Method and device for calling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510770291.7A CN105448300A (en) 2015-11-12 2015-11-12 Method and device for calling

Publications (1)

Publication Number Publication Date
CN105448300A true CN105448300A (en) 2016-03-30

Family

ID=55558406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510770291.7A Pending CN105448300A (en) 2015-11-12 2015-11-12 Method and device for calling

Country Status (1)

Country Link
CN (1) CN105448300A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030024721A (en) * 2003-01-28 2003-03-26 배명진 A Soft Sound Method to Warmly Playback Sounds Recorded from Voice-Pen.
CN1652622A (en) * 2004-02-02 2005-08-10 英华达股份有限公司 Mobile communication apparatus with changing telephone telesound function and related method
CN102984370A (en) * 2012-11-20 2013-03-20 浙江大学 Method for voice-changing call under wireless network and based on Android
CN103856390A (en) * 2012-12-04 2014-06-11 腾讯科技(深圳)有限公司 Instant messaging method and system, messaging information processing method and terminals
CN103903627A (en) * 2012-12-27 2014-07-02 中兴通讯股份有限公司 Voice-data transmission method and device
CN104104793A (en) * 2014-06-30 2014-10-15 百度在线网络技术(北京)有限公司 Audio processing method and device
CN104299622A (en) * 2014-09-23 2015-01-21 深圳市金立通信设备有限公司 Audio processing method
CN104299619A (en) * 2014-09-29 2015-01-21 广东欧珀移动通信有限公司 Method and device for processing audio file
CN104469693A (en) * 2014-12-11 2015-03-25 北京奇虎科技有限公司 Group message publishing method and device
CN105049646A (en) * 2015-07-13 2015-11-11 宇龙计算机通信科技(深圳)有限公司 Voice change conversation method, device and terminal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Anonymous: "How is the 'bleep' or 'beep' produced when masking sensitive words or profanity?", Zhihu, HTTPS://WWW.ZHIHU.COM/QUESTION/23152636?SORT=CREATED *
Anonymous: "I recorded a video in which someone said a swear word", Baidu Zhidao, HTTPS://ZHIDAO.BAIDU.COM/QUESTION/1541421842269360267.HTML *
None: "Phone Voice Changer V7.0, Android version", HTTPS://WWW.CR173.COM/SOFT/139104.HTML *
None: "Call Voice Changer (Magic Voice)", HTTP://WWW.962.NET/AZGAME/40708.HTML *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107508997A (en) * 2017-09-29 2017-12-22 深圳市云中飞网络科技有限公司 Call control method, call control apparatus and mobile terminal
CN107886963A (en) * 2017-11-03 2018-04-06 珠海格力电器股份有限公司 Voice processing method and device and electronic equipment
CN107886963B (en) * 2017-11-03 2019-10-11 珠海格力电器股份有限公司 Voice processing method and device and electronic equipment
CN107919138A (en) * 2017-11-30 2018-04-17 维沃移动通信有限公司 Mood processing method and mobile terminal in a kind of voice
CN107919138B (en) * 2017-11-30 2021-01-08 维沃移动通信有限公司 Emotion processing method in voice and mobile terminal
CN108156317A (en) * 2017-12-21 2018-06-12 广东欧珀移动通信有限公司 call voice control method, device and storage medium and mobile terminal
CN108156317B (en) * 2017-12-21 2020-03-10 Oppo广东移动通信有限公司 Call voice control method and device, storage medium and mobile terminal
CN109151366A (en) * 2018-09-27 2019-01-04 惠州Tcl移动通信有限公司 A kind of sound processing method of video calling
CN109151366B (en) * 2018-09-27 2020-09-22 惠州Tcl移动通信有限公司 Sound processing method for video call, storage medium and server

Similar Documents

Publication Publication Date Title
KR101571993B1 (en) Method for voice calling method for voice playing, devices, program and storage medium thereof
CN105262452A (en) Method and apparatus for adjusting volume, and terminal
CN104092836A (en) Power-saving method and apparatus
CN104219388A (en) Voice control method and device
CN104065836A (en) Method and device for monitoring calls
CN104318741A (en) Bluetooth device control method and device
CN105448300A (en) Method and device for calling
CN104836897A (en) Method and device for controlling terminal communication through wearable device
CN105338157A (en) Nuisance call processing method, and device and telephone
CN105532634A (en) Ultrasonic wave mosquito repel method, device and system
CN104219644A (en) Emergency communication method and device
CN104539789A (en) Method and device for prompting call request
CN104539871A (en) Multimedia call method and device
CN104636110A (en) Method and device for controlling volume
CN103957330A (en) Method, device and system for processing calling busying
CN104837154A (en) Wireless access point control method and device
CN104378715A (en) Device and method for lowering earphone POP sound
CN105530381A (en) Call hanging-up method and device
CN105100484A (en) Method, device and system for ending voice call
CN104767857A (en) Telephone calling method and device based on cloud name cards
CN103945065A (en) Message reminding method and device
CN104935729A (en) Audio output method and device
CN104917909A (en) Call-based message-leaving method, device, terminal and server
CN104702756A (en) Detecting method and detecting device for soundless call
CN104506703A (en) Voice message leaving method, voice message leaving device, voice message playing method and voice message playing device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160330

RJ01 Rejection of invention patent application after publication