WO2021098708A1 - Call method and terminal device (通话方法及终端设备) - Google Patents

Call method and terminal device (通话方法及终端设备)

Info

Publication number
WO2021098708A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
call
terminal
voice
call terminal
Prior art date
Application number
PCT/CN2020/129662
Other languages
English (en)
French (fr)
Inventor
张世杰
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 filed Critical 维沃移动通信有限公司
Publication of WO2021098708A1 publication Critical patent/WO2021098708A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72406 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by software upgrading or downloading
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems

Definitions

  • the embodiments of the present invention relate to the field of communication technology, and in particular, to a call method and terminal equipment.
  • when the user is in a call, if the noise in the call environment is high, the call quality will be poor. To improve call quality, the approaches usually adopted at present are to increase the call volume or to move to a place with less noise. However, when the user still cannot hear the other end clearly even with the call volume at its maximum, or cannot move to a quieter place, the call quality still cannot be guaranteed after adopting the above approaches.
  • the embodiments of the present invention provide a call method and terminal equipment to solve the existing problem of poor call quality due to high noise in the call environment.
  • the present invention is implemented as follows:
  • an embodiment of the present invention provides a call method, which is applied to a first call terminal, where the first call terminal includes a voice assistant; the method includes:
  • the target call terminal is the first call terminal
  • the first information is text information
  • the second information is voice information
  • the target call terminal is the second call terminal in a call with the first call terminal
  • the first information is voice information
  • the second information is text information.
  • an embodiment of the present invention also provides a terminal device, the terminal device is a first call terminal, the terminal device includes a voice assistant; the terminal device includes:
  • An obtaining module configured to obtain the first information of the target call terminal through the voice assistant when the voice assistant is turned on;
  • a conversion module configured to convert the first information into second information through the voice assistant
  • An output module for outputting the second information
  • the target call terminal is the first call terminal
  • the first information is text information
  • the second information is voice information
  • the target call terminal is the second call terminal in a call with the first call terminal
  • the first information is voice information
  • the second information is text information.
  • an embodiment of the present invention also provides a terminal device.
  • the terminal device is a first call terminal.
  • the terminal device includes a processor, a memory, and a computer program that is stored in the memory and can run on the processor;
  • when the computer program is executed by the processor, the steps of the call method described above are implemented.
  • an embodiment of the present invention also provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the call method described above are implemented.
  • the first call terminal can use the voice assistant to convert the voice information of the second call terminal into text information, so that the user of the first call terminal can obtain what the user of the second call terminal expresses by viewing the text information.
  • the first call terminal can receive the text information input by the user and convert it into voice information through the voice assistant, so as to transmit the voice information to the second call terminal; in this way, even if it is inconvenient for the user at the first call terminal to speak, the user can still talk with the user at the second call terminal. It can be seen that the embodiments of the present invention can improve call quality.
  • FIG. 1 is one of the flowcharts of a call method provided by an embodiment of the present invention
  • Figure 2 is a schematic diagram of a call page according to an embodiment of the present invention.
  • FIG. 3 is the second flowchart of the call method provided by the embodiment of the present invention.
  • FIG. 4 is the third flowchart of the call method provided by the embodiment of the present invention.
  • FIG. 5 is one of the structural diagrams of a terminal device provided by an embodiment of the present invention.
  • Fig. 6 is a second structural diagram of a terminal device provided by an embodiment of the present invention.
  • "first", "second", etc. in the present invention are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence.
  • the terms “including” and “having” and any variations of them are intended to cover non-exclusive inclusions.
  • a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such a process, method, product, or device.
  • the use of "and/or" in the present invention means at least one of the connected objects; for example, "A and/or B and/or C" covers the seven cases of A alone, B alone, C alone, both A and B, both B and C, both A and C, and A, B, and C together.
  • the call method in the embodiment of the present invention can be applied to the first call terminal.
  • the first call terminal can establish a call with other second call terminals.
  • the expression form of the call may include a telephone call, a voice call, a video call, and so on.
  • the call terminal can be a mobile phone, a tablet computer (Tablet Personal Computer), a wearable device (Wearable Device), and so on.
  • the first talking terminal in the embodiment of the present invention may include, but is not limited to, a voice assistant.
  • the voice assistant has a speech recognition (Automatic Speech Recognition, ASR) function and/or a speech synthesis (Text-To-Speech, TTS) function. Specifically, when the voice recognition function is turned on, the voice assistant can convert voice information into text information; when the voice synthesis function is turned on, the voice assistant can convert text information into voice information.
  • Fig. 1 is one of the flowcharts of the call method provided by the embodiment of the present invention. As shown in Figure 1, the call method may include the following steps:
  • Step 101 When the voice assistant is turned on, obtain the first information of the target calling terminal through the voice assistant.
  • the first talking terminal can start the voice assistant when any of the following conditions is met:
  • the first condition: an input on the auxiliary call control is received;
  • the second condition: an input on the physical artificial intelligence (Artificial Intelligence, AI) key is received.
  • the auxiliary call control may be displayed on the incoming call page and/or the call page. As shown in FIG. 2, the auxiliary call control 21 is displayed on the call page 22.
  • the target call terminal may be: the first call terminal; or, the second call terminal that talks with the first call terminal.
  • the expression form of the first information is related to the expression form of the target call terminal.
  • the first information is text information.
  • the user inputs text information on the screen of the first call terminal.
  • the first information is voice information.
  • the voice assistant can obtain the voice information of the second call terminal in the following two ways.
  • the voice assistant can obtain the voice information of the second call terminal through the earpiece.
  • the voice assistant directly obtains the electrical signal of the voice information from the earpiece.
  • the voice assistant can obtain the voice information of the second call terminal through the microphone. In this implementation mode, the voice assistant directly obtains the voice signal from the microphone.
  • Step 102 Transform the first information into second information through the voice assistant.
  • the voice assistant completes the conversion of text information to voice information through the voice synthesis function.
  • the second information is text information.
  • the voice assistant completes the conversion of voice information to text information through the voice recognition function.
  • Step 103 Output the second information; in the case where the target call terminal is the first call terminal, the first information is text information, and the second information is voice information; and/or In the case where the target call terminal is a second call terminal that is talking with the first call terminal, the first information is voice information, and the second information is text information.
  • the second information is voice information.
  • the output of the second information includes: outputting the second information through a target microphone; wherein, the target microphone may be the first microphone in the embodiment of the present invention, or may be another microphone of the first call terminal.
  • the first call terminal can receive the text information input by the user and use the voice assistant to convert it into voice information, so as to transmit the voice information to the second call terminal. In this way, even when it is inconvenient for the user at the first call terminal to speak, the user can still talk to the user at the second call terminal, so call quality can be improved.
  • the second information is text information.
  • the outputting the second information includes: displaying the second information on the screen of the first call terminal.
  • the area of the screen for displaying the second information may be the entire area of the screen, or may be a part of the area of the screen, which may be specifically determined according to actual needs. It should be understood that the present invention does not limit the size and position of the area for displaying the second information on the screen.
  • the first call terminal can use the voice assistant to convert the voice information of the second call terminal into text information and display that text information on the screen, so that the user of the first call terminal can obtain what the user of the second call terminal expresses by viewing the text information; this enriches the ways for the user at the first call terminal to obtain that content, thereby improving call quality.
  • the first talking terminal only enables the voice recognition function of the voice assistant.
  • the user at the first call terminal can obtain what the user at the second call terminal expresses in two ways: 1. listen to the voice information of the second call terminal output through the receiver of the first call terminal; 2. view the text information, converted from the voice information of the second call terminal, displayed on the screen of the first call terminal.
  • the first talking terminal obtains the expression content of the user at the first talking terminal by collecting voice information.
  • the call flow of the first call terminal may include:
  • the second voice information of the first call terminal is collected through the first microphone, and the second voice information is sent.
  • the user of the first talking terminal obtains the expression content of the user of the second talking terminal by listening to the voice information of the second talking terminal output by the earpiece of the first talking terminal.
  • the first talking terminal obtains the expression content of the user at the first talking terminal by acquiring the text information input by the user. And the first talking terminal converts the text information input by the user into voice information through the voice assistant, so as to transmit the voice information to the second talking terminal.
  • in these two scenarios, the speaker of the voice information heard at the second call terminal is essentially different:
  • in the first scenario, the speaker of the voice information is the user of the first call terminal;
  • in the second scenario, the speaker of the voice information is the first call terminal itself.
  • the call flow of the first call terminal may include:
  • the fourth voice information received from the second talking terminal is output.
  • the first talking terminal turns on the voice recognition function and voice synthesis function of the voice assistant.
  • the user at the first call terminal can obtain what the user at the second call terminal expresses in two ways: 1. listen to the voice information of the second call terminal output through the receiver of the first call terminal; 2. view the text information, converted from the voice information of the second call terminal, displayed on the screen of the first call terminal.
  • the first talking terminal obtains the expression content of the user at the first talking terminal by acquiring the text information input by the user. And the first talking terminal converts the text information input by the user into voice information through the voice assistant, so as to transmit the voice information to the second talking terminal.
  • the call flow of the first call terminal may include:
  • the first call terminal can convert the voice information of the second call terminal into text information through the voice assistant, so that the user of the first call terminal can obtain what the user of the second call terminal expresses by viewing the text information, which enriches the ways of obtaining that content.
  • the first call terminal can receive the text information input by the user, and convert the text information into voice information through the voice assistant, so as to transmit the voice information to The second call terminal, in this way, even if the user at the first call terminal is inconvenient to speak, the user at the second call terminal can still be called. It can be seen that the embodiment of the present invention can improve the call quality.
  • the second information is text information
  • the outputting the second information includes:
  • the method further includes:
  • the text information displayed on the screen is saved.
  • the text information displayed on the screen may include: text information converted from the voice information of the second call terminal.
  • the text information displayed on the screen may include: text information converted from the voice information of the second call terminal, and text information input by the user of the first call terminal.
  • the displaying the second information on the screen includes:
  • the call page and the text page are displayed separately on the screen, and the text page is used to display the second information.
  • in the split-screen display mode, the screen can display the call page and the text page at the same time, which enriches how the user obtains what the user of the second call terminal expresses without hindering operation of the call page, thereby further improving call quality.
  • the first talking terminal can trigger the screen to enter the split-screen display mode after turning on the voice assistant; it can also trigger the screen to enter the split-screen display mode after the text information is converted, but it is not limited to this.
  • the first talking terminal may also display the second information in a full screen.
  • the first call terminal further includes:
  • a containing cavity, the containing cavity is made of sound-proof material
  • the first earpiece is arranged outside the accommodating cavity and is used to output the voice information of the second call terminal;
  • the first microphone is arranged outside the accommodating cavity and is used to collect voice information of the call environment of the first call terminal;
  • the second earpiece is arranged in the accommodating cavity and is used to output the voice information of the second call terminal;
  • the second microphone is arranged in the accommodating cavity and is electrically connected to the voice assistant, and is used to obtain the voice information output by the second earpiece and transmit the voice information to the voice assistant.
  • both the first earpiece and the second earpiece can output the received voice information of the second call terminal.
  • the voice information output by the first earpiece allows the user to obtain the expression content of the user at the second call end by listening to the voice information;
  • the voice information output by the second earpiece can be transmitted to the voice assistant through the second microphone, so that the voice assistant converts the voice information into text information and the user can obtain what the user at the second call terminal expresses by viewing the text information.
  • Both the first microphone and the second microphone are used to collect voice information, but the first microphone collects the voice information of the external environment of the first talking terminal, that is, the first microphone collects the voice information of the talking environment of the first talking terminal; What the second microphone collects is the voice information output by the second earpiece.
  • the second earpiece and the second microphone are arranged in the accommodating cavity made of soundproof material, the quality of the voice information obtained by the voice assistant can be improved, and the call quality can be improved.
  • the second information is text information; the method further includes at least one of the following:
  • the target parameter value is used to characterize the quality of the call environment of the first call terminal.
  • the above steps can be applied while the first call terminal obtains the first information of the target call terminal through the voice assistant; that is, while obtaining voice information through the voice assistant, the first call terminal may control the working state of the first microphone according to the result of comparing the target parameter value with the threshold.
  • the target parameter value is greater than the threshold, it indicates that the call environment of the first call terminal is poor and the noise is large. Therefore, in order to reduce the influence of external noise on the second microphone, the first microphone can be turned off.
  • when the target parameter value is less than or equal to the threshold, the call environment of the first call terminal is good and the noise is low, so the first microphone can be turned on; keeping it on in this case also reduces how often the first microphone is switched on and off.
  • the target parameter value includes a volume value output by the first earpiece.
  • a compartment can be made of sound insulation material inside the call terminal, and a miniature special call speaker and an AI special microphone are placed in the compartment.
  • Step 301 When an incoming call is detected, display an auxiliary call control on the incoming-call interface.
  • if a touch operation on the auxiliary call control is detected, the call is answered with the auxiliary function; if no touch operation on the auxiliary call control is detected, the call is answered without the auxiliary function.
  • Step 302 Detect whether the auxiliary function is enabled to answer.
  • step 304 is executed, otherwise, step 303 is executed.
  • Step 303 Enter a normal call answering interface.
  • an auxiliary call control can be displayed on the call answering interface. If a touch operation on the auxiliary call control is detected, the auxiliary function is used to answer, and step 304 is executed.
  • Step 304 Start the voice assistant and control the screen to split up and down.
  • the upper half of the screen is a proportionally scaled call page, and all the buttons of the phone answering process are retained.
  • the lower half is the voice assistant wake-up interface.
  • the AI dedicated speaker and microphone inside the call terminal are turned on, and the text recognized by ASR is displayed in the lower half of the screen.
  • the user can assist this call based on the text content in the lower half of the screen.
  • in order to prevent the external microphone from adding noise that affects the internal microphone, after the auxiliary call is turned on, the external microphone should be turned off when the earpiece volume is greater than the threshold, and turned on again when the earpiece volume is less than the threshold.
  • Step 305 When the call ends, the call page is automatically closed, the AI dedicated speaker and sound pickup opening are closed, and the mobile phone voice assistant interface in the lower half of the screen is expanded to full screen; the call content can be selectively saved by the user.
  • a compartment can be made of sound insulation material inside the call terminal, and a miniature special call speaker and an AI special microphone are placed in the compartment.
  • Step 401 The system detects an incoming call.
  • Step 402 When the incoming call is answered, enter the normal call answering interface.
  • the user can wake up the AI-assisted call through the physical AI key of the mobile phone.
  • Step 403 When the AI-assisted mobile phone call is enabled, the screen is split into upper and lower halves: the upper half is a proportionally scaled call page that retains all the buttons of the phone answering process, and the lower half is a proportionally scaled voice assistant wake-up interface. The AI dedicated speaker and microphone inside the call terminal are turned on, and the text recognized by ASR is displayed in the lower half of the screen.
  • the user can assist this call based on the text content in the lower half of the screen.
  • in order to prevent the external microphone from adding noise that affects the internal microphone, after the auxiliary call is turned on, the external microphone should be turned off when the earpiece volume is greater than the threshold, and turned on again when the earpiece volume is less than the threshold.
  • Step 404 When the call ends, the call page is automatically closed, the AI dedicated speaker and sound pickup opening are closed, and the mobile phone voice assistant interface in the lower half of the screen is expanded to full screen; the call content can be selectively saved by the user.
  • FIG. 5 is one of the structural diagrams of a terminal device provided by an embodiment of the present invention.
  • the terminal device 500 is the first call terminal in the method embodiment of the present invention, and the terminal device 500 includes a voice assistant; as shown in FIG. 5, the terminal device 500 includes:
  • the obtaining module 501 is configured to obtain the first information of the target call terminal through the voice assistant when the voice assistant is turned on;
  • the conversion module 502 is configured to convert the first information into second information through the voice assistant
  • the output module 503 is configured to output the second information
  • the target call terminal is the first call terminal
  • the first information is text information
  • the second information is voice information
  • in the case where the target call terminal is the second call terminal in a call with the first call terminal, the first information is voice information
  • the second information is text information
  • the second information is text information
  • the output module 503 is specifically used for:
  • the terminal device 500 further includes:
  • the saving module is configured to save the text information displayed on the screen when the first input is received after the output module outputs the second information.
  • the output module 503 is specifically used for:
  • the call page and the text page are displayed separately on the screen, and the text page is used to display the second information.
  • the terminal device 500 further includes:
  • a containing cavity, the containing cavity is made of sound-proof material
  • the first earpiece is arranged outside the accommodating cavity and is used to output the voice information of the second call terminal;
  • the first microphone is arranged outside the accommodating cavity and is used to collect voice information of the call environment of the first call terminal;
  • the second earpiece is arranged in the accommodating cavity and is used to output the voice information of the second call terminal;
  • the second microphone is arranged in the accommodating cavity and is electrically connected to the voice assistant, and is used to obtain the voice information output by the second earpiece and transmit the voice information to the voice assistant.
  • the second information is text information
  • the terminal device further includes a control module configured to perform at least one of the following:
  • the target parameter value is used to characterize the quality of the call environment of the first call terminal.
  • the target parameter value includes a volume value output by the first earpiece.
  • the terminal device 500 can implement each process that can be implemented by the first talking terminal in the method embodiment of the present invention, and achieve the same beneficial effects. To avoid repetition, details are not described herein again.
  • FIG. 6 is a second structural diagram of a terminal device provided by an embodiment of the present invention, and may be a schematic diagram of a hardware structure of a first call terminal that implements various embodiments of the present invention.
  • the terminal device 600 is the first call terminal in the method embodiment of the present invention, and the terminal device 600 includes a voice assistant.
  • the terminal device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, a power supply 611, and other components.
  • those skilled in the art can understand that the terminal device structure shown in Fig. 6 does not constitute a limitation on the first call terminal, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
  • terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, and pedometers.
  • the processor 610 is used for:
  • the target call terminal is the first call terminal
  • the first information is text information
  • the second information is voice information
  • in the case where the target call terminal is the second call terminal in a call with the first call terminal, the first information is voice information
  • the second information is text information
  • the second information is text information; the processor 610 is further configured to:
  • the text information displayed on the screen is saved.
  • processor 610 is also used for:
  • the call page and the text page are split-screen displayed on the screen by the display unit 606, and the text page is used to display the second information.
  • the terminal device 600 further includes:
  • a containing cavity, the containing cavity is made of sound-proof material
  • the first earpiece is arranged outside the accommodating cavity and is used to output the voice information of the second call terminal;
  • the first microphone is arranged outside the accommodating cavity and is used to collect voice information of the call environment of the first call terminal;
  • the second earpiece is arranged in the accommodating cavity and is used to output the voice information of the second call terminal;
  • the second microphone is arranged in the accommodating cavity and is electrically connected to the voice assistant, and is used to obtain the voice information output by the second earpiece and transmit the voice information to the voice assistant.
  • the second information is text information; the processor 610 is further configured to:
  • the target parameter value is used to characterize the quality of the call environment of the first call terminal.
  • the target parameter value includes a volume value output by the first earpiece.
  • terminal device 600 in this embodiment can implement each process in the method embodiment in the embodiment of the present invention and achieve the same beneficial effects. To avoid repetition, details are not described herein again.
  • the radio frequency unit 601 can be used to receive and send signals during information transmission or a call. Specifically, downlink data from the base station is received and sent to the processor 610 for processing; in addition, uplink data is sent to the base station.
  • the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 601 can also communicate with the network and other devices through a wireless communication system.
  • the terminal device provides users with wireless broadband Internet access through the network module 602, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 603 can convert the audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into audio signals and output them as sounds. Moreover, the audio output unit 603 may also provide audio output related to a specific function performed by the terminal device 600 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 604 is used to receive audio or video signals.
  • the input unit 604 may include a graphics processing unit (Graphics Processing Unit, GPU) 6041 and a microphone 6042; the graphics processor 6041 processes the image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the processed image frame can be displayed on the display unit 606.
  • the image frame processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or sent via the radio frequency unit 601 or the network module 602.
  • the microphone 6042 can receive sound, and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be sent to the mobile communication base station via the radio frequency unit 601 for output in the case of a telephone call mode.
  • the terminal device 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the terminal device 600 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the terminal device (such as portrait/landscape switching, related games, magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tapping); the sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
  • the display unit 606 is used to display information input by the user or information provided to the user.
  • the display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the user input unit 607 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the terminal device.
  • the user input unit 607 includes a touch panel 6071 and other input devices 6072.
  • the touch panel 6071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 6071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 6071 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends them to the processor 610, and receives and executes the commands sent by the processor 610.
  • the touch panel 6071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 607 may also include other input devices 6072.
  • other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 6071 can cover the display panel 6061.
  • when the touch panel 6071 detects a touch operation on or near it, the operation is transmitted to the processor 610 to determine the type of the touch event, and the processor 610 then provides corresponding visual output on the display panel 6061 according to the type of the touch event.
  • although the touch panel 6071 and the display panel 6061 are described as two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 6071 and the display panel 6061 can be integrated to implement the input and output functions of the terminal device; this is not specifically limited here.
  • the interface unit 608 is an interface for connecting an external device and the terminal device 600.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (Input/Output, I/O) port, video I/O port, headphone port, etc.
  • the interface unit 608 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the terminal device 600, or can be used to transfer data between the terminal device 600 and an external device.
  • the memory 609 can be used to store software programs and various data.
  • the memory 609 may mainly include a storage program area and a storage data area.
  • the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.); the storage data area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.).
  • the memory 609 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the processor 610 is the control center of the terminal device; it connects the various parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 609 and calling the data stored in the memory 609, thereby monitoring the terminal device as a whole.
  • the processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 610.
  • the terminal device 600 may also include a power source 611 (such as a battery) for supplying power to various components.
  • the power source 611 may be logically connected to the processor 610 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the terminal device 600 includes some functional modules not shown, which will not be repeated here.
  • the embodiment of the present invention also provides a terminal device; the terminal device is a first call terminal and includes a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610;
  • when the computer program is executed by the processor 610, each process of the above call method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the embodiment of the present invention also provides a computer-readable storage medium, and a computer program is stored on the computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the computer program is executed by a processor, each process of the above call method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk, or optical disk, etc.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the technical solution of the present invention, in essence or in the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the method described in each embodiment of the present invention.
  • the program can be stored in a computer-readable storage medium; when executed, it may include the procedures of the above method embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • modules, units, and sub-units can be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in the present disclosure, or combinations thereof.
  • the technology described in the embodiments of the present disclosure can be implemented by modules (for example, procedures, functions, etc.) that perform the functions described in the embodiments of the present disclosure.
  • the software codes can be stored in the memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephone Function (AREA)

Abstract

Embodiments of the present invention provide a call method and a terminal device. The terminal device is a first call terminal, and the first call terminal includes a voice assistant. The method includes: when the voice assistant is turned on, obtaining first information of a target call terminal through the voice assistant; converting the first information into second information through the voice assistant; and outputting the second information. When the target call terminal is the first call terminal, the first information is text information and the second information is voice information; and/or, when the target call terminal is a second call terminal in a call with the first call terminal, the first information is voice information and the second information is text information.

Description

Call method and terminal device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201911155039.X, filed in China on November 22, 2019, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to a call method and a terminal device.
Background
When a user is in a call, high noise in the call environment leads to poor call quality. To improve call quality, the approaches usually adopted at present are to raise the call volume or to move to a place with less noise. However, when the user still cannot hear the other end clearly even with the call volume at its maximum, or cannot move to a quieter place, these approaches still cannot guarantee the call quality.
Summary
Embodiments of the present invention provide a call method and a terminal device, to solve the existing problem of poor call quality caused by high noise in the call environment.
To solve the above problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a call method applied to a first call terminal, where the first call terminal includes a voice assistant; the method includes:
when the voice assistant is turned on, obtaining first information of a target call terminal through the voice assistant;
converting the first information into second information through the voice assistant; and
outputting the second information;
where, when the target call terminal is the first call terminal, the first information is text information and the second information is voice information; and when the target call terminal is a second call terminal in a call with the first call terminal, the first information is voice information and the second information is text information.
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device is a first call terminal and includes a voice assistant; the terminal device includes:
an obtaining module, configured to obtain first information of a target call terminal through the voice assistant when the voice assistant is turned on;
a conversion module, configured to convert the first information into second information through the voice assistant; and
an output module, configured to output the second information;
where, when the target call terminal is the first call terminal, the first information is text information and the second information is voice information; and when the target call terminal is a second call terminal in a call with the first call terminal, the first information is voice information and the second information is text information.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device is a first call terminal and includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the call method described above are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the call method described above are implemented.
In the embodiments of the present invention, the first call terminal can convert the voice information of the second call terminal into text information through the voice assistant, so that the user of the first call terminal can obtain what the user of the second call terminal expresses by viewing the text information, which enriches the ways in which the user of the first call terminal can obtain that content. The first call terminal can also receive text information input by the user and convert it into voice information through the voice assistant, so as to transmit the voice information to the second call terminal; in this way, even when it is inconvenient for the user of the first call terminal to speak, the user can still talk to the user of the second call terminal. It can be seen that the embodiments of the present invention can improve call quality.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a first flowchart of a call method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a call page according to an embodiment of the present invention;
Fig. 3 is a second flowchart of a call method provided by an embodiment of the present invention;
Fig. 4 is a third flowchart of a call method provided by an embodiment of the present invention;
Fig. 5 is a first structural diagram of a terminal device provided by an embodiment of the present invention;
Fig. 6 is a second structural diagram of a terminal device provided by an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first", "second", and the like in the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to such a process, method, product, or device. In addition, "and/or" in the present invention denotes at least one of the connected objects; for example, "A and/or B and/or C" covers the seven cases of A alone, B alone, C alone, both A and B, both B and C, both A and C, and A, B, and C together.
The call method in the embodiments of the present invention can be applied to a first call terminal. The first call terminal can establish a call with another, second call terminal. Specifically, the form of the call may include a telephone call, a voice call, a video call, and so on. In practical applications, a call terminal may be a mobile phone, a tablet computer (Tablet Personal Computer), a wearable device (Wearable Device), and so on.
The first call terminal in the embodiments of the present invention may include, but is not limited to, a voice assistant. The voice assistant has a speech recognition (Automatic Speech Recognition, ASR) function and/or a speech synthesis (Text-To-Speech, TTS) function. Specifically, when the speech recognition function is turned on, the voice assistant can convert voice information into text information; when the speech synthesis function is turned on, the voice assistant can convert text information into voice information.
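As an illustrative, non-limiting sketch of these two assistant functions (in Kotlin), the assistant might be modeled as a thin wrapper over hypothetical `AsrEngine` and `TtsEngine` interfaces; both interfaces are assumptions standing in for whatever recognition and synthesis engines the terminal actually ships with, and are not part of the patent text.

```kotlin
// Minimal sketch of the two assistant functions named in the description.
// `AsrEngine` and `TtsEngine` are hypothetical stand-ins for the terminal's
// speech-recognition and speech-synthesis engines.
interface AsrEngine { fun recognize(pcm: ShortArray): String }
interface TtsEngine { fun synthesize(text: String): ShortArray }

class VoiceAssistant(
    private val asr: AsrEngine,
    private val tts: TtsEngine,
    var asrEnabled: Boolean = false,   // "speech recognition function turned on"
    var ttsEnabled: Boolean = false    // "speech synthesis function turned on"
) {
    // Voice information -> text information (used when the target is the second call terminal).
    fun voiceToText(pcm: ShortArray): String? =
        if (asrEnabled) asr.recognize(pcm) else null

    // Text information -> voice information (used when the target is the first call terminal).
    fun textToVoice(text: String): ShortArray? =
        if (ttsEnabled) tts.synthesize(text) else null
}
```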
Referring to Fig. 1, Fig. 1 is a first flowchart of a call method provided by an embodiment of the present invention. As shown in Fig. 1, the call method may include the following steps:
Step 101: when the voice assistant is turned on, obtain first information of a target call terminal through the voice assistant.
In a specific implementation, the first call terminal may turn on the voice assistant when either of the following conditions is met:
Condition 1: an input on an auxiliary call control is received;
Condition 2: an input on a physical artificial intelligence (Artificial Intelligence, AI) key is received.
In practical applications, for Condition 1, the auxiliary call control may be displayed on the incoming-call page and/or the call page. As shown in Fig. 2, the auxiliary call control 21 is displayed on the call page 22.
In the embodiments of the present invention, the target call terminal may be the first call terminal, or the second call terminal in a call with the first call terminal. The form of the first information depends on which of the two the target call terminal is.
Specifically, when the target call terminal is the first call terminal, the first information is text information. In this case, the user inputs the text information on the screen of the first call terminal.
When the target call terminal is the second call terminal, the first information is voice information. In this case, the voice assistant can obtain the voice information of the second call terminal in either of the following two ways.
Way 1: the voice assistant obtains the voice information of the second call terminal through the earpiece; in this implementation, what the voice assistant obtains directly from the earpiece is the electrical signal of the voice information.
Way 2: the voice assistant obtains the voice information of the second call terminal through a microphone; in this implementation, what the voice assistant obtains directly from the microphone is the sound signal.
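A small sketch of the two capture paths just described might look as follows; the `CaptureSource` enum and the `readEarpiecePcm`/`readMicrophonePcm` helpers are illustrative assumptions standing in for the terminal's audio path, not a real API.

```kotlin
// Sketch of the two capture paths described above; the enum and the helper
// functions are illustrative assumptions, not a real API.
enum class CaptureSource { EARPIECE_SIGNAL, INTERNAL_MICROPHONE }

fun nextAsrFrame(source: CaptureSource): ShortArray = when (source) {
    // Way 1: tap the electrical signal that drives the earpiece.
    CaptureSource.EARPIECE_SIGNAL -> readEarpiecePcm()
    // Way 2: record the sound played by the earpiece with a microphone.
    CaptureSource.INTERNAL_MICROPHONE -> readMicrophonePcm()
}

// Hypothetical helpers standing in for the terminal's audio layer.
fun readEarpiecePcm(): ShortArray = ShortArray(160)
fun readMicrophonePcm(): ShortArray = ShortArray(160)
```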
Step 102: convert the first information into second information through the voice assistant.
In a specific implementation, when the first information is text information, the second information is voice information. In this case, the voice assistant completes the conversion from text information to voice information through the speech synthesis function.
When the first information is voice information, the second information is text information. In this case, the voice assistant completes the conversion from voice information to text information through the speech recognition function.
Step 103: output the second information; when the target call terminal is the first call terminal, the first information is text information and the second information is voice information; and/or, when the target call terminal is a second call terminal in a call with the first call terminal, the first information is voice information and the second information is text information.
It can be seen from the above that the form of the second information differs in different cases. Understandably, different second information is output in different ways, as described below.
Case 1: the second information is voice information.
Outputting the second information includes: outputting the second information through a target microphone, where the target microphone may be the first microphone in the embodiments of the present invention, or may be another microphone of the first call terminal.
In Case 1, the first call terminal can receive text information input by the user and convert it into voice information through the voice assistant, so as to transmit the voice information to the second call terminal. In this way, even when it is inconvenient for the user of the first call terminal to speak, the user can still talk to the user of the second call terminal, so call quality can be improved.
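A conceptual sketch of Case 1 (typed text sent onward as speech) is given below; it reuses the `VoiceAssistant` sketch above, and `CallUplink` is a hypothetical interface for whatever path feeds audio into the call, since the patent does not name a concrete API for this step.

```kotlin
// Conceptual sketch of Case 1. `CallUplink` is a hypothetical interface for
// the path that carries audio to the second call terminal.
interface CallUplink { fun sendPcm(pcm: ShortArray) }

fun sendTypedText(text: String, assistant: VoiceAssistant, uplink: CallUplink) {
    val pcm = assistant.textToVoice(text) ?: return   // TTS must be enabled
    uplink.sendPcm(pcm)                                // heard by the second call terminal
}
```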
Case 2: the second information is text information.
Outputting the second information includes: displaying the second information on the screen of the first call terminal.
In a specific implementation, the screen area used to display the second information may be the entire screen or only part of it, which can be determined according to actual needs. It should be understood that the present invention does not limit the size or position of the screen area used to display the second information.
In Case 2, the first call terminal can convert the voice information of the second call terminal into text information through the voice assistant and display the text information on the screen, so that the user of the first call terminal can obtain what the user of the second call terminal expresses by viewing the text information. This enriches the ways in which the user of the first call terminal can obtain that content and can therefore improve call quality.
It should be noted that, in practical applications, the embodiments of the present invention can be applied to the following three scenarios.
Scenario 1: the first call terminal enables only the speech recognition function of the voice assistant.
In Scenario 1, the user of the first call terminal has two ways to obtain what the user of the second call terminal expresses: first, listening to the voice information of the second call terminal output through the earpiece of the first call terminal; second, viewing the text information, converted from the voice information of the second call terminal, displayed on the screen of the first call terminal.
The first call terminal obtains what its own user expresses by collecting voice information.
Therefore, for Scenario 1, in a specific implementation, the call flow of the first call terminal may include:
obtaining first voice information of the second call terminal through the voice assistant; converting the first voice information into first text information through the voice assistant; and outputting the first text information;
collecting second voice information of the first call terminal through the first microphone, and sending the second voice information.
Scenario 2: the first call terminal enables only the speech synthesis function of the voice assistant.
In Scenario 2, the user of the first call terminal obtains what the user of the second call terminal expresses by listening to the voice information of the second call terminal output through the earpiece of the first call terminal.
The first call terminal obtains what its own user expresses by acquiring text information input by the user, and converts that text information into voice information through the voice assistant so as to transmit the voice information to the second call terminal.
It can be seen that, in Scenario 1 and Scenario 2, when the user of the second call terminal obtains what the user of the first call terminal expresses by listening to voice information, the speaker of that voice information is essentially different: in Scenario 1 the speaker is the user of the first call terminal, whereas in Scenario 2 the speaker is the first call terminal itself.
Therefore, for Scenario 2, in a specific implementation, the call flow of the first call terminal may include:
obtaining second text information of the first call terminal through the voice assistant; converting the second text information into third voice information through the voice assistant; and outputting the third voice information;
outputting fourth voice information received from the second call terminal.
Scenario 3: the first call terminal enables both the speech recognition function and the speech synthesis function of the voice assistant.
In Scenario 3, the user of the first call terminal has two ways to obtain what the user of the second call terminal expresses: first, listening to the voice information of the second call terminal output through the earpiece of the first call terminal; second, viewing the text information, converted from the voice information of the second call terminal, displayed on the screen of the first call terminal.
The first call terminal obtains what its own user expresses by acquiring text information input by the user, and converts that text information into voice information through the voice assistant so as to transmit the voice information to the second call terminal.
Therefore, for Scenario 3, in a specific implementation, the call flow of the first call terminal may include:
obtaining first voice information of the second call terminal through the voice assistant; converting the first voice information into first text information through the voice assistant; and outputting the first text information;
obtaining second text information of the first call terminal through the voice assistant; converting the second text information into third voice information through the voice assistant; and outputting the third voice information.
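A rough sketch of how Scenario 3 might be wired together is given below. It reuses the hypothetical types from the earlier sketches (`VoiceAssistant`, `CaptureSource`, `CallUplink`, `nextAsrFrame`, `sendTypedText`) and adds a hypothetical `Transcript` sink for the on-screen text page; none of these names come from the patent itself.

```kotlin
// Rough sketch of Scenario 3, built on the hypothetical types defined above.
interface Transcript { fun append(line: String) }

fun onDownlinkFrame(assistant: VoiceAssistant, source: CaptureSource, transcript: Transcript) {
    // Remote speech -> text shown on the text page (ASR direction).
    val text = assistant.voiceToText(nextAsrFrame(source)) ?: return
    if (text.isNotBlank()) transcript.append("Caller: $text")
}

fun onLocalTextSubmitted(text: String, assistant: VoiceAssistant,
                         uplink: CallUplink, transcript: Transcript) {
    // Local typed text -> synthesized speech on the uplink (TTS direction).
    sendTypedText(text, assistant, uplink)
    transcript.append("Me: $text")
}
```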
With the call method of this embodiment, the first call terminal can convert the voice information of the second call terminal into text information through the voice assistant, so that the user of the first call terminal can obtain what the user of the second call terminal expresses by viewing the text information, which enriches the ways in which the user of the first call terminal can obtain that content. The first call terminal can also receive text information input by the user and convert it into voice information through the voice assistant so as to transmit the voice information to the second call terminal; in this way, even when it is inconvenient for the user of the first call terminal to speak, the user can still talk to the user of the second call terminal. It can be seen that the embodiments of the present invention can improve call quality.
In an embodiment of the present invention, optionally, the second information is text information;
outputting the second information includes:
displaying the second information on a screen;
and after the second information is output, the method further includes:
when a first input is received, saving the text information displayed on the screen.
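A minimal sketch of this saving step follows; the file name, location, and the idea that the "first input" is a tap on a save control are assumptions for illustration only.

```kotlin
import java.io.File

// Minimal sketch of saving the on-screen text when the "first input" (for
// example, a tap on a save control) is received. File name and location are
// assumptions, not specified by the patent.
fun onSaveInput(displayedText: List<String>, storageDir: File) {
    val out = File(storageDir, "call-transcript-${System.currentTimeMillis()}.txt")
    out.writeText(displayedText.joinToString(separator = "\n"))
}
```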
In a specific implementation, for Scenario 1 above, the text information displayed on the screen may include the text information converted from the voice information of the second call terminal.
For Scenario 3 above, the text information displayed on the screen may include the text information converted from the voice information of the second call terminal, as well as the text information input by the user of the first call terminal.
Further, displaying the second information on the screen includes:
displaying a call page and a text page on the screen in split-screen mode, where the text page is used to display the second information.
In this way, in the split-screen display mode, the screen can display the call page and the text page at the same time, which enriches how the user obtains what the user of the second call terminal expresses without hindering operation of the call page, thereby further improving call quality.
In addition, in implementation, the first call terminal may trigger the screen to enter the split-screen display mode after the voice assistant is turned on, or may trigger the screen to enter the split-screen display mode only after the text information has been converted, but this is not limited thereto (both trigger points are illustrated in the sketch below).
In other implementations of the embodiments of the present invention, the first call terminal may also display the second information in full screen.
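The following sketch illustrates the two split-screen trigger points mentioned above; `showSplitScreen` is a hypothetical UI call (for example, swapping in a two-pane layout with the call page on top and the text page below) and is not an API named by the patent.

```kotlin
// Sketch of the two trigger points for entering split-screen display.
enum class SplitTrigger { ON_ASSISTANT_ENABLED, ON_FIRST_TRANSCRIPT_LINE }

class CallScreenController(private val trigger: SplitTrigger) {
    private var split = false

    fun onAssistantEnabled() {
        if (trigger == SplitTrigger.ON_ASSISTANT_ENABLED) enterSplit()
    }

    fun onTranscriptLine(line: String) {
        if (trigger == SplitTrigger.ON_FIRST_TRANSCRIPT_LINE && !split) enterSplit()
        // ...append `line` to the text page...
    }

    private fun enterSplit() { split = true; showSplitScreen() }
    private fun showSplitScreen() { /* hypothetical: call page on top, text page below */ }
}
```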
In an embodiment of the present invention, optionally, the first call terminal further includes:
an accommodating cavity made of sound-insulating material;
a first earpiece, arranged outside the accommodating cavity and configured to output the voice information of the second call terminal;
a first microphone, arranged outside the accommodating cavity and configured to collect voice information of the call environment of the first call terminal;
a second earpiece, arranged inside the accommodating cavity and configured to output the voice information of the second call terminal; and
a second microphone, arranged inside the accommodating cavity and electrically connected to the voice assistant, configured to obtain the voice information output by the second earpiece and transmit that voice information to the voice assistant.
In this implementation, both the first earpiece and the second earpiece can output the received voice information of the second call terminal. Specifically, the voice information output by the first earpiece lets the user obtain what the user of the second call terminal expresses by listening to it; the voice information output by the second earpiece can be transmitted to the voice assistant through the second microphone, so that the voice assistant converts it into text information and the user can then obtain what the user of the second call terminal expresses by viewing that text information.
Both the first microphone and the second microphone are used to collect voice information, but the first microphone collects the voice information of the environment outside the first call terminal, that is, the voice information of the call environment of the first call terminal, whereas the second microphone collects the voice information output by the second earpiece.
Since the second earpiece and the second microphone are arranged in an accommodating cavity made of sound-insulating material, the quality of the voice information obtained by the voice assistant can be improved, and call quality can be improved accordingly.
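The two audio paths implied by this cavity design can be summarized in a small descriptive sketch; the device identifiers below are placeholders chosen for illustration, not real hardware names from the patent.

```kotlin
// Descriptive sketch of the two audio paths implied by the cavity design.
// Device identifiers are placeholders, not real hardware IDs.
data class AudioPath(val output: String, val capture: String, val purpose: String)

val listeningPath = AudioPath(
    output = "earpiece_1_external",
    capture = "microphone_1_external",   // picks up the user's own speech / call environment
    purpose = "normal two-way call audio"
)

val recognitionPath = AudioPath(
    output = "earpiece_2_in_cavity",     // replays the remote voice inside the sealed cavity
    capture = "microphone_2_in_cavity",  // feeds the voice assistant's ASR, shielded from room noise
    purpose = "clean signal for speech recognition"
)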
In an embodiment of the present invention, optionally, the second information is text information, and the method further includes at least one of the following:
when a target parameter value is greater than a threshold, controlling the first microphone to be in an off state;
when the target parameter value is less than or equal to the threshold, controlling the first microphone to be in an on state;
where the target parameter value is used to characterize how good or poor the call environment of the first call terminal is.
It should be understood that the above steps can be applied while the first call terminal obtains the first information of the target call terminal through the voice assistant; that is, while obtaining voice information through the voice assistant, the first call terminal may control the working state of the first microphone according to the result of comparing the target parameter value with the threshold.
When the target parameter value is greater than the threshold, the call environment of the first call terminal is poor and noisy. Therefore, to reduce the influence of external noise on the second microphone, the first microphone can be turned off.
When the target parameter value is less than or equal to the threshold, the call environment of the first call terminal is good and quiet, so the first microphone can be turned on. Since the first microphone is still needed to collect voice information of the environment around the first call terminal, keeping it on in this case reduces how frequently the first microphone is switched on and off.
In a specific implementation, considering that the speaking volume of the user of the second call terminal is affected by the call environment of the first call terminal (specifically, the worse the call environment of the first call terminal, the louder the user of the second call terminal tends to speak), optionally, the target parameter value includes the volume value output by the first earpiece.
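A sketch of gating the first (external) microphone on the earpiece volume, as described above, might look like the following. `setFirstMicrophoneEnabled` is a hypothetical platform hook; on Android the muting side could plausibly map to something like `AudioManager.setMicrophoneMute`, but that mapping is an assumption, not something the patent specifies.

```kotlin
// Sketch of gating the first (external) microphone on the earpiece volume.
class MicrophoneGate(private val threshold: Int) {
    private var micOn = true

    fun onVolumeSample(earpieceVolume: Int) {
        val shouldBeOn = earpieceVolume <= threshold   // quiet environment -> keep the mic on
        if (shouldBeOn != micOn) {
            micOn = shouldBeOn
            setFirstMicrophoneEnabled(shouldBeOn)      // noisy environment -> mute the external mic
        }
    }
}

fun setFirstMicrophoneEnabled(enabled: Boolean) { /* hypothetical platform hook */ }
```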
It should be noted that the various optional implementations described in the embodiments of the present invention can be combined with each other when they do not conflict, or can be implemented independently; the embodiments of the present invention do not limit this.
For ease of understanding, examples are described as follows:
Embodiment 1
In this embodiment, a compartment made of sound-insulating material can be provided inside the call terminal, and a miniature dedicated call speaker and a dedicated AI microphone are placed in the compartment.
This embodiment may include the following steps:
Step 301: when an incoming call is detected, display an auxiliary call control on the incoming-call interface.
If a touch operation on the auxiliary call control is detected, the call is answered with the auxiliary function; if no touch operation on the auxiliary call control is detected, the call is answered without the auxiliary function.
Step 302: detect whether the auxiliary function is enabled for answering.
If the auxiliary function is used for answering, step 304 is executed; otherwise, step 303 is executed.
Step 303: enter the normal call answering interface.
Further, the auxiliary call control may also be displayed on the call answering interface. If a touch operation on the auxiliary call control is detected, the auxiliary function is used for answering and step 304 is executed.
Step 304: start the voice assistant and split the screen into upper and lower halves. The upper half of the screen is a proportionally scaled call page that keeps all the buttons of the phone answering process; the lower half is a proportionally scaled voice assistant wake-up interface. The AI dedicated speaker and microphone inside the call terminal are turned on, and the text recognized by ASR is displayed in the lower half of the screen.
The user can assist this call based on the text content in the lower half of the screen.
During implementation, to prevent the external microphone from adding noise that affects the internal microphone, after the auxiliary call is turned on, the external microphone should be turned off when the earpiece volume is greater than the threshold, and turned back on when the earpiece volume is less than the threshold.
Step 305: when the call ends, the call page is closed automatically, the AI dedicated speaker and sound pickup opening are closed, and the voice assistant interface in the lower half of the screen is expanded to full screen; the user can selectively save the call content.
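A small sketch of the end-of-call handling in step 305 follows; every call in it is a hypothetical hook standing in for the behavior named in the text, and it reuses the `onSaveInput` sketch shown earlier.

```kotlin
// Sketch of the end-of-call handling in step 305 (hypothetical hooks only).
fun onCallEnded(transcript: List<String>, userWantsToSave: Boolean, storageDir: java.io.File) {
    closeCallPage()                   // the call page closes automatically
    powerDownInternalSpeakerAndMic()  // AI dedicated speaker / sound pickup opening off
    expandAssistantPageToFullScreen() // the lower-half page takes over the screen
    if (userWantsToSave) onSaveInput(transcript, storageDir)  // optional save, see earlier sketch
}

fun closeCallPage() {}
fun powerDownInternalSpeakerAndMic() {}
fun expandAssistantPageToFullScreen() {}
```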
Embodiment 2
In this embodiment, a compartment made of sound-insulating material can likewise be provided inside the call terminal, and a miniature dedicated call speaker and a dedicated AI microphone are placed in the compartment.
Step 401: the system detects an incoming call.
Step 402: when the incoming call is answered, enter the normal call answering interface.
During answering, the user can wake up the AI-assisted call through the physical AI key of the mobile phone.
Step 403: when the AI-assisted mobile phone call is enabled, split the screen into upper and lower halves. The upper half of the screen is a proportionally scaled call page that keeps all the buttons of the phone answering process; the lower half is a proportionally scaled voice assistant wake-up interface. The AI dedicated speaker and microphone inside the call terminal are turned on, and the text recognized by ASR is displayed in the lower half of the screen.
The user can assist this call based on the text content in the lower half of the screen.
During implementation, to prevent the external microphone from adding noise that affects the internal microphone, after the auxiliary call is turned on, the external microphone should be turned off when the earpiece volume is greater than the threshold, and turned back on when the earpiece volume is less than the threshold.
Step 404: when the call ends, the call page is closed automatically, the AI dedicated speaker and sound pickup opening are closed, and the voice assistant interface in the lower half of the screen is expanded to full screen; the user can selectively save the call content.
Referring to Fig. 5, Fig. 5 is a first structural diagram of a terminal device provided by an embodiment of the present invention. The terminal device 500 is the first call terminal in the method embodiments of the present invention and includes a voice assistant. As shown in Fig. 5, the terminal device 500 includes:
an obtaining module 501, configured to obtain first information of a target call terminal through the voice assistant when the voice assistant is turned on;
a conversion module 502, configured to convert the first information into second information through the voice assistant; and
an output module 503, configured to output the second information;
where, when the target call terminal is the first call terminal, the first information is text information and the second information is voice information; and/or, when the target call terminal is a second call terminal in a call with the first call terminal, the first information is voice information and the second information is text information.
Optionally, the second information is text information;
the output module 503 is specifically configured to:
display the second information on a screen;
and the terminal device 500 further includes:
a saving module, configured to save the text information displayed on the screen when a first input is received after the output module outputs the second information.
Optionally, the output module 503 is specifically configured to:
display a call page and a text page on the screen in split-screen mode, where the text page is used to display the second information.
Optionally, the terminal device 500 further includes:
an accommodating cavity made of sound-insulating material;
a first earpiece, arranged outside the accommodating cavity and configured to output the voice information of the second call terminal;
a first microphone, arranged outside the accommodating cavity and configured to collect voice information of the call environment of the first call terminal;
a second earpiece, arranged inside the accommodating cavity and configured to output the voice information of the second call terminal; and
a second microphone, arranged inside the accommodating cavity and electrically connected to the voice assistant, configured to obtain the voice information output by the second earpiece and transmit that voice information to the voice assistant.
Optionally, the second information is text information, and the terminal device further includes a control module configured to perform at least one of the following:
when a target parameter value is greater than a threshold, controlling the first microphone to be in an off state;
when the target parameter value is less than or equal to the threshold, controlling the first microphone to be in an on state;
where the target parameter value is used to characterize how good or poor the call environment of the first call terminal is.
Optionally, the target parameter value includes the volume value output by the first earpiece.
The terminal device 500 can implement each process that the first call terminal can implement in the method embodiments of the present invention and achieve the same beneficial effects; to avoid repetition, details are not described here again.
Referring to Fig. 6, Fig. 6 is a second structural diagram of a terminal device provided by an embodiment of the present invention, and may be a schematic diagram of the hardware structure of a first call terminal implementing the various embodiments of the present invention. The terminal device 600 is the first call terminal in the method embodiments of the present invention and includes a voice assistant. As shown in Fig. 6, the terminal device 600 includes, but is not limited to, a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, a power supply 611, and other components. Those skilled in the art can understand that the terminal device structure shown in Fig. 6 does not constitute a limitation on the first call terminal, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiments of the present invention, terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.
The processor 610 is configured to:
obtain first information of a target call terminal through the voice assistant when the voice assistant is turned on;
convert the first information into second information through the voice assistant; and
output the second information;
where, when the target call terminal is the first call terminal, the first information is text information and the second information is voice information; and/or, when the target call terminal is a second call terminal in a call with the first call terminal, the first information is voice information and the second information is text information.
Optionally, the second information is text information, and the processor 610 is further configured to:
display the second information on a screen through the display unit 606; and
when a first input is received through the user input unit 607, save the text information displayed on the screen.
Optionally, the processor 610 is further configured to:
display a call page and a text page on the screen in split-screen mode through the display unit 606, where the text page is used to display the second information.
Optionally, the terminal device 600 further includes:
an accommodating cavity made of sound-insulating material;
a first earpiece, arranged outside the accommodating cavity and configured to output the voice information of the second call terminal;
a first microphone, arranged outside the accommodating cavity and configured to collect voice information of the call environment of the first call terminal;
a second earpiece, arranged inside the accommodating cavity and configured to output the voice information of the second call terminal; and
a second microphone, arranged inside the accommodating cavity and electrically connected to the voice assistant, configured to obtain the voice information output by the second earpiece and transmit that voice information to the voice assistant.
Optionally, the second information is text information, and the processor 610 is further configured to:
when a target parameter value is greater than a threshold, control the first microphone to be in an off state;
when the target parameter value is less than or equal to the threshold, control the first microphone to be in an on state;
where the target parameter value is used to characterize how good or poor the call environment of the first call terminal is.
Optionally, the target parameter value includes the volume value output by the first earpiece.
It should be noted that the terminal device 600 in this embodiment can implement each process in the method embodiments of the present invention and achieve the same beneficial effects; to avoid repetition, details are not described here again.
应理解的是,本发明实施例中,射频单元601可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器610处理;另外,将上行的数据发送给基站。通常,射频单元601包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元601还可以通过无线通信系统与网络和其他设备通信。
终端设备通过网络模块602为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元603可以将射频单元601或网络模块602接收的或者在存储器609中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元603还可以提供与终端设备600执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元603包括扬声器、蜂鸣器以及受话器等。
输入单元604用于接收音频或视频信号。输入单元604可以包括图形处理器(Graphics Processing Unit,GPU)6041和麦克风6042,图形处理器6041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示 单元606上。经图形处理器6041处理后的图像帧可以存储在存储器609(或其它存储介质)中或者经由射频单元601或网络模块602进行发送。麦克风6042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元601发送到移动通信基站的格式输出。
终端设备600还包括至少一种传感器605,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板6061的亮度,接近传感器可在终端设备600移动到耳边时,关闭显示面板6061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别终端设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器605还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元606用于显示由用户输入的信息或提供给用户的信息。显示单元606可包括显示面板6061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板6061。
用户输入单元607可用于接收输入的数字或字符信息,以及产生与终端设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元607包括触控面板6071以及其他输入设备6072。触控面板6071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板6071上或在触控面板6071附近的操作)。触控面板6071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器610,接收处理器610发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板6071。 除了触控面板6071,用户输入单元607还可以包括其他输入设备6072。具体地,其他输入设备6072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板6071可覆盖在显示面板6061上,当触控面板6071检测到在其上或附近的触摸操作后,传送给处理器610以确定触摸事件的类型,随后处理器610根据触摸事件的类型在显示面板6061上提供相应的视觉输出。虽然在图6中,触控面板6071与显示面板6061是作为两个独立的部件来实现终端设备的输入和输出功能,但是在某些实施例中,可以将触控面板6071与显示面板6061集成而实现终端设备的输入和输出功能,具体此处不做限定。
接口单元608为外部装置与终端设备600连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(Input/Output,I/O)端口、视频I/O端口、耳机端口等等。接口单元608可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到终端设备600内的一个或多个元件或者可以用于在终端设备600和外部装置之间传输数据。
存储器609可用于存储软件程序以及各种数据。存储器609可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序（比如声音播放功能、图像播放功能等）等；存储数据区可存储根据手机的使用所创建的数据（比如音频数据、电话本等）等。此外，存储器609可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
处理器610是终端设备的控制中心,利用各种接口和线路连接整个终端设备的各个部分,通过运行或执行存储在存储器609内的软件程序和/或模块,以及调用存储在存储器609内的数据,执行终端设备的各种功能和处理数据,从而对终端设备进行整体监控。处理器610可包括一个或多个处理单元;优选的,处理器610可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要 处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器610中。
终端设备600还可以包括给各个部件供电的电源611(比如电池),优选的,电源611可以通过电源管理系统与处理器610逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,终端设备600包括一些未示出的功能模块,在此不再赘述。
优选的,本发明实施例还提供一种终端设备,所述终端设备为第一通话端,所述终端设备包括处理器610,存储器609,存储在存储器609上并可在所述处理器610上运行的计算机程序,该计算机程序被处理器610执行时实现上述通话方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本发明实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述通话方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本公开的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的 对应过程,在此不再赘述。
在本申请所提供的实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本发明各个实施例所述的方法。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来控制相关的硬件来完成,所述的程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储器(Read-Only Memory,ROM)或随机存取存储器(Random Access Memory,RAM)等。
可以理解的是,本公开实施例描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,模块、单元、子单元 可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本公开所述功能的其它电子单元或其组合中。
对于软件实现,可通过执行本公开实施例所述功能的模块(例如过程、函数等)来实现本公开实施例所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
上面结合附图对本发明的实施例进行了描述,但是本发明并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本发明的启示下,在不脱离本发明宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本发明的保护之内。

Claims (14)

  1. 一种通话方法,应用于第一通话端,其特征在于,所述第一通话端包括语音助手;所述方法包括:
    在所述语音助手开启的情况下,通过所述语音助手获取目标通话端的第一信息;
    通过所述语音助手将所述第一信息转化为第二信息;
    输出所述第二信息;
    其中,在所述目标通话端为所述第一通话端的情况下,所述第一信息为文本信息,所述第二信息为语音信息;和/或,在所述目标通话端为与所述第一通话端通话的第二通话端的情况下,所述第一信息为语音信息,所述第二信息为文本信息。
  2. 根据权利要求1所述的方法,其特征在于,所述第二信息为文本信息;
    所述输出所述第二信息,包括:
    在屏幕上显示所述第二信息;
    所述输出所述第二信息之后,所述方法还包括:
    在接收到第一输入的情况下,保存所述屏幕上显示的文本信息。
  3. 根据权利要求2所述的方法,其特征在于,所述在屏幕上显示所述第二信息,包括:
    在所述屏幕上分屏显示通话页面和文本页面,所述文本页面用于显示所述第二信息。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述第一通话端还包括:
    容纳腔,所述容纳腔由隔音材料制成;
    第一听筒,设于所述容纳腔外,用于输出所述第二通话端的语音信息;
    第一话筒,设于所述容纳腔外,用于采集所述第一通话端的通话环境的语音信息;
    第二听筒,设于所述容纳腔内,用于输出所述第二通话端的语音信息;
    第二话筒，设于所述容纳腔内，与所述语音助手电连接，用于获取所述第二听筒输出的语音信息，并将所述语音信息传输至所述语音助手。
  5. 根据权利要求4所述的方法,其特征在于,所述第二信息为文本信息;所述方法还包括以下至少一项:
    在目标参数值大于阈值的情况下,控制所述第一话筒处于关闭状态;
    在所述目标参数值小于或等于所述阈值的情况下,控制所述第一话筒处于开启状态;
    其中,所述目标参数值用于表征所述第一通话端的通话环境的优劣程度。
  6. 根据权利要求5所述的方法,其特征在于,所述目标参数值包括所述第一听筒输出的音量值。
  7. 一种终端设备,所述终端设备为第一通话端,其特征在于,所述终端设备包括语音助手;所述终端设备包括:
    获取模块,用于在所述语音助手开启的情况下,通过所述语音助手获取目标通话端的第一信息;
    转化模块,用于通过所述语音助手将所述第一信息转化为第二信息;
    输出模块,用于输出所述第二信息;
    其中,在所述目标通话端为所述第一通话端的情况下,所述第一信息为文本信息,所述第二信息为语音信息;和/或,在所述目标通话端为与所述第一通话端通话的第二通话端的情况下,所述第一信息为语音信息,所述第二信息为文本信息。
  8. 根据权利要求7所述的终端设备,其特征在于,所述第二信息为文本信息;
    所述输出模块,具体用于:
    在屏幕上显示所述第二信息;
    所述终端设备还包括:
    保存模块,用于在所述输出模块输出所述第二信息之后,在接收到第一输入的情况下,保存所述屏幕上显示的文本信息。
  9. 根据权利要求8所述的终端设备,其特征在于,所述输出模块,具体用于:
    在所述屏幕上分屏显示通话页面和文本页面，所述文本页面用于显示所述第二信息。
  10. 根据权利要求7至9中任一项所述的终端设备,其特征在于,所述终端设备还包括:
    容纳腔,所述容纳腔由隔音材料制成;
    第一听筒,设于所述容纳腔外,用于输出所述第二通话端的语音信息;
    第一话筒,设于所述容纳腔外,用于采集所述第一通话端的通话环境的语音信息;
    第二听筒,设于所述容纳腔内,用于输出所述第二通话端的语音信息;
    第二话筒,设于所述容纳腔内,与所述语音助手电连接,用于获取所述第二听筒输出的语音信息,并将所述语音信息传输至所述语音助手。
  11. 根据权利要求10所述的终端设备,其特征在于,所述第二信息为文本信息;所述终端设备还包括控制模块,用于执行以下至少一项:
    在目标参数值大于阈值的情况下,控制所述第一话筒处于关闭状态;
    在所述目标参数值小于或等于所述阈值的情况下,控制所述第一话筒处于开启状态;
    其中,所述目标参数值用于表征所述第一通话端的通话环境的优劣程度。
  12. 根据权利要求11所述的终端设备,其特征在于,所述目标参数值包括所述第一听筒输出的音量值。
  13. 一种终端设备,所述终端设备为第一通话端,其特征在于,包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至6中任一项所述的通话方法的步骤。
  14. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至6中任一项所述的通话方法的步骤。
PCT/CN2020/129662 2019-11-22 2020-11-18 通话方法及终端设备 WO2021098708A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911155039.XA CN110913070B (zh) 2019-11-22 2019-11-22 一种通话方法及终端设备
CN201911155039.X 2019-11-22

Publications (1)

Publication Number Publication Date
WO2021098708A1 true WO2021098708A1 (zh) 2021-05-27

Family

ID=69818857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129662 WO2021098708A1 (zh) 2019-11-22 2020-11-18 通话方法及终端设备

Country Status (2)

Country Link
CN (1) CN110913070B (zh)
WO (1) WO2021098708A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660375A (zh) * 2021-08-11 2021-11-16 维沃移动通信有限公司 通话方法、装置及电子设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110913070B (zh) * 2019-11-22 2021-11-23 维沃移动通信有限公司 一种通话方法及终端设备

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510917A (zh) * 2009-03-11 2009-08-19 宇龙计算机通信科技(深圳)有限公司 一种移动终端无声通话的方法及移动终端
CN101610465A (zh) * 2008-06-18 2009-12-23 朗讯科技公司 用于将文本信息转换为语音信息的通信方法及通信系统
CN103973877A (zh) * 2013-02-06 2014-08-06 北京壹人壹本信息科技有限公司 一种在移动终端中利用文字实现实时通话的方法和装置
CN104285428A (zh) * 2012-05-08 2015-01-14 三星电子株式会社 用于运行通信服务的方法和系统
CN104869225A (zh) * 2014-02-21 2015-08-26 宏达国际电子股份有限公司 智能对话方法和使用所述方法的电子装置
KR101609585B1 (ko) * 2014-11-28 2016-04-06 박지선 청각 장애인용 이동 통신 단말기
CN106131288A (zh) * 2016-08-25 2016-11-16 深圳市金立通信设备有限公司 一种通话信息的记录方法及终端
CN106412259A (zh) * 2016-09-14 2017-02-15 广东欧珀移动通信有限公司 移动终端通话控制方法、装置及移动终端
CN108769363A (zh) * 2018-04-13 2018-11-06 珠海市魅族科技有限公司 通话方法及装置、计算机装置和计算机可读存储介质
CN110913070A (zh) * 2019-11-22 2020-03-24 维沃移动通信有限公司 一种通话方法及终端设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110051385A (ko) * 2009-11-10 2011-05-18 삼성전자주식회사 통신 단말기 및 그의 통신 방법
CN102710539A (zh) * 2012-05-02 2012-10-03 中兴通讯股份有限公司 语音信息传送方法及装置
CN105847580A (zh) * 2016-05-04 2016-08-10 浙江吉利控股集团有限公司 一种可实现第三方来电语音提醒的系统及方法
CN107103899B (zh) * 2017-04-24 2020-06-19 北京小米移动软件有限公司 输出语音消息的方法和装置
CN109036404A (zh) * 2018-07-18 2018-12-18 北京小米移动软件有限公司 语音交互方法及装置


Also Published As

Publication number Publication date
CN110913070B (zh) 2021-11-23
CN110913070A (zh) 2020-03-24

Similar Documents

Publication Publication Date Title
WO2021098678A1 (zh) 投屏控制方法及电子设备
US11848773B2 (en) Transmit antenna switching method and terminal device
WO2021078116A1 (zh) 视频处理方法及电子设备
WO2020238635A1 (zh) 移动终端及出音口的切换方法
WO2021109907A1 (zh) 应用分享方法、第一电子设备及计算机可读存储介质
US11635939B2 (en) Prompting method and mobile terminal
US20200257433A1 (en) Display method and mobile terminal
WO2019201271A1 (zh) 通话处理方法及移动终端
WO2021129529A1 (zh) 设备切换方法及相关设备
CN111638779A (zh) 音频播放控制方法、装置、电子设备及可读存储介质
WO2021063249A1 (zh) 电子设备的控制方法及电子设备
CN108551534B (zh) 多终端语音通话的方法及装置
WO2021098633A1 (zh) 信息显示、发送方法及电子设备
WO2020220990A1 (zh) 受话器控制方法及终端
WO2020199986A1 (zh) 视频通话方法及终端设备
WO2019206077A1 (zh) 视频通话处理方法及移动终端
WO2021190545A1 (zh) 通话处理方法及电子设备
WO2021109959A1 (zh) 应用程序分享方法及电子设备
WO2021129835A1 (zh) 音量控制方法、设备及计算机可读存储介质
CN108712566A (zh) 一种语音助手唤醒方法及移动终端
WO2021238844A1 (zh) 音频输出方法及电子设备
WO2021098698A1 (zh) 音频播放方法及终端设备
WO2021098708A1 (zh) 通话方法及终端设备
WO2021197311A1 (zh) 音量调节显示方法及电子设备
WO2021169869A1 (zh) 音频播放装置、音频播放方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20890599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20890599

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.03.2023)
