WO2020249062A1 - Voice communication method and related apparatus - Google Patents

Voice communication method and related apparatus

Info

Publication number
WO2020249062A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
call
voice
user
multiple terminals
Prior art date
Application number
PCT/CN2020/095751
Other languages
English (en)
French (fr)
Inventor
李进 (Li Jin)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020249062A1 publication Critical patent/WO2020249062A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42187 Lines and connections with preferential service
    • H04M3/42348 Location-based services which utilize the location information of a target
    • H04M3/42357 Location-based services which utilize the location information of a target, where the information is provided to a monitoring entity such as a potential calling party or a call processing server
    • H04M3/42365 Presence services providing information on the willingness to communicate or the ability to communicate in terms of media capability or network connectivity
    • H04M3/42374 Presence services providing information on the willingness to communicate or the ability to communicate in terms of media capability or network connectivity, where the information is provided to a monitoring entity such as a potential calling party or a call processing server
    • H04M3/54 Arrangements for diverting calls for one subscriber to another predetermined subscriber

Definitions

  • This application relates to the field of communication technology, and in particular to a voice communication method and related devices.
  • In a home LAN, smart devices with a call function each handle their calls independently.
  • the voice call received on the personal computer can only be answered on the personal computer
  • the call received on the mobile phone can only be answered on the mobile phone.
  • the user may not be able to notice the device's incoming call reminder (ring or vibration), which may cause the user to miss incoming calls and affect the user experience.
  • the present application provides a voice communication method and related devices, which realize the transfer of incoming calls and voice calls between call devices in the home, prevent users from missing incoming calls on those devices, and improve the user experience.
  • the present application provides a voice communication method, including: first, the first terminal receives a voice call. Then, when the first terminal determines that the voice call has not been answered within a preset time, or that the first terminal is already in a call, the first terminal obtains the user locations reported by multiple terminals. Wherein, each of the multiple terminals is different from the first terminal. The first terminal determines the second terminal from the multiple terminals according to the user locations reported by them; among the multiple terminals, the second terminal is closest to the user. Finally, the first terminal transfers the voice call to the second terminal for answering.
  • This is a voice communication method in which, after a terminal receives a voice call, if the terminal is occupied or the call is not answered within a preset time, the terminal can determine the optimal answering terminal from other call-capable terminals according to the user location reported by each of them, and transfer the voice call to that answering terminal. In this way, by transferring the voice call between the call terminals in the home, the user is prevented from missing the voice call, and the user experience is improved.
  • the foregoing acquiring, by the first terminal, of the user locations reported by multiple terminals specifically includes: the first terminal acquires the user's voiceprint energy values reported by the multiple terminals. A higher voiceprint energy value indicates that the terminal reporting it is closer to the user.
  • the first terminal determines the second terminal from the multiple terminals according to the voiceprint energy values reported by them; wherein, among the multiple terminals, the voiceprint energy value of the second terminal is the highest. In this way, the first terminal can identify the terminal closest to the user through the user's voiceprint energy values, and transfer the voice call to that terminal, preventing the user from missing the voice call.
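The selection rule above reduces to an arg-max over the reported voiceprint energy values. The following Python sketch is illustrative only; the terminal names, the report format, and the function name are assumptions for clarity, not part of the publication.

```python
# Hypothetical sketch: among the candidate terminals (all different from
# the first terminal), pick the one whose reported voiceprint energy
# value is highest, i.e. the terminal closest to the user.

def pick_second_terminal(reports: dict) -> str:
    """reports maps terminal id -> reported voiceprint energy value."""
    if not reports:
        raise ValueError("no terminals reported a voiceprint energy value")
    # A higher energy value indicates the reporting terminal is closer
    # to the user, so the second terminal is simply the arg-max.
    return max(reports, key=reports.get)

# Example: the smart TV heard the user loudest, so it is chosen.
reports = {"smart_tv": 0.82, "speaker": 0.45, "tablet": 0.10}
assert pick_second_terminal(reports) == "smart_tv"
```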
  • the method further includes: first, the first terminal obtains the call frequencies reported by the multiple terminals.
  • the call frequency of a terminal is the ratio of that terminal's number of calls to the total number of calls of all terminals connected to the router. Then, when multiple terminals are equally close to the user, the first terminal determines the second terminal from those terminals according to their call frequencies. Among them, the second terminal has the highest call frequency among the terminals closest to the user. In this way, when multiple terminals are equally close to the user, the terminal the user calls with most frequently can be selected as the second terminal, which improves the user experience.
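The call-frequency ratio defined above can be computed directly. This sketch is illustrative; the function and terminal names are assumptions, and the zero-total guard is an added safety check not discussed in the publication.

```python
# Each terminal's call frequency is its own call count divided by the
# total call count of all terminals connected to the router.

def call_frequencies(call_counts: dict) -> dict:
    """call_counts maps terminal id -> number of calls on that terminal."""
    total = sum(call_counts.values())
    if total == 0:
        # No calls yet anywhere on the router: every frequency is zero.
        return {t: 0.0 for t in call_counts}
    return {t: n / total for t, n in call_counts.items()}

counts = {"phone": 6, "smart_tv": 3, "speaker": 1}
freqs = call_frequencies(counts)
assert freqs["phone"] == 0.6
assert abs(sum(freqs.values()) - 1.0) < 1e-9  # ratios sum to one
```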
  • the method further includes: first, the first terminal obtains the voice capability priorities reported by the multiple terminals.
  • the voice capability priority is determined by the device type of the terminal.
  • the first terminal determines the second terminal, according to the voice capability priorities, from the terminals that are closest to the user and have the highest call frequency.
  • the voice capability priority of the second terminal is the highest. In this way, when multiple terminals are equally close to the user, the terminal with the highest voice capability priority can be further selected according to the voice capability priority of each terminal, which improves the user experience.
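The three criteria above form a cascade of tie-breakers: closest user location first, then call frequency, then voice capability priority. A lexicographic tuple comparison encodes this directly. All field names and values below are illustrative assumptions, not part of the publication.

```python
# Sketch of the three-stage choice: voiceprint energy (closeness) first,
# call frequency second, voice capability priority third. Python's max()
# over tuples compares the fields in exactly this order.

from dataclasses import dataclass

@dataclass
class Report:
    terminal: str
    voiceprint_energy: float   # higher = closer to the user
    call_frequency: float      # higher = more often used for calls
    voice_priority: int        # higher = better device type for calls

def pick_second_terminal(reports: list) -> str:
    best = max(reports, key=lambda r: (r.voiceprint_energy,
                                       r.call_frequency,
                                       r.voice_priority))
    return best.terminal

reports = [
    Report("speaker",  0.8, 0.1, 3),
    Report("smart_tv", 0.8, 0.1, 5),   # ties on energy and frequency
    Report("tablet",   0.4, 0.7, 2),
]
# The tie between speaker and smart_tv is broken by voice priority.
assert pick_second_terminal(reports) == "smart_tv"
```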
  • the transferring, by the first terminal, of the voice call to the second terminal for answering specifically includes: the first terminal receives the contact's voice data sent by the contact's terminal, and receives the user's voice data sent by the second terminal.
  • the first terminal sends the voice data of the contact to the second terminal, and sends the voice data of the user to the terminal of the contact.
  • before the first terminal receives the voice data of the contact sent by the terminal of the contact and receives the voice data of the user sent by the second terminal, the method further includes: first, the first terminal sends an incoming call instruction to the second terminal. Wherein, the incoming call instruction is used to instruct the second terminal to output an incoming call reminder. Then, the first terminal receives the answer confirmation sent by the second terminal. In response to the answer confirmation, the first terminal receives the voice data of the contact sent by the terminal of the contact, and receives the voice data of the user sent by the second terminal.
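The handshake and relay described above can be sketched as a small message flow: instruct, confirm, then forward voice data both ways. This single-process Python sketch uses in-memory queues to stand in for network connections; the message names are assumptions for illustration.

```python
# The first terminal acts as a relay: it instructs the second terminal
# to ring, waits for the answer confirmation, then forwards voice data
# contact -> second terminal and user -> contact's terminal.

from collections import deque

to_second, to_contact = deque(), deque()   # outboxes of the first terminal

# 1. The first terminal sends the incoming-call instruction; on receipt
#    the second terminal would output a ring/vibration reminder.
to_second.append(("incoming_call",))
assert to_second.popleft() == ("incoming_call",)

# 2. The second terminal replies with an answer confirmation
#    (simulated here as a flag the first terminal checks).
answered = True

# 3. Once answered, the first terminal relays voice data both ways.
if answered:
    to_second.append(("voice", "frame-from-contact"))
    to_contact.append(("voice", "frame-from-user"))

assert to_second.popleft() == ("voice", "frame-from-contact")
assert to_contact.popleft() == ("voice", "frame-from-user")
```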
  • before the first terminal transfers the voice call to the second terminal for answering, the method further includes: establishing a connection between the first terminal and the second terminal.
  • the first terminal may also periodically obtain the user locations reported by each of the multiple terminals other than the second terminal, and determine a third terminal from them.
  • the third terminal is the terminal closest to the user among the above-mentioned multiple terminals other than the second terminal. After the third terminal is determined, the first terminal can transfer the voice call from the second terminal to the third terminal.
  • the first terminal may also receive the user's transfer operation. After receiving the user's transfer operation, the first terminal may obtain the user location reported by each terminal except the second terminal among the above-mentioned multiple terminals, and determine the third terminal. Wherein, the third terminal is the terminal closest to the user among the above-mentioned multiple terminals except the second terminal. After the third terminal is determined, the first terminal can transfer the voice call to the third terminal instead of the second terminal.
  • after the first terminal transfers the incoming call of the contact to the second terminal, if the first terminal detects that the second terminal has output an incoming call reminder but the call has not been answered within a preset time, the first terminal can determine a third terminal from the terminals other than the second terminal among the aforementioned multiple terminals, according to the user locations they report. The third terminal is the terminal closest to the user among those other terminals. After the third terminal is determined, the first terminal can transfer the incoming call of the contact from the second terminal to the third terminal.
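The fallback above is the same selection re-run with the timed-out terminal excluded. A minimal sketch, assuming voiceprint energy as the location signal and illustrative terminal names:

```python
# If the chosen second terminal rings but is not answered in time,
# exclude it and pick the next-closest terminal (the third terminal).

from typing import Optional

def pick_next_terminal(reports: dict, exclude: set) -> Optional[str]:
    """reports maps terminal id -> voiceprint energy; exclude holds
    terminals that already timed out. Returns None if none remain."""
    remaining = {t: e for t, e in reports.items() if t not in exclude}
    return max(remaining, key=remaining.get) if remaining else None

reports = {"smart_tv": 0.8, "speaker": 0.5, "tablet": 0.2}
second = pick_next_terminal(reports, exclude=set())
assert second == "smart_tv"
# smart_tv rang but was not answered before the timeout -> retry.
third = pick_next_terminal(reports, exclude={second})
assert third == "speaker"
```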
  • after the first terminal receives an incoming voice call, it may first determine whether its call forwarding function is enabled. If it is enabled, then when the voice call is not answered within a preset time or the first terminal is in a call, the first terminal obtains the user locations reported by multiple terminals. If the call forwarding function is turned off, the first terminal outputs an incoming call reminder and waits to receive the user's answer operation.
  • this application provides another voice communication method, including: first, the first terminal receives a voice call. Then, when the first terminal determines that the voice call has not been answered within a preset time, or the first terminal is in a call, the first terminal obtains the user location, call frequency, voice capability priority, and device status reported by each of multiple terminals. Wherein, each of the foregoing multiple terminals is different from the first terminal. Then, the first terminal determines the second terminal from the multiple terminals according to the reported user locations, call frequencies, voice capability priorities, and device statuses. Finally, the first terminal transfers the voice call to the second terminal for answering.
  • This application proposes a voice communication method in which, after a terminal receives a voice call, if the terminal is occupied or the call is not answered within a preset time, the terminal can determine the optimal answering terminal from other call-capable terminals according to the user location, voice capability priority, call frequency, and device status reported by each of them, and transfer the incoming call and voice call to that answering terminal. In this way, by transferring incoming calls and voice calls between the call terminals in the home, the user is prevented from missing voice calls, and the user experience is improved.
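This aspect combines four signals: user location, call frequency, voice capability priority, and device status. The publication does not fix a combination formula, so the sketch below uses an assumed weighted score over normalized inputs; the weights, field names, and terminal names are all illustrative.

```python
# Assumed realization: score each candidate terminal by a weighted sum
# of its normalized reports and pick the highest scorer. "device_free"
# stands in for device status: 1.0 when idle, 0.0 when busy.

def score(report: dict, weights: dict) -> float:
    return sum(weights[k] * report[k] for k in weights)

weights = {"voiceprint_energy": 0.5, "call_frequency": 0.2,
           "voice_priority": 0.2, "device_free": 0.1}

reports = {
    "smart_tv": {"voiceprint_energy": 0.9, "call_frequency": 0.3,
                 "voice_priority": 0.8, "device_free": 1.0},
    "speaker":  {"voiceprint_energy": 0.9, "call_frequency": 0.2,
                 "voice_priority": 0.6, "device_free": 0.0},
}
best = max(reports, key=lambda t: score(reports[t], weights))
# The busy speaker loses despite being equally close to the user.
assert best == "smart_tv"
```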
  • the present application provides another voice communication method, including: First, when the first terminal is in a voice call with the terminal of the contact, the first terminal receives the call transfer operation of the user. Then, in response to the call transfer operation, the first terminal obtains the user locations reported by multiple terminals. Wherein, any one of the foregoing multiple terminals is different from the first terminal. Then, the first terminal determines the second terminal from the multiple terminals according to the user locations reported by the multiple terminals; wherein, among the multiple terminals, the second terminal is the closest to the user. Finally, the first terminal transfers the voice call to the second terminal.
  • This application provides a voice communication method.
  • When a user is talking with a contact through a first terminal (such as a smartphone), the first terminal can determine a second terminal (such as a smart TV) for the voice call based on the user locations reported by other terminals, and transfer the voice call to the second terminal, so that the call quality between the user and the contact is maintained while the user moves around indoors.
  • the present application provides a terminal including one or more processors, one or more memories, and transceivers.
  • the one or more memories and the transceiver are coupled to the one or more processors, and the one or more memories are used to store computer program code.
  • the computer program codes include computer instructions.
  • when the one or more processors execute the computer instructions, the terminal executes the voice communication method in any possible implementation of any one of the foregoing aspects.
  • an embodiment of the present application provides a computer storage medium, including computer instructions, which when the computer instructions run on a terminal, cause the terminal to execute the voice communication method in any one of the possible implementations of any of the foregoing aspects.
  • the embodiments of the present application provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute the voice communication method in any one of the possible implementations of any of the foregoing aspects.
  • this application provides another voice communication method, including: first, the hub device receives a call transfer request sent by the first terminal. Then, in response to the call transfer request, the hub device obtains the user locations reported by multiple terminals. Wherein, each of the multiple terminals is different from the first terminal, and the first terminal and the multiple terminals are all connected to the hub device. Then, the hub device determines the second terminal from the multiple terminals according to the user locations reported by them; among the multiple terminals, the second terminal is closest to the user. Finally, the hub device sends an incoming call notification to the second terminal, and the incoming call notification is used to cause the second terminal to output an incoming call reminder.
  • This is a voice communication method in which, after the terminal receives a voice call, if the terminal is occupied or the call is not answered within a preset time, the hub device can determine the optimal answering terminal from the other call-capable terminals connected to it, according to the user location reported by each of them, and transfer the voice call to that answering terminal. In this way, by transferring the voice call between the call terminals in the home, the user is prevented from missing the voice call, and the user experience is improved.
  • the method further includes: the hub device receives the answering instruction sent by the second terminal.
  • the hub device receives the contact's voice data sent by the contact's terminal, and receives the user's voice data sent by the second terminal.
  • the hub device sends the voice data of the contact to the second terminal, and sends the voice data of the user to the terminal of the contact.
  • this application provides another voice communication method, including: first, the server receives a call transfer request sent by the first terminal. Then, in response to the call transfer request, the server obtains the user locations reported by multiple terminals. Wherein, each of the multiple terminals is different from the first terminal. Then, the server determines the second terminal from the multiple terminals according to the user locations reported by them; among the multiple terminals, the second terminal is closest to the user. Finally, the server sends an incoming call notification to the second terminal, and the incoming call notification is used to cause the second terminal to output an incoming call reminder.
  • This is a voice communication method in which, after the terminal receives a voice call, if the terminal is occupied or the call is not answered within a preset time, the server can determine the optimal answering terminal from the other terminals connected to it, according to the user location reported by each of them, and transfer the voice call to that answering terminal. In this way, by transferring the voice call between the call terminals, the user is prevented from missing the voice call, and the user experience is improved.
  • the method further includes: the server receives the answering instruction sent by the second terminal.
  • the server receives the voice data of the contact sent by the terminal of the contact, and receives the voice data of the user sent by the second terminal.
  • the server sends the voice data of the contact to the second terminal, and sends the voice data of the user to the terminal of the contact.
  • FIG. 1 is a schematic structural diagram of a terminal provided by an embodiment of the application.
  • Figure 2 is a schematic diagram of a network architecture provided by an embodiment of the application.
  • FIG. 3 is a schematic flowchart of a voice communication method provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of a voice communication scenario provided by an embodiment of this application.
  • FIG. 5 is a schematic diagram of a voice communication scenario provided by another embodiment of this application.
  • FIG. 6 is a schematic diagram of a voice communication scenario provided by another embodiment of this application.
  • FIG. 7 is a schematic flowchart of a voice communication method provided by another embodiment of this application.
  • FIGS. 8A-8C are schematic diagrams of scenarios of a voice communication method provided by another embodiment of this application.
  • FIGS. 9A-9C are schematic diagrams of scenarios of a voice communication method provided by another embodiment of this application.
  • FIG. 10 is a schematic flowchart of a voice communication method provided by another embodiment of this application.
  • FIG. 11 is a schematic diagram of a network architecture provided by another embodiment of this application.
  • FIG. 12 is a schematic flowchart of a voice communication method provided by another embodiment of this application.
  • The terms “first” and “second” are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the number of indicated technical features. Therefore, a feature defined with “first” or “second” may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, “multiple” means two or more.
  • FIG. 1 shows a schematic structural diagram of a terminal 100.
  • It should be understood that the terminal 100 shown in FIG. 1 is only an example; the terminal 100 may have more or fewer components than those shown in FIG. 1, may combine two or more components, or may have a different component configuration.
  • the various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the terminal 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the terminal 100.
  • the terminal 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the terminal 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can retrieve them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the terminal 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the terminal 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the terminal 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the terminal 100, and can also be used to transfer data between the terminal 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely illustrative, and does not constitute a structural limitation of the terminal 100.
  • the terminal 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the terminal 100. While the charging management module 140 charges the battery 142, the power management module 141 can also supply power to the electronic device.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the terminal 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the terminal 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the terminal 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the terminal 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the terminal 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the terminal 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the terminal 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the terminal 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened and light is transmitted through the lens to the photosensitive element of the camera, which converts the optical signal into an electrical signal and transfers the electrical signal to the ISP to be processed into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the terminal 100 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the terminal 100 may support one or more video codecs.
  • the terminal 100 can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • through the NPU, applications such as intelligent cognition of the terminal 100 can be implemented, for example, image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the terminal 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system, at least one application program (such as sound playback function, image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the terminal 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the terminal 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the terminal 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the terminal 100 answers a call or plays a voice message, the user can listen to the voice by bringing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • when making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the terminal 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the terminal 100 determines the intensity of the pressure according to the change in capacitance.
  • the terminal 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the terminal 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
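The threshold-based dispatch just described can be sketched as follows. This is a minimal illustration only: the threshold value, the normalized pressure scale, and the instruction names are assumptions, not taken from the application.

```python
# Illustrative sketch of pressure-based touch dispatch.
# FIRST_PRESSURE_THRESHOLD and the instruction names are hypothetical.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure scale 0..1

def dispatch_touch_on_sms_icon(pressure: float) -> str:
    """Map the intensity of a touch on the short-message icon to an instruction."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    # intensity >= first pressure threshold: "deep press" creates a new message
    return "create_new_short_message"
```

A light tap (low intensity) opens the message, while a harder press at the same position triggers composing a new one.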
  • the gyro sensor 180B may be used to determine the movement posture of the terminal 100.
  • the angular velocity of the terminal 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the terminal 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the terminal 100 through a reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the terminal 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the terminal 100 may use the magnetic sensor 180D to detect the opening and closing of a flip holster.
  • when the terminal 100 is a flip phone, the terminal 100 can detect the opening and closing of the flip according to the magnetic sensor 180D.
  • based on the detected opening/closing state of the holster or the flip, features such as automatic unlocking of the flip cover can be set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal 100 in various directions (generally three axes). When the terminal 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers and so on.
  • the distance sensor 180F is used to measure distance; the terminal 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the terminal 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the terminal 100 emits infrared light to the outside through the light emitting diode.
  • the terminal 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the terminal 100. When insufficient reflected light is detected, the terminal 100 may determine that there is no object near the terminal 100.
  • the terminal 100 can use the proximity light sensor 180G to detect that the user holds the terminal 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the terminal 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the terminal 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the terminal 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • in some other embodiments, when the temperature is lower than another threshold, the terminal 100 heats the battery 142 to avoid abnormal shutdown of the terminal 100 caused by low temperature.
  • in some other embodiments, when the temperature is lower than still another threshold, the terminal 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
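The three-tier temperature strategy can be illustrated with a small sketch. All threshold values and action names here are hypothetical; the application only describes the general policy, not concrete numbers.

```python
# Illustrative thermal policy; every threshold value is an assumption.
HIGH_TEMP_C = 45.0       # hypothetical "exceeds a threshold" value
LOW_TEMP_C = 0.0         # hypothetical "lower than another threshold" value
VERY_LOW_TEMP_C = -10.0  # hypothetical "lower than still another threshold"

def thermal_policy(temp_c: float) -> list:
    """Return the actions taken for a temperature reported by sensor 180J."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        # thermal protection: throttle the processor near the sensor
        actions.append("reduce_nearby_processor_performance")
    if temp_c < LOW_TEMP_C:
        # avoid low-temperature shutdown by warming the battery
        actions.append("heat_battery_142")
    if temp_c < VERY_LOW_TEMP_C:
        # at still lower temperature, also boost the battery output voltage
        actions.append("boost_battery_output_voltage")
    return actions
```

Note that the two low-temperature actions stack: at a very low temperature the terminal both heats the battery and boosts its output voltage.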
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the terminal 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone block of the human vocal part.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the terminal 100 may receive key input, and generate key signal input related to user settings and function control of the terminal 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations that act on different applications can correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
  • different application scenarios (for example, time reminder, receiving information, alarm clock, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the terminal 100.
  • the terminal 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the terminal 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the terminal 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
  • when it is inconvenient for the user to answer a call on the mobile terminal, the user can transfer the call in the following ways:
  • 1. The user turns on the call transfer function of the mobile terminal in advance and fills in the phone number of the answering terminal.
  • the mobile terminal can send the phone number of the answering terminal to the network side device of the mobile communication network.
  • the network side device can bind the number of the mobile terminal with the phone number of the answering terminal.
  • when the network side device receives a call request for the mobile terminal, the network side device can transfer the call request to the answering terminal, so that the answering terminal can answer the call directed to the mobile terminal.
  • once the user turns on the call forwarding function of the mobile terminal, all calls made to the mobile terminal will be transferred to the answering terminal. If the user wants to resume answering calls on the mobile terminal but forgets to turn off the call forwarding function, many calls will be missed, causing inconvenience to the user.
  • 2. The user can connect the mobile terminal to a headset or speaker via Bluetooth or Wi-Fi technology.
  • when the mobile terminal receives an incoming call, the mobile terminal will by default transfer the incoming call and voice call to the headset or speaker, and use the headset or speaker to collect the user's voice.
  • if the user wants to switch the voice call back to the mobile terminal, the mobile terminal needs to receive the user's selection input first. This makes the operation steps complicated, and the user may miss part of the call content while switching the voice call, causing inconvenience to the user.
  • this application proposes a voice communication method: after a terminal receives a voice call, if the terminal is occupied or the call is not answered within a time limit, the terminal can select the optimal answering terminal from other call-capable terminals in the home LAN according to each terminal's voice capability parameters (such as voice capability priority m, call frequency n, user voice energy value x, user location value y, device status value s, etc.), and transfer the incoming call and the voice call to that answering terminal. In this way, by transferring the incoming call and the voice call between the call terminals in the home, the user is prevented from missing the voice call on a call terminal, and the user experience is improved.
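Selecting the optimal answering terminal from these voice capability parameters could be sketched as below. This passage of the application does not fix the combination function, so the weighted-sum score and the weight values are assumptions made purely to illustrate the selection step:

```python
# Hypothetical selection sketch: the weighted-sum form and the weights are
# assumptions; the application only names the parameters m, n, x, y, s.
WEIGHTS = {"m": 0.3, "n": 0.2, "x": 0.2, "y": 0.2, "s": 0.1}

def score(params):
    """Combine one terminal's voice capability parameters into a single score."""
    return sum(WEIGHTS[k] * params[k] for k in WEIGHTS)

def pick_answering_terminal(candidates):
    """candidates: terminal name -> dict of m, n, x, y, s (assumed in [0, 1])."""
    return max(candidates, key=lambda name: score(candidates[name]))

candidates = {  # hypothetical parameter values for two LAN terminals
    "smart_tv":      {"m": 0.5, "n": 0.1, "x": 0.9, "y": 0.9, "s": 1.0},
    "smart_speaker": {"m": 0.6, "n": 0.2, "x": 0.4, "y": 0.3, "s": 1.0},
}
```

Here the smart TV wins because the user's voice energy x and location value y dominate despite its lower voice capability priority m; any real implementation would choose its own combination rule.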
  • FIG. 2 shows a schematic diagram of a network architecture 200 provided in an embodiment of the present application.
  • the network architecture 200 includes multiple terminals.
  • the terminal may include a smart phone 201, a smart watch 202, a smart speaker 203, a personal computer 204, a smart TV 205, a tablet computer 206, etc., which are not limited in this application.
  • the multiple terminals can all have the call capability, and the multiple terminals can receive incoming calls and calls in the following ways: 1. The multiple terminals can all receive incoming calls or calls from the circuit switched (CS) domain of the mobile communication network. 2. The multiple terminals can receive incoming calls or calls based on VoLTE technology from the IP multimedia subsystem (IMS) of the mobile communication network. 3. The multiple terminals can receive incoming calls or calls based on VoIP technology from the Internet.
  • the multiple terminals can all be connected to a local area network (LAN) through a wired connection or a wireless fidelity (Wi-Fi) connection.
  • the local area network of the network architecture 200 also includes a hub device 207, which can connect and interact with multiple terminals in the network architecture 200 (for example, terminal 201, terminal 202, terminal 203, terminal 204, terminal 205, and terminal 206).
  • the hub device 207 may be a router, a gateway, a smart device controller, etc.
  • the hub device 207 may include a memory, a processor, and a transceiver.
  • the memory may be used to store the respective voice capability parameters of the multiple terminals (for example, voice capability priority m, call frequency n, voiceprint energy value x, user location Value y, device status value s, etc.).
  • the processor can be used to determine the answering terminal from the respective voice capability parameters of multiple terminals when a terminal connected to the local area network needs to transfer an incoming call.
  • the transceiver can be used to communicate with the multiple terminals connected to the local area network.
  • the network architecture 200 further includes a server 208, where the server 208 may be a server in a smart home cloud network, the number of which is not limited to one, but may be multiple, which is not limited herein.
  • the server 208 may include a memory, a processor, and a transceiver.
  • the memory may be used to store the respective voice capability parameters of the multiple terminals (for example, voice capability priority m, call frequency n, voiceprint energy value x, User location value y, device status value s, etc.).
  • the transceiver can be used to communicate with various terminals in the local area network.
  • the processor can be used to process the data acquisition request of each terminal in the local area network, and instruct the transceiver to issue the respective voice capability parameters of the multiple terminals to each terminal.
  • the terminal receiving the voice call may be referred to as the first terminal, and the answering terminal may be referred to as the second terminal.
  • the embodiment of the present application provides a voice communication method.
  • after terminal 1 (for example, a smart phone) receives an incoming call, when terminal 1 has not received the user's answering operation within a time limit, or terminal 1 is already occupied (for example, in a call), terminal 1 can determine the answering terminal of the incoming call (for example, a smart TV) based on the voice capability parameters of other terminals (for example, voice capability priority m, call frequency n, voiceprint energy value x, user location value y, device status value s, etc.), and transfer the incoming call to the answering terminal (such as the smart TV). In this way, the user can be prevented from missing the incoming call on the terminal, and the user experience is improved.
  • the local area network may include N terminals with call capability, and N is an integer greater than 2.
  • a local area network refers to a computer network formed by connecting N call-capable terminals to a router. The multiple call-capable terminals in the local area network may be bound to the same user account.
  • any terminal that receives an incoming call can be referred to as terminal 1.
  • for example, when a smart phone receives an incoming call, the smart phone can be referred to as terminal 1; when a smart TV receives an incoming call, the smart TV can be referred to as terminal 1. This is not limited here.
  • the method includes:
  • Terminal 1 receives the incoming call.
  • the incoming call may refer to a voice call.
  • the contact’s terminal (that is, the terminal that initiates the call) can make a voice call to terminal 1 through the CS domain in the mobile communication network, and the contact’s terminal can also make a voice call based on VoLTE technology to terminal 1 through the IMS network in the mobile communication network.
  • the terminal of the contact can also make a voice call based on the VoIP technology to the terminal 1 via the Internet.
  • Terminal 1 outputs an incoming call reminder.
  • the terminal 1 can output an incoming call reminder after receiving the call from the contact's terminal.
  • the incoming call reminder may include at least one of the following: ringtone reminder, mechanical vibration reminder, and incoming call display reminder (for example, the terminal 1 displays the contact information of the contact on the display screen, etc.).
  • the terminal 1 determines whether the call transfer function is enabled. If so, in S304, the terminal 1 establishes connections with other terminals in the local area network; if not, the terminal 1 does not establish connections with other terminals in the local area network.
  • the terminal 1 may receive the user's setting input before receiving the call from the contact. In response to the user's setting input, the terminal 1 may turn on or turn off the call transfer function. In this way, the terminal 1 can transfer received calls according to the user's needs, which improves the user experience. Understandably, the call transfer function transfers incoming calls or voice calls received by this terminal to other terminals; the other terminals output the incoming call reminder, and/or collect the user's voice and play the caller's (i.e., the contact's) voice.
  • if terminal 1 has turned on the call transfer function and terminal 1 receives an incoming call that needs to be transferred, the incoming call received by terminal 1 is transferred to other terminals, and the other terminals output the incoming call reminder.
  • when it is detected that the user answers the incoming call on another terminal, terminal 1 transfers the voice call to that terminal.
  • if terminal 1 has turned on the call transfer function and terminal 1 is in a voice call that needs to be transferred, terminal 1 transfers the voice call to the other terminal.
  • the terminal 1 can establish connections with other terminals (terminal 2, terminal 3, ..., terminal N) in the local area network.
  • the connection may be a connection based on the TCP/IP protocol. Under the connection based on the TCP/IP protocol, the terminal 1 may transfer incoming calls (including calls and voice calls) to the switching device based on the VoIP technology.
  • the connection can be Wi-Fi Direct.
  • the connection can also be a connection established through a router. If both devices support the Bluetooth function, the connection can also be a Bluetooth connection, and so on. In this way, after the terminal 1 determines that the call needs to be transferred, it can transfer the call to the transfer device (for example, the terminal 2) in time, so that the user does not have to wait too long, reducing the time delay caused by transferring the call.
  • the terminal 1 may only establish a connection with the switching device after determining the answering device. In this way, the terminal 1 only establishes a connection with the switching device, reducing wireless resource consumption.
  • the terminal 1 determines whether the incoming call has not been answered within the time limit, or whether the terminal 1 is occupied. If so, in S306, the terminal 1 obtains the voice capability parameters of other terminals in the local area network.
  • the terminal 1 can first determine whether the terminal 1 is already occupied. If so, the terminal 1 can obtain the voice capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N). If the terminal 1 is not occupied, the terminal 1 can determine whether the user's answering operation has not been received within a specified time threshold (for example, 10 s) after the call is received. If there is no answer beyond the specified time threshold, the terminal 1 obtains the voice capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N).
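The occupied/timeout decision above can be expressed as a small sketch. The 10 s threshold is the example value given in the text; the function and parameter names are invented for illustration.

```python
# Sketch of the transfer decision for terminal 1; names are illustrative.
ANSWER_TIMEOUT_S = 10.0  # example time threshold from the text above

def should_obtain_peer_parameters(occupied, seconds_ringing, answered):
    """True when terminal 1 should fetch the other terminals' voice capability
    parameters: it is already occupied, or the incoming call has rung past the
    specified time threshold without an answering operation."""
    if occupied:
        return True
    return (not answered) and seconds_ringing >= ANSWER_TIMEOUT_S
```

An occupied terminal triggers the transfer path immediately; an idle one only after the unanswered call exceeds the threshold.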
  • the voice capability parameters include: voice capability priority m, call frequency n, voiceprint energy value x, user location value y, and device state value s. Among them:
• The voice capability priority m is used to indicate the call effect of the terminal. For example, the voice capability priority m of terminal 1 may be 1 and the voice capability priority m of terminal 2 may be 0.5, which means that the call effect of terminal 1 is better than the call effect of terminal 2.
  • the voice capability priority of the terminal is determined by the type of terminal. Exemplarily, the corresponding relationship between the terminal type and the terminal voice capability value may be as shown in Table 1 below:
| Terminal type | Voice capability value |
|---|---|
| smart phone | 1 |
| tablet computer | 0.8 |
| smart speaker | 0.6 |
| smart TV | 0.5 |
| smart watch | 0.4 |
| personal computer | 0.3 |
  • the content shown in Table 1 above is only used to explain this application and should not constitute a limitation.
  • the call frequency n is used to indicate how frequently the terminal connects to voice calls. The larger the call frequency n, the more frequently the terminal connects to voice calls.
  • the call frequency n may be the ratio of the number of calls of the terminal to the total number of calls of all terminals in the local area network.
  • any terminal in the local area network can periodically (for example, the cycle can be one week, one month, etc.) send the number of calls to other terminals. After receiving the call times of other terminals, the terminal can determine the total call times of all terminals in the local area network. Exemplarily, assume that there are 6 terminals in the local area network, namely, smart phones, tablet computers, smart speakers, smart TVs, smart watches, and personal computers. The number of calls and the call frequency n of these 6 terminals can be shown in Table 2 below:
| Terminal | Number of calls | Call frequency n |
|---|---|---|
| smart phone | 80 | 0.4 |
| tablet computer | 20 | 0.1 |
| smart speaker | 20 | 0.1 |
| smart TV | 60 | 0.3 |
| smart watch | 10 | 0.05 |
| personal computer | 10 | 0.05 |

• The total number of calls made by these 6 terminals in the local area network is 200. Therefore, the call frequency of the smart phone is 80/200 = 0.4, that of the tablet computer is 0.1, that of the smart speaker is 0.1, that of the smart TV is 0.3, that of the smart watch is 0.05, and that of the personal computer is 0.05.
  • the content shown in Table 2 above is only used to explain this application and should not constitute a limitation.
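The call-frequency computation described above can be sketched as follows. This is a minimal illustration using the counts from Table 2; the function name is hypothetical, not from the application:

```python
def call_frequencies(call_counts):
    """Call frequency n of each terminal: its own number of calls divided by
    the total number of calls of all terminals in the local area network."""
    total = sum(call_counts.values())
    return {terminal: count / total for terminal, count in call_counts.items()}

# Call counts from Table 2 (total = 200).
counts = {"smart phone": 80, "tablet computer": 20, "smart speaker": 20,
          "smart TV": 60, "smart watch": 10, "personal computer": 10}
freqs = call_frequencies(counts)  # e.g. freqs["smart phone"] == 0.4
```

Since each n is a share of the LAN-wide total, the frequencies of all terminals sum to 1.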
  • the voiceprint energy value x is used to indicate the size of the user's voice received by the terminal.
  • the terminal can collect sound in real time through a microphone, and the terminal calculates the user's voiceprint energy value according to the collected sound.
• For example, assume there are 6 terminals in the local area network, namely a smart phone, a tablet computer, a smart speaker, a smart TV, a smart watch, and a personal computer. The voiceprint energy value x collected by these 6 terminals can be shown in Table 3 below:

| Terminal | Voiceprint energy value x (unit: 100 dB) |
|---|---|
| smart phone | 0.23 |
| tablet computer | 0 |
| smart speaker | 0.25 |
| smart TV | 0.55 |
| smart watch | 0.5 |
| personal computer | 0 |
  • the user location value y is used to indicate the location relationship between the terminal and the user. For example, when the user location value y is 1, it means that the user is around the terminal. When the user location value y is 0, it means that the terminal does not detect that the user is around the terminal.
• the terminal can obtain the image around the terminal through a camera, and detect from the image whether the user and the terminal are in the same position in the family floor plan (for example, both in the bedroom). If so, the user location value y of the terminal is 1; if not, the user location value y of the terminal is 0.
  • the camera may be the camera of the terminal itself, or it may be a separate camera connected to the local area network.
  • the camera When the camera is a separate camera connected to the local area network, after the camera obtains the user's location (for example, in the bedroom), the camera may send the user's location (for example, in the bedroom) to the terminals in the local area network.
• If the terminal does not have the ability to obtain the location of the user (for example, the camera is damaged or the terminal is not equipped with a camera), the user location value y of the terminal is 0.5.
  • the smart phone can be in the master bedroom
  • the smart speaker can be in the second bedroom 1
  • the tablet computer can be in the second bedroom 2
  • the smart TV and smart watch can be in the living room.
  • the user location value y collected by these 6 terminals can be shown in Table 4 below:
| Terminal | User location value y |
|---|---|
| smart phone | 0 (the user is not around the smart phone) |
| tablet computer | 0 (the user is not around the tablet computer) |
| smart speaker | 0.5 (the smart speaker cannot obtain the user's location) |
| smart TV | 1 (the user is around the smart TV) |
| smart watch | 1 (the user is around the smart watch) |
| personal computer | 0 (the user is not around the personal computer) |
  • the content shown in Table 4 above is only used to explain this application and should not constitute a limitation.
• The device state value s is used to indicate whether the terminal is currently available for answering voice calls. When the terminal is currently available for answering voice calls, the device state value s is 1. When the terminal is currently occupied (for example, in a call), the device state value s is 0.5. When the terminal is currently in an unavailable state, for example, the terminal has turned off the call transfer function (that is, the terminal does not receive calls transferred from other terminals), the device state value s is 0.
• Among them, the terminal can receive the user's input to turn off the call transfer function, and the terminal can also turn off the call transfer function when a call is in progress, which is not limited here.
  • the device state values s of these 6 terminals can be shown in Table 5 below:
| Terminal | Device state value s |
|---|---|
| smart phone | 1 (currently available for answering voice calls) |
| tablet computer | 0.5 (currently occupied, for example in a call) |
| smart speaker | 1 (currently available for answering voice calls) |
| smart TV | 1 (currently available for answering voice calls) |
| smart watch | 1 (currently available for answering voice calls) |
| personal computer | 0 (currently not available for answering voice calls) |
  • the respective voice capability parameters of other terminals are stored in the local memory of other terminals.
  • the voice capability parameters of terminal 2 are stored in the local memory of terminal 2
  • the voice capability parameters of terminal 3 are stored in the local memory of terminal 3,...
  • the voice capability parameters of terminal N are stored in the local memory of terminal N.
• Terminal 1 can send a voice capability parameter acquisition instruction to the other terminals (terminal 2, terminal 3, ..., terminal N) in the local area network, and after receiving the voice capability parameter acquisition instruction, the other terminals (terminal 2, terminal 3, ..., terminal N) can send their respective voice capability parameters to the terminal 1.
  • the respective voice capability parameters of other terminals are stored on the hub device in the local area network.
  • Each terminal in the local area network can send its own voice capability parameter to the hub device 207.
  • the terminal 1 may send a voice capability parameter acquisition instruction to the central device 207.
  • the hub device 207 can send the voice capability parameters of other terminals (terminal 2, terminal 3, ..., terminal N) to terminal 1.
  • the respective voice capability parameters of other terminals are stored on the server 208 of the smart home cloud.
  • Each terminal in the local area network can be connected to the server 208, and each terminal in the local area network can send its own voice capability parameter to the server 208.
  • the terminal 1 may send an instruction to acquire the voice capability parameter to the server 208.
• After the server 208 receives the voice capability parameter acquisition instruction, it may send the voice capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N) to terminal 1.
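As a rough sketch of the hub-device variant (the class and method names here are illustrative, not from the application): each terminal reports its own voice capability parameters to the hub device 207, and a terminal that needs to transfer a call queries the parameters of all the other terminals.

```python
class HubDevice:
    """Illustrative stand-in for the hub device 207 that stores each
    terminal's voice capability parameters."""

    def __init__(self):
        self._params = {}

    def report(self, terminal_id, params):
        # A terminal (e.g. periodically) sends its own voice capability parameters.
        self._params[terminal_id] = dict(params)

    def query_others(self, requester_id):
        # Return the voice capability parameters of every terminal except the requester.
        return {tid: p for tid, p in self._params.items() if tid != requester_id}

hub = HubDevice()
hub.report("terminal 1", {"m": 1.0, "n": 0.4, "x": 0.23, "y": 0, "s": 0.5})
hub.report("terminal 2", {"m": 0.5, "n": 0.3, "x": 0.55, "y": 1, "s": 1})
others = hub.query_others("terminal 1")  # parameters of all terminals except terminal 1
```

The server-208 variant follows the same report/query pattern, only over the Internet instead of the LAN.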
  • the following describes how the terminal 1 determines the answering terminal for the call transfer.
  • Terminal 1 determines the answering terminal according to the voice capability parameters of other terminals. Among them, the answering terminal is used to receive incoming calls transferred from the terminal 1.
  • the voice capability parameters may include voice capability priority m, call frequency n, voiceprint energy value x, user location value y, and device state value s.
• First, terminal 1 can filter out, from the other terminals (terminal 2, terminal 3, ..., terminal N), the devices that can currently take a call (that is, terminals with a device state value of 1).
• In some embodiments, the terminal 1 can first compare the respective voiceprint energy values x of the other terminals. If there is only one terminal (for example, terminal 2) with the largest voiceprint energy value x among the terminals that can currently take a call (terminal 2, terminal 3, ..., terminal N), then terminal 1 can determine the terminal with the largest voiceprint energy value x (for example, terminal 2) as the answering terminal.
• If there are multiple terminals (for example, terminal 2, terminal 3, terminal 4, and terminal 5) with the largest voiceprint energy value x among the terminals that can currently take a call, then terminal 1 may compare the user location values y of these terminals. If only one of them (for example, terminal 2) has the largest user location value y, then terminal 1 can determine the terminal with the largest voiceprint energy value x and the largest user location value y (for example, terminal 2) as the answering terminal.
• If there are multiple terminals (for example, terminal 2, terminal 3, and terminal 4) with both the largest voiceprint energy value x and the largest user location value y, then terminal 1 can compare the respective call frequencies n of these terminals. If only one of them (for example, terminal 2) has the largest call frequency n, then terminal 1 can determine the terminal with the largest voiceprint energy value x, the largest user location value y, and the largest call frequency n (for example, terminal 2) as the answering terminal.
• If there are multiple terminals (for example, terminal 2 and terminal 3) with the largest voiceprint energy value x, the largest user location value y, and the largest call frequency n, then terminal 1 can compare the respective voice capability priorities m of these terminals, and determine the terminal with the largest voiceprint energy value x, the largest user location value y, the largest call frequency n, and the highest voice capability priority m (for example, terminal 2) as the answering terminal.
• If multiple terminals still remain after all the above comparisons, the terminal 1 can randomly select one of them as the answering terminal.
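The tie-breaking cascade above can be expressed compactly. This is a sketch under the assumptions stated in the text (only terminals with device state value 1 are considered, and final ties are broken arbitrarily rather than randomly); the function name is hypothetical:

```python
def pick_answering_terminal(params):
    """params maps a terminal id to its voice capability parameters
    {"m": ..., "n": ..., "x": ..., "y": ..., "s": ...}.
    Keep the terminals that can currently take a call (s == 1), then keep
    those with the largest voiceprint energy value x, then the largest user
    location value y, then the largest call frequency n, then the largest
    voice capability priority m."""
    candidates = [t for t, p in params.items() if p["s"] == 1]
    for key in ("x", "y", "n", "m"):
        best = max(params[t][key] for t in candidates)
        candidates = [t for t in candidates if params[t][key] == best]
        if len(candidates) == 1:
            return candidates[0]
    return candidates[0]  # several terminals tie on every parameter: pick one
```

For example, if two available terminals share the largest x, the one with the larger user location value y is chosen.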
  • the terminal 1 can determine the user's location according to the user's voiceprint energy value x and/or the user's location value y.
  • the terminal 1 can determine the answering terminal according to one or more of the user location, the voice capability priority m, the call frequency n, and the device status value s.
• For example, the terminal 1 may determine the answering terminal (for example, the terminal 2) based only on the voiceprint energy value x. In this case, the answering terminal is the terminal with the largest voiceprint energy value x among the terminals other than terminal 1.
• For another example, the terminal 1 may determine the answering terminal (for example, the terminal 2) according to the voiceprint energy value x and the call frequency n. In this case, the answering terminal is the terminal with the largest voiceprint energy value x and the largest call frequency n among the terminals other than terminal 1.
• In some other embodiments, terminal 1 can calculate the respective transfer capability values V of the other terminals according to the following formula (1):

V = f(a·m, b·n, c·x, d·y) × s    (1)

• Among them, m is the voice capability priority, n is the call frequency, x is the voiceprint energy value, y is the user location value, and s is the device state value; a is the weight of the voice capability priority, b is the weight of the call frequency, c is the weight of the voiceprint energy value, and d is the weight of the user location value.
• f(z1, z2, z3, z4) is an arithmetic function, where f(z1, z2, z3, z4) can be a sum function. That is:

V = (a·m + b·n + c·x + d·y) × s    (2)
• After the terminal 1 calculates the transfer capability values V of the other terminals (terminal 2, terminal 3, ..., terminal N), the terminal with the largest transfer capability value V (for example, terminal 2) can be used as the answering terminal.
• The greater the impact of a voice capability parameter (the voice capability priority m, the call frequency n, the voiceprint energy value x, or the user location value y) on the determination of the answering terminal, the greater its weight. For example, if the influence on determining the answering terminal satisfies: voiceprint energy value x > user location value y > call frequency n > voice capability priority m, then c > d > b > a.
  • the respective weights of the speech capability parameters are variable. Any terminal in the local area network can receive user input operations and reset the respective weights of the voice capability parameters.
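With the sum function, formula (2) reduces to a weighted sum gated by the device state value. A one-line sketch (the function name is hypothetical; the default weights are the example values a = 0.1, b = 0.2, c = 0.5, d = 0.2 used in the examples below):

```python
def transfer_capability(m, n, x, y, s, a=0.1, b=0.2, c=0.5, d=0.2):
    """Formula (2): V = (a*m + b*n + c*x + d*y) * s.
    An occupied terminal (s = 0.5) is scaled down and an unavailable
    terminal (s = 0) is eliminated regardless of its other parameters."""
    return (a * m + b * n + c * x + d * y) * s
```

For instance, a smart TV with m = 0.5, n = 0.3, x = 0.55, y = 1, s = 1 scores (0.05 + 0.06 + 0.275 + 0.2) × 1 = 0.585.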
• The following exemplarily introduces how to determine the answering terminal according to the voice capability parameters using the above formula (2).
  • the smart phone 201 is located in the master bedroom of the family floor plan
  • the smart watch 202 is located in the living room position of the family floor plan
  • the smart speaker 203 is located in the second bedroom 1 position of the family floor plan
  • the personal computer 204 is located in the study room of the family floor plan.
  • the smart TV 205 is located in the living room position of the family floor plan
  • the tablet computer 206 is located in the second bedroom 2 position of the family floor plan.
  • voice capability parameters of each terminal in the home LAN can be shown in Table 6 below:
| Terminal | Voice capability priority m | Call frequency n | Voiceprint energy value x | User location value y | Device state value s |
|---|---|---|---|---|---|
| smart phone 201 | 1 | 0.4 | 0.23 | 0 | 0.5 |
| smart watch 202 | 0.4 | 0.05 | 0.4 | 1 | 1 |
| smart speaker 203 | 0.6 | 0.1 | 0.25 | 0.5 | 1 |
| personal computer 204 | 0.3 | 0.05 | 0.2 | 0 | 0 |
| smart TV 205 | 0.5 | 0.3 | 0.55 | 1 | 1 |
| tablet computer 206 | 0.8 | 0.1 | 0.2 | 0 | 0.5 |
  • Table 6 is only used to explain this application and should not constitute a limitation.
• Among them, the weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
• The smart phone 201 can calculate the transfer capability values V of the other terminals through the above formula (2): the transfer capability value V2 of the smart watch 202 is 0.45, the transfer capability value V3 of the smart speaker 203 is 0.305, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.585, and the transfer capability value V6 of the tablet computer 206 is 0.1. Since the transfer capability value V5 of the smart TV 205 is the largest among the other terminals, the smart phone 201 can determine that the smart TV 205 is the answering terminal.
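The Table 6 result can be checked numerically. The parameters below are taken from Table 6 and the weights a = 0.1, b = 0.2, c = 0.5, d = 0.2 from the text; the variable names are illustrative:

```python
# Voice capability parameters of the other terminals, from Table 6.
table6 = {
    "smart watch 202":       dict(m=0.4, n=0.05, x=0.4,  y=1,   s=1),
    "smart speaker 203":     dict(m=0.6, n=0.1,  x=0.25, y=0.5, s=1),
    "personal computer 204": dict(m=0.3, n=0.05, x=0.2,  y=0,   s=0),
    "smart TV 205":          dict(m=0.5, n=0.3,  x=0.55, y=1,   s=1),
    "tablet computer 206":   dict(m=0.8, n=0.1,  x=0.2,  y=0,   s=0.5),
}

def V(p, a=0.1, b=0.2, c=0.5, d=0.2):
    # Formula (2): V = (a*m + b*n + c*x + d*y) * s
    return (a*p["m"] + b*p["n"] + c*p["x"] + d*p["y"]) * p["s"]

values = {t: round(V(p), 3) for t, p in table6.items()}
answering = max(values, key=values.get)  # terminal with the largest V
```

Running this reproduces V2 = 0.45, V3 = 0.305, V4 = 0, V5 = 0.585, V6 = 0.1, so the smart TV 205 is selected.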
  • the smart phone 201 is located in the master bedroom of the family floor plan
  • the smart watch 202 is located in the living room position of the family floor plan
  • the smart speaker 203 is located in the second bedroom 1 position of the family floor plan
  • the personal computer 204 is located in the study room of the family floor plan.
  • the smart TV 205 is located in the living room position of the family floor plan
  • the tablet computer 206 is located in the second bedroom 2 position of the family floor plan.
• In another example, the user may be on the balcony of the family floor plan, away from every terminal in the home local area network. In this case, each terminal in the home local area network can obtain neither the voiceprint energy value x nor the user location value y.
  • voice capability parameters of each terminal in the home LAN can be shown in Table 7 below:
| Terminal | Voice capability priority m | Call frequency n | Voiceprint energy value x | User location value y | Device state value s |
|---|---|---|---|---|---|
| smart phone 201 | 1 | 0.4 | 0 | 0 | 0.5 |
| smart watch 202 | 0.4 | 0.05 | 0 | 0 | 1 |
| smart speaker 203 | 0.6 | 0.1 | 0 | 0 | 1 |
| personal computer 204 | 0.3 | 0.05 | 0 | 0 | 0 |
| smart TV 205 | 0.5 | 0.3 | 0 | 0 | 0.5 |
| tablet computer 206 | 0.8 | 0.1 | 0 | 0 | 0.5 |
  • Table 7 is only used to explain this application and should not constitute a limitation.
• Among them, the weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
• The smart phone 201 can calculate the transfer capability values V of the other terminals through the above formula (2): the transfer capability value V2 of the smart watch 202 is 0.05, the transfer capability value V3 of the smart speaker 203 is 0.08, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.055, and the transfer capability value V6 of the tablet computer 206 is 0.05. Since the transfer capability value V3 of the smart speaker 203 is the largest among the other terminals, the smart phone 201 can determine that the smart speaker 203 is the answering terminal.
• In some embodiments, before determining the answering terminal according to the voice capability parameters of the other terminals in the above step S307, the terminal 1 may receive the user's input operation (for example, a voice input), and in response to the input operation, the terminal 1 transfers the incoming call to the terminal designated by the user in the local area network.
• As shown in FIG. 6, the terminals with call capability in the home local area network include the smart phone 201, the smart watch 202, the smart speaker 203, the personal computer 204, the smart TV 205, and the tablet computer 206.
  • the smart phone 201 is located in the master bedroom of the family floor plan
  • the smart watch 202 is located in the living room position of the family floor plan
  • the smart speaker 203 is located in the second bedroom 1 position of the family floor plan
  • the personal computer 204 is located in the study room of the family floor plan.
  • the smart TV 205 is located in the living room position of the family floor plan
  • the tablet computer 206 is located in the second bedroom 2 position of the family floor plan.
• When the smart phone 201 receives an incoming call, the smart phone 201 can receive the user's voice input (for example, "Xiaoyi Xiaoyi, transfer to the living room TV"), and in response to the voice input, the smart phone 201 can transfer the received call to the smart TV 205 in the living room.
  • the foregoing steps S306 and S307 may be executed by a hub device or a server. That is, the hub device or server can obtain the voice capability parameters of each terminal in the local area network. The hub device or the server can determine the answering terminal according to the voice capability parameters of the terminals except the terminal 1. Then, the hub device or server can send the identification of the answering terminal (for example, the IP address of terminal 2) to terminal 1.
  • terminal 1 transfers the received incoming call to the answering terminal (terminal 2).
  • Terminal 1 sends an incoming call instruction to terminal 2.
  • the terminal 1 can send incoming call commands to the terminal 2 through the local area network.
  • Terminal 2 outputs an incoming call reminder.
• After receiving the incoming call instruction sent by the terminal 1, the terminal 2 may output an incoming call reminder.
  • the incoming call reminder output by the terminal 2 may include at least one of the following: ringtone reminder, mechanical vibration reminder, and incoming call display reminder (for example, the terminal 2 displays the contact information of the contact on the display screen, etc.).
  • terminal 2 may return the incoming call confirmation information to terminal 1. After receiving the incoming call confirmation information, the terminal 1 can stop outputting the incoming call reminder.
• The terminal 2 may receive the answer operation of the user. In response to the user's answering operation, in S311, the terminal 2 may return answering confirmation information to the terminal 1.
• Specifically, after the terminal 2 outputs the incoming call reminder, the terminal 2 can receive the user's answer operation (for example, clicking the answer button displayed on the screen of the terminal 2 or pressing the physical answer button on the terminal 2). In response to the answer operation, the terminal 2 can return the answering confirmation to the terminal 1. In response to the answering confirmation, the terminal 1 can transfer the voice to the terminal 2.
  • the following specifically describes the process in which the terminal 1 transfers the voice call to the answering terminal (terminal 2) after the terminal 1 receives the answer confirmation message returned by the terminal 2.
  • Terminal 1 receives the voice data of the contact.
• CS domain voice call: the contact's terminal can collect the contact's voice, establish a call connection with the terminal 1 through the CS domain in the mobile communication network, and send the sound signal to the terminal 1.
• VoLTE voice call: the contact's terminal can collect the contact's voice, and use a voice compression algorithm to compress and encode the contact's voice to generate the contact's voice data. The voice data is then encapsulated into voice data packets, which are sent to the terminal 1 through the IMS in the mobile communication network.
• VoIP voice call: the contact's terminal can collect the contact's voice, compress and encode the contact's voice through a voice compression algorithm to generate the contact's voice data, then encapsulate the voice data into voice data packets through the IP protocol and other related protocols, and send the contact's voice data packets to the terminal 1 through the Internet.
  • Terminal 1 sends the voice data of the contact to terminal 2.
• For example, when the voice call transferred by the terminal 1 is a CS domain voice call, after receiving the voice signal of the contact, the terminal 1 can compress and encode the voice signal of the contact through a voice compression algorithm to generate the contact's voice data, and encapsulate the voice data into voice data packets through the IP protocol and other related protocols. Then, the terminal 1 sends the voice data packets of the contact to the terminal 2 through the local area network.
  • the terminal 1 may forward the voice data packet of the contact to the terminal 2 through the local area network after receiving the voice data packet of the contact.
• After receiving the voice data of the contact, the terminal 2 plays the voice data of the contact. Specifically, after receiving the voice data packet of the contact from the terminal 1, the terminal 2 can obtain the voice data of the contact from the voice data packet and play it.
  • S315 The terminal 2 collects sound through a microphone, and generates voice data of the user.
  • Terminal 2 sends the user's voice data to terminal 1.
• After the terminal 2 returns the answering confirmation to the terminal 1 in step S311, the terminal 2 can continuously collect the user's voice and the surrounding environment's sound through the microphone. The terminal 2 can compress and encode the sound collected by the microphone (including the user's voice and the surrounding environment sound) through a voice compression algorithm to generate the user's voice data, and encapsulate the user's voice data into voice data packets. Then, the terminal 2 sends the user's voice data packets to the terminal 1 through the local area network.
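As an illustration of the encapsulate-and-send step: the framing format below (a 4-byte sequence number plus a 2-byte payload length per packet) is a hypothetical example, not the format used by the application; real implementations would typically use RTP or a similar protocol.

```python
import struct

def packetize(voice_data: bytes, seq: int = 0, chunk_size: int = 320):
    """Split (already compressed) voice data into LAN packets, each carrying
    a small header: sequence number (4 bytes) + payload length (2 bytes)."""
    packets = []
    for off in range(0, len(voice_data), chunk_size):
        chunk = voice_data[off:off + chunk_size]
        header = struct.pack("!IH", seq, len(chunk))
        packets.append(header + chunk)
        seq += 1
    return packets

packets = packetize(b"\x00" * 700)  # 700 bytes -> payloads of 320, 320, 60
```

The sequence number lets the receiving terminal reorder or drop late packets before decoding and playback.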
• After receiving the user's voice data sent by the terminal 2, the terminal 1 sends the user's voice data to the terminal of the contact.
• When the voice call transferred by the terminal 1 is a CS domain voice call, after receiving the user's voice data sent by the terminal 2, the terminal 1 converts the user's voice data into the user's voice signal, and sends the user's voice signal to the contact's terminal through the CS domain in the mobile communication network. After receiving the user's voice signal sent by the terminal 1, the contact's terminal can parse the user's voice from the voice signal and play it.
• When the voice call transferred by the terminal 1 is a VoLTE voice call, after receiving the user's voice data sent by the terminal 2, the terminal 1 can forward the user's voice data to the contact's terminal through the IMS. After receiving the user's voice data, the contact's terminal can play it.
• When the voice call transferred by the terminal 1 is a VoIP voice call, after the terminal 1 receives the user's voice data sent by the terminal 2, it can forward the user's voice data to the contact's terminal via the Internet. After receiving the user's voice data, the contact's terminal can play it.
  • the foregoing steps S313 and S316 may be forwarded via a hub device or a server.
• To sum up, the embodiment of the present application provides a voice communication method. When the terminal 1 receives a voice call, the terminal 1 can determine the answering terminal (for example, a smart TV) of the voice call according to the voice capability parameters of the other terminals (such as the voice capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s), and transfer the voice call to the answering terminal. In this way, the call effect between the user and the contact can be improved when the user is moving indoors.
The local area network can include N terminals with call capability, where N is an integer greater than 2. Any terminal that is in a call can be called terminal 1. For example, when the user talks with the contact using a smart phone, the smart phone can be called terminal 1; when the user talks with the contact using a smart TV, the smart TV can be called terminal 1, which is not limited here. The method includes:
Terminal 1 establishes connections with the other terminals in the local area network.

Terminal 1 can establish a connection with each of the other terminals (terminal 2, terminal 3, ..., terminal N) in the local area network. The connection can be a connection based on the TCP/IP protocol, and over such a connection terminal 1 can transfer incoming calls and ongoing calls to the transfer device based on VoIP technology. The connection can also be a Wi-Fi Direct connection, or a connection established through a router; if both devices support Bluetooth, the connection can also be a Bluetooth connection.

In a possible implementation, terminal 1 may establish a connection with the switching device only after the answering device has been determined. In this way, terminal 1 maintains a connection with the switching device alone.
Terminal 1 conducts a voice call with the contact's terminal.

The voice call between terminal 1 and the contact's terminal may be a voice call in the CS domain, a VoLTE voice call, or a VoIP voice call, which will not be repeated here.
S703. Terminal 1 receives the user's call transfer operation.

The call transfer operation may be the user tapping a call transfer control on the display screen of terminal 1, or a voice command input by the user (for example, "Xiaoyi Xiaoyi, call follow me").

The following describes how terminal 1 determines the answering terminal. The following steps S704 to S705 can be executed periodically (for example, every 2 seconds).
Terminal 1 obtains the voice capability parameters of each terminal in the local area network.

The voice capability parameters include the voice capability priority m, call frequency n, voiceprint energy value x, user location value y, and device state value s.
Terminal 1 determines the answering terminal according to the voice capability parameters of each terminal. The answering terminal is used to conduct the call with the contact's terminal.

If terminal 1 determines that it is itself the answering terminal, terminal 1 may not perform call transfer. Otherwise, terminal 1 can transfer the call with the contact's terminal to the answering terminal; after the transfer, terminal 1 forwards the voice data between the contact's terminal and the answering terminal. For the specific process of terminal 1 determining the answering terminal, refer to step S307 in the embodiment shown in FIG. 3, which will not be repeated here.
For example, the smart phone 201 is located in the master bedroom of the family floor plan, the smart speaker 203 is located in second bedroom 1, the personal computer 204 is located in the study, the smart TV 205 is located in the living room, and the tablet computer 206 is located in second bedroom 2. The voice capability parameters of each terminal in the home LAN can be as shown in Table 8 below:
Table 8

    Terminal                m     n     x     y    s
    smart phone 201         1     0.4   0.6   1    1
    smart watch 202         0.4   0.05  0.55  1    1
    smart speaker 203       0.6   0.1   0.35  0    1
    personal computer 204   0.3   0.05  0.35  0    0
    smart TV 205            0.5   0.3   0.2   0    1
    tablet computer 206     0.8   0.1   0.2   0    1

(m: voice capability priority; n: call frequency; x: voiceprint energy value; y: user location value; s: device state value.)
Table 8 is only used to explain this application and should not constitute a limitation. The weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
The smart phone 201 can calculate the transfer capability value V of each terminal through the above formula (2). Among them, the transfer capability value V1 of the smart phone 201 is 0.68, the transfer capability value V2 of the smart watch 202 is 0.525, the transfer capability value V3 of the smart speaker 203 is 0.255, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.21, and the transfer capability value V6 of the tablet computer 206 is 0.2. Since the transfer capability value V1 of the smart phone 201, 0.68, is the largest among the terminals (smart phone 201, smart watch 202, smart speaker 203, personal computer 204, smart TV 205, and tablet computer 206), the smart phone 201 can determine that it itself is the answering terminal. The above example is only used to explain this application and should not constitute a limitation.
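The selection in this example can be sketched in code. This is a minimal illustration, assuming formula (2) has the form V = s·(a·m + b·n + c·x + d·y), which reproduces the values above; the function names and data layout are not from the patent:

```python
# Answering-terminal selection, assuming formula (2) is
# V = s * (a*m + b*n + c*x + d*y); the device state s gates the score.
WEIGHTS = {"a": 0.1, "b": 0.2, "c": 0.5, "d": 0.2}

# Voice capability parameters from Table 8: (m, n, x, y, s)
TERMINALS = {
    "smart phone 201":       (1.0, 0.4,  0.6,  1, 1),
    "smart watch 202":       (0.4, 0.05, 0.55, 1, 1),
    "smart speaker 203":     (0.6, 0.1,  0.35, 0, 1),
    "personal computer 204": (0.3, 0.05, 0.35, 0, 0),
    "smart TV 205":          (0.5, 0.3,  0.2,  0, 1),
    "tablet computer 206":   (0.8, 0.1,  0.2,  0, 1),
}

def transfer_capability(m, n, x, y, s, w=WEIGHTS):
    """Transfer capability value V of one terminal."""
    return s * (w["a"] * m + w["b"] * n + w["c"] * x + w["d"] * y)

def answering_terminal(terminals):
    """Score every terminal and pick the one with the largest V."""
    scores = {name: transfer_capability(*p) for name, p in terminals.items()}
    best = max(scores, key=scores.get)
    return best, scores

best, scores = answering_terminal(TERMINALS)
print(best)                    # smart phone 201
print(round(scores[best], 3))  # 0.68
```

Running the same function over Tables 9 to 13 reproduces each scenario's outcome (for example, the smart watch 202 winning with 0.55 in Table 9), since only the parameter tuples change between scenarios.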
The voice capability parameters of each terminal in the home LAN can also be as shown in Table 9 below:

Table 9

    Terminal                m     n     x     y    s
    smart phone 201         1     0.4   0.3   0    1
    smart watch 202         0.4   0.05  0.6   1    1
    smart speaker 203       0.6   0.1   0.35  0    1
    personal computer 204   0.3   0.05  0.35  0    0
    smart TV 205            0.5   0.3   0.2   0    1
    tablet computer 206     0.8   0.1   0.2   0    1
Table 9 is only used to explain this application and should not constitute a limitation. The weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
The smart phone 201 can calculate the transfer capability value V of each terminal through the above formula (2). Among them, the transfer capability value V1 of the smart phone 201 is 0.33, the transfer capability value V2 of the smart watch 202 is 0.55, the transfer capability value V3 of the smart speaker 203 is 0.255, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.21, and the transfer capability value V6 of the tablet computer 206 is 0.2. Since the transfer capability value V2 of the smart watch 202, 0.55, is the largest among the terminals in the local area network (smart phone 201, smart watch 202, smart speaker 203, personal computer 204, smart TV 205, and tablet computer 206), the smart phone 201 can determine that the smart watch 202 is the answering terminal. Thus, the smart phone 201 can transfer the call to the smart watch 202. The above example is only used to explain this application and should not constitute a limitation.
During the call, the user can walk to the living room wearing the smart watch 202. In this case, the user is in the living room together with the smart watch 202 and the smart TV 205, and the smart TV 205 in the living room can obtain the user's location, that is, the user location value y5 of the smart TV 205 is 1. At this time, the voice capability parameters of each terminal in the home LAN can be as shown in Table 10 below:
Table 10

    Terminal                m     n     x     y    s
    smart phone 201         1     0.4   0.3   0    1
    smart watch 202         0.4   0.05  0.6   1    1
    smart speaker 203       0.6   0.1   0.35  0    1
    personal computer 204   0.3   0.05  0.35  0    0
    smart TV 205            0.5   0.3   0.6   1    1
    tablet computer 206     0.8   0.1   0.2   0    1
Table 10 is only used to explain this application and should not constitute a limitation. The weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
The smart phone 201 can calculate the transfer capability value V of each terminal through the above formula (2). Among them, the transfer capability value V1 of the smart phone 201 is 0.33, the transfer capability value V2 of the smart watch 202 is 0.55, the transfer capability value V3 of the smart speaker 203 is 0.255, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.61, and the transfer capability value V6 of the tablet computer 206 is 0.2. Since the transfer capability value V5 of the smart TV 205, 0.61, is the largest among the terminals in the local area network, the smart phone 201 can determine that the smart TV 205 is the answering terminal. Thus, the smart phone 201 can transfer the call to the smart TV 205.
In another example, the smart phone 201 is located in the master bedroom of the family floor plan, the smart watch 202 is located in the living room, the smart speaker 203 is located in second bedroom 1, the personal computer 204 is located in the study, the smart TV 205 is located in the living room, and the tablet computer 206 is located in second bedroom 2. The voice capability parameters of each terminal in the home LAN can be as shown in Table 11 below:
Table 11

    Terminal                m     n     x     y    s
    smart phone 201         1     0.4   0.6   1    1
    smart watch 202         0.4   0.05  0.2   0    1
    smart speaker 203       0.6   0.1   0.35  0    1
    personal computer 204   0.3   0.05  0.35  0    0
    smart TV 205            0.5   0.3   0.2   0    1
    tablet computer 206     0.8   0.1   0.2   0    1
Table 11 is only used to explain this application and should not constitute a limitation. The weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
The smart phone 201 can calculate the transfer capability value V of each terminal through the above formula (2). Among them, the transfer capability value V1 of the smart phone 201 is 0.68, the transfer capability value V2 of the smart watch 202 is 0.15, the transfer capability value V3 of the smart speaker 203 is 0.255, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.21, and the transfer capability value V6 of the tablet computer 206 is 0.2. Since the transfer capability value V1 of the smart phone 201, 0.68, is the largest among the terminals (smart phone 201, smart watch 202, smart speaker 203, personal computer 204, smart TV 205, and tablet computer 206), the smart phone 201 can determine that it itself is the answering terminal. The above example is only used to explain this application and should not constitute a limitation.
If each terminal in the home local area network cannot obtain the user's location, that is, the user location value y of every terminal is 0, the voice capability parameters of each terminal in the home LAN can be as shown in Table 12 below:
Table 12

    Terminal                m     n     x     y    s
    smart phone 201         1     0.4   0.2   0    1
    smart watch 202         0.4   0.05  0.2   0    1
    smart speaker 203       0.6   0.1   0.45  0    1
    personal computer 204   0.3   0.05  0.45  0    0
    smart TV 205            0.5   0.3   0.2   0    1
    tablet computer 206     0.8   0.1   0.2   0    1
Table 12 is only used to explain this application and should not constitute a limitation. The weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
The smart phone 201 can calculate the transfer capability value V of each terminal through the above formula (2). Among them, the transfer capability value V1 of the smart phone 201 is 0.28, the transfer capability value V2 of the smart watch 202 is 0.15, the transfer capability value V3 of the smart speaker 203 is 0.305, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.21, and the transfer capability value V6 of the tablet computer 206 is 0.2. Since the transfer capability value V3 of the smart speaker 203, 0.305, is the largest among the terminals (smart phone 201, smart watch 202, smart speaker 203, personal computer 204, smart TV 205, and tablet computer 206), the smart phone 201 can determine that the smart speaker 203 is the answering terminal. Thus, the smart phone 201 can transfer the voice call to the smart speaker 203. The above example is only used to explain this application and should not constitute a limitation.
While the smart speaker 203 is answering the call, the user can walk to the living room. In this case, the user is in the living room together with the smart watch 202 and the smart TV 205, and the smart TV 205 in the living room can obtain the user's location, that is, the user location value y2 of the smart watch 202 is 1 and the user location value y5 of the smart TV 205 is 1. At this time, the voice capability parameters of each terminal in the home LAN can be as shown in Table 13 below:
Table 13

    Terminal                m     n     x     y    s
    smart phone 201         1     0.4   0.2   0    1
    smart watch 202         0.4   0.05  0.5   1    1
    smart speaker 203       0.6   0.1   0.3   0    1
    personal computer 204   0.3   0.05  0.3   0    0
    smart TV 205            0.5   0.3   0.6   1    1
    tablet computer 206     0.8   0.1   0.35  0    1
Table 13 is only used to explain this application and should not constitute a limitation. The weight value a corresponding to the voice capability priority m may be 0.1, the weight value b corresponding to the call frequency n may be 0.2, the weight value c corresponding to the voiceprint energy value x may be 0.5, and the weight value d corresponding to the user location value y may be 0.2.
The smart phone 201 can calculate the transfer capability value V of each terminal through the above formula (2). Among them, the transfer capability value V1 of the smart phone 201 is 0.28, the transfer capability value V2 of the smart watch 202 is 0.5, the transfer capability value V3 of the smart speaker 203 is 0.23, the transfer capability value V4 of the personal computer 204 is 0, the transfer capability value V5 of the smart TV 205 is 0.61, and the transfer capability value V6 of the tablet computer 206 is 0.275. Since the transfer capability value V5 of the smart TV 205, 0.61, is the largest among the terminals (smart phone 201, smart watch 202, smart speaker 203, personal computer 204, smart TV 205, and tablet computer 206), the smart phone 201 can determine that the smart TV 205 is the answering terminal. Thus, the smart phone 201 can transfer the voice call to the smart TV 205. The above example is only used to explain this application and should not constitute a limitation.
In a possible implementation, after terminal 1 determines the answering terminal, it can determine whether the transfer capability value V of the answering terminal exceeds the transfer capability value V of the current call terminal by a specified threshold (for example, 0.2); if so, terminal 1 transfers the voice call to the answering terminal. For example, if the transfer capability value V of the answering terminal (for example, terminal 2) is 0.61 and the transfer capability value V of the current call terminal (for example, terminal 3) is 0.23, the difference between the two (0.38) is higher than the specified threshold (0.2), so terminal 1 can transfer the voice call to the answering terminal. In this way, frequent switching of the call device can be avoided.
In a possible implementation, terminal 1 can detect the user's voiceprint energy after determining the answering terminal. When the voiceprint energy stays below a specified voiceprint energy threshold (for example, 10 dB) for a certain time (for example, 0.5 seconds), terminal 1 transfers the call to the answering terminal. Switching only when the user's voiceprint energy is below the threshold prevents the call terminal from capturing the user's speech in the middle of the transfer, which would leave the user's utterance incomplete.
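A frame-based sketch of this silence-gated handover, assuming the audio front end delivers per-frame energy estimates in dB; the 50 ms frame length and the helper name are illustrative, while the 10 dB threshold and 0.5 s hold time are the example values:

```python
def frames_until_quiet(energies_db, threshold_db=10.0, hold_s=0.5, frame_s=0.05):
    """Return the index of the frame at which the voiceprint energy has
    stayed below threshold_db for hold_s seconds (the moment terminal 1
    may safely transfer the call), or None if that never happens."""
    needed = int(round(hold_s / frame_s))  # consecutive quiet frames required
    run = 0
    for i, energy in enumerate(energies_db):
        run = run + 1 if energy < threshold_db else 0
        if run >= needed:
            return i
    return None

# User talks (~30 dB) for 5 frames, then pauses (~5 dB): the call may be
# transferred once 10 consecutive quiet frames (0.5 s) have elapsed.
samples = [30.0] * 5 + [5.0] * 12
print(frames_until_quiet(samples))  # 14
```

Resetting the run counter whenever a loud frame appears means a brief pause inside an utterance never triggers the transfer; only a sustained pause does.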
Terminal 1 receives the contact's voice data. For details, refer to step S312 in the embodiment shown in FIG. 3, which will not be repeated here.

Terminal 1 sends the contact's voice data to terminal 2. After receiving the contact's voice data, terminal 2 plays it.

Terminal 2 collects sound through its microphone and generates the user's voice data. Terminal 2 then sends the user's voice data to terminal 1, and terminal 1 sends the user's voice data to the contact's terminal.
In a possible implementation, the above steps S704 and S705 may be executed by a hub device or a server. That is, the hub device or server can obtain the voice capability parameters of each terminal in the local area network and determine the answering terminal according to those parameters. Then, the hub device or server may send the identification of the answering terminal (for example, the IP address of terminal 2) to terminal 1. Optionally, the voice data in the foregoing steps S707 and S710 may be forwarded via a hub device or a server.
In some scenarios, terminal 1 may be in a call with contact A's terminal when it receives an incoming call from contact B's terminal. Since terminal 1 cannot talk with the terminals of two or more contacts at the same time, in a possible implementation, terminal 1 can determine the answering terminal (for example, terminal 2) for the incoming call from contact B's terminal through the answering-terminal determination steps in the embodiment shown in FIG. 3. Terminal 1 can then transfer the incoming call from contact B's terminal to that answering terminal, and the answering terminal can talk with contact B's terminal. In another possible implementation, terminal 1 may transfer the voice call with contact A's terminal to the answering terminal (for example, terminal 2), and terminal 1 itself answers the incoming call from contact B's terminal. For the process of terminal 1 transferring an incoming call or an ongoing call, refer to the embodiment shown in FIG. 3 or FIG. 7, which will not be repeated here. In this way, missed calls can be avoided and the user experience can be improved.
The hub device can collect the voice capability parameters of each terminal in the local area network, determine the answering terminal (for example, terminal 2) from the terminals in the local area network, and forward the voice data packets of the VoIP voice call to the answering terminal. Because the answering terminal is calculated by the hub device, the computing burden on the terminals can be reduced, as can the delay when transferring an incoming call or an ongoing call.

FIG. 10 shows a voice communication method provided by an embodiment of this application. The method includes:
The hub device establishes a connection with each terminal.

The hub device can establish a connection with each terminal (terminal 1, terminal 2, ..., terminal N) in the local area network. The connection can be wireless (for example, a Wi-Fi connection) or wired. If each terminal has already established a connection with the router, this step can be skipped.
The hub device receives call indication information sent to terminal 1.

The hub device can receive, over the Internet, the call indication information sent by the contact's terminal. The call indication information includes the address information of the call sender (the IP address of the contact's terminal) and the address information of the recipient (for example, the IP address of terminal 1). The call indication information is used to instruct terminal 1 to output an incoming call reminder.
The hub device forwards the call indication information to terminal 1. The hub device may send the call indication information to terminal 1 according to the recipient's address information.

Terminal 1 outputs an incoming call reminder. Terminal 1 may output the incoming call reminder in response to the call indication information. The incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and an incoming call display reminder (for example, terminal 1 displays the contact's contact information on its display screen).
  • S1005: terminal 1 determines whether the call transfer function is enabled. If yes, in S1006 terminal 1 determines whether the incoming call has timed out unanswered or whether terminal 1 is occupied; if yes, in S1007 terminal 1 sends a call transfer request to the hub device. If terminal 1 has not enabled the call transfer function, terminal 1 simply outputs the incoming call reminder.
  • the terminal 1 may receive the user's setting input before receiving the call from the contact. In response to the user's setting input, the terminal 1 may turn on or turn off the call transfer function. In this way, the terminal 1 can transfer the received call according to the needs of the user, which improves the user experience.
  • After determining that the call transfer function is enabled, terminal 1 can first determine whether it is already occupied; if so, terminal 1 can send a call transfer request to the hub device. If it is not occupied, terminal 1 can determine whether the user's answer operation has still not been received once a specified time threshold (for example, 10 s) has elapsed after the incoming call was received; if the user has not answered within that threshold, terminal 1 can send a call transfer request to the hub device.
  • a terminal in the local area network can report to the hub device after enabling the call transfer function. Therefore, the above steps S1005 and S1006 can also be executed by the hub device: the hub device determines whether terminal 1 has the call transfer function enabled; if so, the hub device determines whether the incoming call has timed out unanswered or whether terminal 1 is occupied; if so, the hub device performs step S1008, in which it obtains the voice capability parameters of each terminal.
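The enable/occupied/timeout decision in steps S1005-S1006 can be sketched as a small predicate. The 10 s threshold is the example value given in the embodiment; the function and parameter names are illustrative assumptions, not taken from the patent:

```python
import time

ANSWER_TIMEOUT_S = 10  # "specified time threshold (for example, 10s)"

def should_request_transfer(forwarding_enabled, in_call, ring_started_at,
                            answered, now=None):
    """Mirror of steps S1005-S1007: request a transfer only if the user
    enabled call forwarding AND the terminal is busy or the ring timed out."""
    if not forwarding_enabled:
        return False          # S1005: feature disabled -> just keep ringing
    if in_call:
        return True           # S1006: terminal 1 already occupied
    now = time.time() if now is None else now
    # S1006: no answer operation received within the time threshold
    return (not answered) and (now - ring_started_at) >= ANSWER_TIMEOUT_S
```

A terminal would evaluate this predicate when a call arrives and, on `True`, send the S1007 call transfer request to the hub device.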
  • the hub device obtains the voice capability parameters of each terminal.
  • After the hub device receives the call transfer request sent by terminal 1, in response to that request the hub device can obtain the voice capability parameters of each terminal (terminal 1, terminal 2, ..., terminal N) in the local area network.
  • the voice capability parameters include: the voice capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s. For a specific description of the voice capability parameters, reference may be made to step S306 in the embodiment shown in FIG. 3, which will not be repeated here.
  • the hub device determines the answering terminal (for example, the answering terminal is terminal 2) according to the voice capability parameters of other terminals.
  • the hub device can determine the answering device (such as terminal 2) according to the voice capability parameters of other terminals (terminal 2, ..., terminal N).
  • the process of determining the answering device by the hub device may refer to the process of determining the answering device by the terminal 1 in step S307 in the embodiment shown in FIG. 3, which will not be repeated here.
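The embodiment defers the exact selection rule to step S307 of FIG. 3, which is not reproduced in this excerpt. The sketch below therefore assumes a simple combined score over the five parameters; the weighting (equal weights) and the zeroing-out of busy devices are hypothetical stand-ins, not rules stated in the patent:

```python
from dataclasses import dataclass

@dataclass
class VoiceCapability:
    # Parameter names follow the embodiment (m, n, x, y, s).
    terminal_id: str
    priority_m: float     # voice capability priority (by device type)
    call_freq_n: float    # this terminal's share of all calls
    voiceprint_x: float   # voiceprint energy (higher = user is closer)
    location_y: float     # user location value
    state_s: float        # device state (assumed: 0 = busy, 1 = idle)

def answering_terminal(candidates):
    """Pick the answering terminal from the terminals other than the one
    that received the call (terminal 1)."""
    def score(c):
        # Hypothetical linear combination standing in for step S307;
        # multiplying by state_s drops busy devices to score 0.
        return (c.priority_m + c.call_freq_n + c.voiceprint_x
                + c.location_y) * c.state_s
    return max(candidates, key=score).terminal_id
```

The hub device would run this over the parameters gathered in S1008 and send the S1010 incoming call instruction to the winner.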
  • the hub device sends an incoming call instruction to the terminal 2.
  • the hub device sends a call end instruction to the terminal 1.
  • In response to the received incoming call instruction, terminal 2 outputs an incoming call reminder.
  • the incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and an incoming call display reminder (for example, the terminal 2 displays the contact information of the contact on the display screen, etc.).
  • the terminal 2 receives the answer operation of the user.
  • After terminal 2 outputs the incoming call reminder, terminal 2 can receive the user's answer operation (for example, tapping the answer button displayed on the terminal 2 screen, or pressing the physical answer button on terminal 2). In response to the answer operation, terminal 2 can return an answer confirmation to the hub device.
  • After receiving the answer confirmation returned by terminal 2, the hub device receives the voice data of the contact.
  • the hub device can request the contact's voice data from the contact's terminal via the Internet. After receiving the request, the contact's terminal will collect the contact's voice, generate the contact's voice data, and send the contact's voice data to the central device in the form of data packets.
  • the hub device sends the voice data of the contact to the terminal 2.
  • the hub device may forward the voice data of the contact to the terminal 2 in the form of a data packet.
  • After receiving the contact's voice data packet from the hub device, terminal 2 can obtain the contact's voice data from the packet and play it.
  • the terminal 2 collects sound through a microphone, and generates voice data of the user.
  • Terminal 2 sends the user's voice data to the central device.
  • As noted in step S1015, after terminal 2 returns the answer confirmation to the hub device, terminal 2 can continuously collect the user's voice and the surrounding environment sound through the microphone.
  • Specifically, terminal 2 can compress and encode the sound collected by the microphone (including the user's voice and the surrounding environment sound) with a voice compression algorithm to generate the user's voice data, and encapsulate that voice data into voice data packets. Terminal 2 then sends the user's voice data packets to the hub device through the local area network.
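Packetization as described in the step above (compress the microphone capture, wrap it with sender/recipient addressing so the hub can forward it) might look like the following sketch. `zlib` and the JSON header are stand-ins for the unspecified voice compression algorithm and encapsulation protocols:

```python
import json
import zlib

def make_voice_packet(sender_ip, recipient_ip, pcm_bytes):
    """Compress raw audio and prepend an addressed header, so a hub
    device can route the packet by its "dst" field without decoding it."""
    payload = zlib.compress(pcm_bytes)          # placeholder voice codec
    header = json.dumps({"src": sender_ip, "dst": recipient_ip,
                         "len": len(payload)}).encode()
    # 2-byte big-endian header length, then header, then compressed voice
    return len(header).to_bytes(2, "big") + header + payload

def parse_voice_packet(packet):
    """Inverse of make_voice_packet: recover the header and the audio."""
    hlen = int.from_bytes(packet[:2], "big")
    header = json.loads(packet[2:2 + hlen])
    voice = zlib.decompress(packet[2 + hlen:])
    return header, voice
```

The hub only needs `header["dst"]` to forward; the receiving terminal runs the full parse and plays the decompressed audio.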
  • the hub device forwards the user's voice data to the contact terminal.
  • After receiving the user's voice data packet from terminal 2, the hub device can forward it to the contact's terminal via the Internet. After receiving the user's voice data packet, the contact's terminal can parse the user's voice data out of the packet and play it.
  • When the above voice call is transferred, if a connection between terminal 1 and terminal 2 has been established, terminal 1 can forward directly to terminal 2: when terminal 1 receives the contact's voice data it sends it to terminal 2, and terminal 2 collects the user's voice data and sends it to terminal 1, which sends it on to the contact's terminal.
  • the aforementioned voice call transfer can also be implemented by a server instead of a central device.
  • the aforementioned hub device may not be a router in a local area network, but may be a terminal other than terminal 1.
  • FIG. 11 shows a schematic diagram of another network architecture 1100 provided by an embodiment of the present application.
  • the network architecture 1100 includes multiple terminals.
  • the terminal may include: a smart phone 201, a smart watch 202, a smart speaker 203, a personal computer 204, a smart TV 205, a tablet computer 206, etc., which are not limited in this application.
  • the multiple terminals can all have the call capability, and the multiple terminals can receive incoming calls and calls in the following ways: 1. the multiple terminals can receive incoming calls or calls from the circuit-switched (CS) domain in the mobile communication network; 2. the multiple terminals can receive incoming calls or calls based on VoLTE technology in the IP multimedia subsystem (IMS) in the mobile communication network; 3. the multiple terminals can receive incoming calls or calls based on VoIP technology from the Internet.
  • the multiple terminals are all connected to the server 208 in the smart home cloud network through the Internet.
  • the server 208 may be a server in a smart home cloud network, the number of which is not limited to one, but may be multiple, which is not limited here.
  • the server 208 may include a memory, a processor, and a transceiver.
  • the memory may be used to store the respective voice capability parameters of the multiple terminals (for example, the voice capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, the device state value s, etc.).
  • the transceiver can be used to communicate with various terminals.
  • the processor can be used to process the data acquisition request of each terminal and instruct the transceiver to issue the respective voice capability parameters of the multiple terminals to each terminal.
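A toy model of how the server 208 might keep and serve the per-terminal voice capability parameters for terminals bound to the same account. The class and method names are assumptions for illustration, not an API from the patent:

```python
from collections import defaultdict

class CapabilityStore:
    """Terminals bound to one account report their voice capability
    parameters; the server hands the full set back on request."""

    def __init__(self):
        # account -> {terminal_id -> parameter dict}
        self._by_account = defaultdict(dict)

    def report(self, account, terminal_id, params):
        """Called when a terminal (periodically) uploads its parameters."""
        self._by_account[account][terminal_id] = params

    def fetch(self, account, exclude=None):
        """Return all reported parameters for the account, optionally
        excluding the terminal that received the call (terminal 1)."""
        return {tid: p for tid, p in self._by_account[account].items()
                if tid != exclude}
```

On a call transfer request, the server would `fetch` the parameters (excluding terminal 1) and then run the answering-terminal selection over them.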
  • in the home there may be terminals with call capability such as smart phones, smart watches, smart speakers, personal computers, smart TVs, and smart tablets.
  • These terminals with call capability can all be connected to the server of the smart home cloud via the Internet.
  • the user can receive a voice call through any terminal (such as a smart phone).
  • the embodiment of the present application provides a voice communication method.
  • the server can determine the answering terminal (such as smart TV) of the incoming call according to the voice capability parameters of other terminals (such as voice capability priority m, call frequency n, voiceprint energy value x, user location y, device status value s, etc.) ), and transfer the incoming call on terminal 1 to the answering terminal (such as a smart TV). In this way, the user can be prevented from missing the incoming call on the terminal, and the user experience is improved.
  • FIG. 12 shows a voice communication method provided in an embodiment of the present application.
  • N terminals with call capability are connected to the server, and N is an integer greater than 2, wherein the N terminals with call capability are bound to the same account on the server.
  • any terminal that receives an incoming call can be referred to as terminal 1.
  • for example, when a smart phone receives an incoming call, the smart phone can be referred to as terminal 1, and when a smart TV receives an incoming call, the smart TV can be referred to as terminal 1, which is not limited here.
  • the method includes:
  • the server establishes a connection with each terminal.
  • each terminal can connect to the server through the Internet.
  • Terminal 1 receives the incoming call.
  • the incoming call may refer to a voice call.
  • the contact's terminal can make a voice call to terminal 1 through the CS domain in the mobile communication network; the contact's terminal can also make a VoLTE-based voice call to terminal 1 through the IMS network in the mobile communication network; and the contact's terminal can also make a VoIP-based voice call to terminal 1 through the Internet.
  • Terminal 1 outputs an incoming call reminder.
  • the terminal 1 can output an incoming call reminder after receiving the call from the contact's terminal.
  • the incoming call reminder may include at least one of the following: ringtone reminder, mechanical vibration reminder, and incoming call display reminder (for example, the terminal 1 displays the contact information of the contact on the display screen, etc.).
  • S1204: terminal 1 determines whether the call transfer function is enabled. If yes, in S1205 terminal 1 determines whether the incoming call has timed out unanswered or whether terminal 1 is occupied; if yes, in S1206 terminal 1 sends a call transfer request to the server. If terminal 1 has not enabled the call transfer function, terminal 1 outputs the incoming call reminder, receives the user's answer operation, and answers the incoming call.
  • the terminal 1 may receive the user's setting input before receiving the call from the contact. In response to the user's setting input, the terminal 1 may turn on or turn off the call transfer function. In this way, the terminal 1 can transfer the received call according to the needs of the user, which improves the user experience.
  • terminal 1 can first determine whether it is already occupied; if so, terminal 1 can send a call transfer request to the server. If it is not occupied, terminal 1 can determine whether the specified time threshold (for example, 10 s) has been exceeded after receiving the incoming call; if so, terminal 1 can send a call transfer request to the server.
  • the server obtains the voice capability parameters of each terminal.
  • After the server receives the call transfer request sent by terminal 1, in response to that request the server can obtain the voice capability parameters of each terminal (terminal 1, terminal 2, ..., terminal N).
  • the voice capability parameters include: the voice capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s. For a specific description of the voice capability parameters, reference may be made to step S306 in the embodiment shown in FIG. 3, which will not be repeated here.
  • the server determines the answering terminal (for example, the answering terminal is terminal 2) according to the voice capability parameters of other terminals.
  • the server may determine the answering device according to the voice capability parameters of other terminals (terminal 2, ..., terminal N).
  • the process of determining the answering device by the server may refer to the process of determining the answering device by the terminal 1 in step S307 in the embodiment shown in FIG. 3, which will not be repeated here.
  • terminal 1 transfers the received incoming call to the answering terminal (terminal 2).
  • S1209 The server sends an incoming call instruction to terminal 2.
  • Terminal 2 outputs an incoming call reminder.
  • the incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and an incoming call display reminder (for example, the terminal 2 displays the contact information of the contact on the display screen, etc.).
  • the server sends an incoming call end instruction to the terminal 1.
  • Terminal 1 ends the output of incoming call reminders.
  • Terminal 2 receives the user's answer operation. S1214: terminal 2 returns an answer confirmation to the server.
  • After terminal 2 outputs the incoming call reminder, terminal 2 can receive the user's answer operation (for example, tapping the answer button displayed on the terminal 2 screen, or pressing the physical answer button on terminal 2). In response to the answer operation, terminal 2 can return an answer confirmation to the server.
  • After the server forwards the answer confirmation to terminal 1, in response to the answer confirmation terminal 1 can transfer the voice call to terminal 2.
  • terminal 1 transfers the voice call to the answering terminal (terminal 2).
  • Terminal 1 receives the voice data of the contact.
  • CS-domain voice call: the contact's terminal can collect the contact's voice, establish a call connection with terminal 1 through the CS domain in the mobile communication network, and send the sound signal to terminal 1.
  • VoLTE voice call: the contact's terminal can also collect the contact's voice, compress and encode it with a voice compression algorithm to generate the contact's voice data, encapsulate the voice data into voice data packets, and send the contact's voice data packets to terminal 1 through the IMS in the mobile communication network.
  • VoIP voice call: the contact's terminal can also collect the contact's voice, compress and encode it with a voice compression algorithm to generate the contact's voice data, encapsulate the voice data into voice data packets through the IP protocol and other related protocols, and send the contact's voice data packets to terminal 1 through the Internet.
  • the terminal 1 sends the voice data of the contact to the server.
  • when the voice call transferred by terminal 1 is a CS-domain voice call, after receiving the contact's sound signal, terminal 1 can compress and encode the contact's sound signal with a voice compression algorithm to generate the contact's voice data, and encapsulate the voice data into voice data packets through the IP protocol and other related protocols. Terminal 1 then sends the contact's voice data packets to the server via the Internet.
  • the terminal 1 can forward the voice data packet of the contact to the server via the Internet after receiving the voice data packet of the contact.
  • S1218 The server sends the voice data of the contact to the terminal 2.
  • Terminal 2 plays the voice data of the contact.
  • After receiving the contact's voice data packet from terminal 1, terminal 2 can obtain the contact's voice data from the packet and play it.
  • Terminal 2 collects sound through a microphone and generates the user's voice data. S1221: terminal 2 sends the user's voice data to the server.
  • As noted in step S1214, after terminal 2 returns the answer confirmation to the server, terminal 2 can continuously collect the user's voice and the surrounding environment sound through the microphone.
  • Specifically, terminal 2 can compress and encode the sound collected by the microphone (including the user's voice and the surrounding environment sound) with a voice compression algorithm to generate the user's voice data, and encapsulate that voice data into voice data packets. Terminal 2 then sends the user's voice data packets to the server via the Internet.
  • the server forwards the user's voice data to the terminal 1.
  • Terminal 1 sends the user's voice data to the terminal of the contact.
  • when the voice call transferred by terminal 1 is a CS-domain voice call, terminal 1 can parse the user's voice data from the user's voice data packet, convert the user's voice data into the user's sound signal, and send the sound signal to the contact's terminal through the CS domain in the mobile communication network. After receiving the user's sound signal sent by terminal 1, the contact's terminal can parse the user's voice from the sound signal and play it.
  • when the voice call transferred by terminal 1 is a VoLTE voice call, after terminal 1 receives the user's voice data packet sent by the server, it can forward the packet to the contact's terminal through the IMS; after receiving the packet, the contact's terminal can parse the user's voice data from it and play it.
  • when the voice call transferred by terminal 1 is a VoIP voice call, after terminal 1 receives the user's voice data packet sent by the server, it can forward the packet to the contact's terminal via the Internet; after receiving the packet, the contact's terminal can parse the user's voice data from it and play it.
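The three relay branches above (CS domain, VoLTE via the IMS, VoIP via the Internet) amount to a dispatch on the call type, with an extra decode step only on the circuit-switched leg. The following sketch uses placeholder callbacks for the real transports:

```python
def decode_to_signal(packet):
    """Placeholder for parsing the voice data packet and re-expanding it
    into a sound signal before the circuit-switched leg."""
    return packet

def forward_user_voice(call_type, packet, send_cs, send_ims, send_inet):
    """Terminal 1's relay step for the three call types in the embodiment.
    The send_* callbacks stand in for the CS, IMS, and Internet transports."""
    if call_type == "cs":
        send_cs(decode_to_signal(packet))  # CS domain needs a sound signal
    elif call_type == "volte":
        send_ims(packet)                   # forward the packet through IMS
    elif call_type == "voip":
        send_inet(packet)                  # forward the packet over the Internet
    else:
        raise ValueError(f"unknown call type: {call_type}")
```

A symmetric dispatch applies in the other direction when terminal 1 relays the contact's voice toward the server.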
  • When the above voice call is transferred, if a connection between terminal 1 and terminal 2 has been established, terminal 1 can forward directly to terminal 2: when terminal 1 receives the contact's voice data it sends it to terminal 2, and terminal 2 collects the user's voice data and sends it to terminal 1, which sends it on to the contact's terminal.
  • the aforementioned voice call transfer can also be implemented by a hub device instead of a server.
  • terminal 1 (that is, the terminal that receives the voice call) can transfer the contact's call to terminal 2 to be answered. After terminal 2 answers the contact's call, terminal 1 can also periodically acquire the voice capability parameters and determine a new answering terminal (for example, terminal 3) according to the rules in the above embodiments. After the new answering terminal (for example, terminal 3) is determined, terminal 1 can transfer the voice call to the new answering terminal and no longer to terminal 2.
  • terminal 1 can transfer the contact's call to terminal 2 to be answered. After terminal 2 answers the contact's call, terminal 2 or terminal 1 can receive the user's transfer operation; after receiving the user's transfer operation, terminal 1 can obtain the voice capability parameters and determine a new answering terminal (for example, terminal 3) according to the rules in the foregoing embodiments. After the new answering terminal (for example, terminal 3) is determined, terminal 1 can transfer the voice call to the new answering terminal and no longer to terminal 2.
  • terminal 1 transfers the contact's incoming call to terminal 2, and terminal 2 has output an incoming call reminder, but the call times out unanswered on terminal 2.
  • the terminal 1 can choose to determine a new answering terminal (for example, the terminal 3) from other terminals other than the terminal 1 and the terminal 2, according to the voice capability parameters, and according to the rules in the foregoing embodiment. After the new answering terminal (for example, terminal 3) is determined, terminal 1 can transfer the incoming call to the new answering terminal (for example, terminal 3).
  • the aforementioned terminal 1 can be replaced by a hub device or a server.
  • the voice capability parameter values of each terminal can be stored on each terminal, or on the hub device or server (for example, each terminal periodically reports its voice capability parameter values to the hub device or server).
  • the terminal, central device or server that receives the incoming call can determine the answering terminal for the call transfer or cloud call transfer according to the voice capability parameters of each terminal.
  • the terminal that receives the incoming call can forward it directly to the answering terminal, or forward it to the answering terminal via a hub device or server. The several solutions for determining the answering terminal can be combined with the several solutions for forwarding calls, which are not limited here. For content not described in detail in the illustrated embodiments of this application, reference may be made to the other illustrated embodiments.
  • the N terminals with call capability may not be in a local area network, but may be N terminals bound to the same account, or the N terminals with call capability are associated in other ways.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

This application discloses a voice communication method, including: first, a first terminal receives a voice call. Then, when the first terminal determines that the voice call has timed out unanswered, or the first terminal is already in a call, the first terminal obtains user locations reported by multiple terminals, where any one of the multiple terminals is different from the first terminal. The first terminal determines a second terminal from the multiple terminals according to the user locations they report, where the second terminal is the terminal closest to the user among the multiple terminals. The first terminal transfers the voice call to the second terminal to be answered. In this way, transfer of incoming calls and voice calls between call-capable devices is achieved, the user is prevented from missing incoming calls on those devices, and the user experience is improved.

Description

Voice communication method and related apparatus
This application claims priority to the Chinese patent application No. 201910517494.3, filed with the China National Intellectual Property Administration on June 14, 2019 and entitled "Voice communication method and related apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communication technologies, and in particular, to a voice communication method and related apparatus.
Background
With the development of communication technologies, intelligent control technology and information technology have advanced rapidly. As mobile smart terminals have become widespread, intelligence has also gradually been applied to traditional household devices, and the concept of the smart home has entered users' lives: a user can control the smart devices at home through a mobile terminal, making daily life more convenient.
At present, the call-capable smart devices in a home local area network each handle calls independently. For example, a voice call received on a personal computer can only be answered on that personal computer, and a call received on a mobile phone can only be answered on that phone. At home, when the user is far from the device receiving the call, for example when the phone is in the bedroom while the user is in the living room, the user may not notice the device's incoming-call reminder (ringtone or vibration) and may therefore miss the call, which degrades the user experience.
Summary
This application provides a voice communication method and related apparatus, which achieve the transfer of incoming calls and voice calls between call-capable devices in a home, prevent the user from missing incoming calls on those devices, and improve the user experience.
In a first aspect, this application provides a voice communication method, including: first, a first terminal receives a voice call; then, when the first terminal determines that the voice call has timed out unanswered, or the first terminal is in a call, the first terminal obtains user locations reported by multiple terminals, where any one of the multiple terminals is different from the first terminal; the first terminal determines a second terminal from the multiple terminals according to the user locations they report, where the second terminal is the terminal closest to the user among the multiple terminals; and the first terminal transfers the voice call to the second terminal to be answered.
This application thus proposes a voice communication method in which, after a terminal receives a voice call, if the terminal is occupied or the call times out unanswered, the terminal can determine the optimal answering terminal from the other call-capable terminals according to the user locations they report, and transfer the voice call to that answering terminal. In this way, through the transfer of voice calls between the call terminals in a home, the user is prevented from missing voice calls on the call terminals, and the user experience is improved.
In a possible implementation, the first terminal obtaining the user locations reported by the multiple terminals specifically includes: the first terminal obtains voiceprint energy values of the user reported by the multiple terminals, where a higher voiceprint energy value indicates that the terminal reporting it is closer to the user. After obtaining the user locations reported by the multiple terminals, the first terminal determines the second terminal from the multiple terminals according to the reported voiceprint energy values, where the second terminal has the highest voiceprint energy value among the multiple terminals. In this way, the first terminal can use the user's voiceprint energy values to identify the terminal closest to the user and transfer the voice call to that terminal, preventing the user from missing the voice call.
In a possible implementation, the method further includes: first, the first terminal obtains call frequencies reported by the multiple terminals, where a terminal's call frequency is the ratio of that terminal's number of calls to the total number of calls of all terminals connected to the router. Then, when several of the multiple terminals are equally closest to the user, the first terminal determines the second terminal from those closest terminals according to the call frequencies, where the second terminal has the highest call frequency among the terminals closest to the user. In this way, when more than one terminal is closest to the user, the second terminal on which the user most frequently calls can further be selected according to the terminals' call frequencies, improving the user experience.
In a possible implementation, the method further includes: first, the first terminal obtains voice capability priorities reported by the multiple terminals, where a terminal's voice capability priority is determined by its device type. When several of the terminals closest to the user share the highest call frequency, the first terminal determines the second terminal from them according to the voice capability priorities, where the second terminal has the highest voice capability priority among the terminals that are closest to the user and have the highest call frequency. In this way, when more than one terminal is closest to the user, the second terminal with the highest voice capability priority can further be selected, improving the user experience.
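The tie-break chain in the implementations of the first aspect (closest user by voiceprint energy, then highest call frequency, then highest voice capability priority) can be expressed as a lexicographic ordering. The tuple layout and names below are illustrative:

```python
def pick_second_terminal(reports):
    """Apply the first aspect's tie-break chain.
    Each report is (terminal_id, voiceprint_x, call_freq_n, priority_m):
    compare by voiceprint energy first, then call frequency, then
    voice capability priority."""
    return max(reports, key=lambda r: (r[1], r[2], r[3]))[0]
```

Python's tuple comparison gives the lexicographic behavior for free: later fields are consulted only when earlier ones tie, which is exactly the cascading rule described above.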
In a possible implementation, the first terminal transferring the voice call to the second terminal to be answered specifically includes: the first terminal receives the contact's voice data sent by the contact's terminal and receives the user's voice data sent by the second terminal; the first terminal sends the contact's voice data to the second terminal and sends the user's voice data to the contact's terminal.
In a possible implementation, before the first terminal receives the contact's voice data sent by the contact's terminal and receives the user's voice data sent by the second terminal, the method further includes: first, the first terminal sends an incoming call instruction to the second terminal, where the instruction is used to instruct the second terminal to output an incoming call reminder; then the first terminal receives an answer confirmation sent by the second terminal; and in response to the answer confirmation, the first terminal receives the contact's voice data sent by the contact's terminal and receives the user's voice data sent by the second terminal.
In a possible implementation, before the first terminal transfers the voice call to the second terminal to be answered, the method further includes: the first terminal establishes a connection with the second terminal.
In a possible implementation, after the second terminal answers the contact's call, the first terminal may also periodically obtain the user locations reported by the terminals other than the second terminal among the multiple terminals, and determine a third terminal, where the third terminal is the terminal closest to the user among the multiple terminals other than the second terminal. After determining the third terminal, the first terminal can transfer the voice call to the third terminal and no longer to the second terminal.
In a possible implementation, after the second terminal answers the contact's call, the first terminal may also receive the user's transfer operation. After receiving the user's transfer operation, the first terminal can obtain the user locations reported by the terminals other than the second terminal among the multiple terminals, and determine a third terminal, where the third terminal is the terminal closest to the user among the multiple terminals other than the second terminal. After determining the third terminal, the first terminal can transfer the voice call to the third terminal and no longer to the second terminal.
In a possible implementation, the first terminal transfers the contact's incoming call to the second terminal, and the first terminal detects that the second terminal has output an incoming call reminder but the call has timed out unanswered; the first terminal can then determine a third terminal from the multiple terminals other than the first terminal and the second terminal, according to the user locations those other terminals report, where the third terminal is the terminal closest to the user among those other terminals. After determining the third terminal, the first terminal can transfer the contact's incoming call to the third terminal and no longer to the second terminal.
In a possible implementation, after the first terminal receives the voice call, it may first determine whether the call transfer function is enabled. If enabled, then when the voice call is unanswered, or the first terminal is in a call, the first terminal obtains the user locations reported by the multiple terminals. If the call transfer function is disabled, the first terminal outputs an incoming call reminder and waits to receive the user's answer operation.
In a second aspect, this application provides another voice communication method, including: first, a first terminal receives a voice call; then, when the first terminal determines that the voice call has timed out unanswered, or the first terminal is in a call, the first terminal obtains the user locations, call frequencies, voice capability priorities, and device states reported by multiple terminals, where any one of the multiple terminals is different from the first terminal; then, the first terminal determines a second terminal from the multiple terminals according to the reported user locations, call frequencies, voice capability priorities, and device state values; next, the first terminal transfers the voice call to the second terminal to be answered.
This application proposes a voice communication method in which, after a terminal receives a voice call, if the terminal is occupied or the call times out unanswered, the terminal can determine the optimal answering terminal from the other call-capable terminals according to the user locations, voice capability priorities, call frequencies, and device states they report, and transfer the incoming call and the voice call to that answering terminal. In this way, through the transfer of incoming calls and voice calls between the call terminals in a home, the user is prevented from missing voice calls on the call terminals, and the user experience is improved.
In a third aspect, this application provides another voice communication method, including: first, when a first terminal is in a voice call with a contact's terminal, the first terminal receives the user's call transfer operation; then, in response to the call transfer operation, the first terminal obtains user locations reported by multiple terminals, where any one of the multiple terminals is different from the first terminal; next, the first terminal determines a second terminal from the multiple terminals according to the user locations they report, where the second terminal is the terminal closest to the user among the multiple terminals; finally, the first terminal transfers the voice call to the second terminal.
This application thus provides a voice communication method in which, while the user is in a call with a contact through a first terminal (such as a smart phone), the first terminal can determine a second terminal for the voice call according to the user locations reported by the other terminals, and transfer the voice call to the second terminal (for example, a smart TV). In this way, the quality of the call between the user and the contact can be maintained as the user moves around indoors.
In a fourth aspect, this application provides a terminal, including one or more processors, one or more memories, and a transceiver. The one or more memories and the transceiver are coupled to the one or more processors. The one or more memories are configured to store computer program code, and the computer program code includes computer instructions; when the one or more processors execute the computer instructions, the terminal is caused to perform the voice communication method in any possible implementation of any of the above aspects.
In a fifth aspect, an embodiment of this application provides a computer storage medium, including computer instructions, which, when run on a terminal, cause the terminal to perform the voice communication method in any possible implementation of any of the above aspects.
In a sixth aspect, an embodiment of this application provides a computer program product, which, when run on a computer, causes the computer to perform the voice communication method in any possible implementation of any of the above aspects.
In a seventh aspect, this application provides another voice communication method, including: first, a hub device receives a call transfer request sent by a first terminal; then, in response to the call transfer request sent by the first terminal, the hub device obtains user locations reported by multiple terminals, where any one of the multiple terminals is different from the first terminal, and the first terminal and the multiple terminals are all connected to the hub device; next, the hub device determines a second terminal from the multiple terminals according to the user locations they report, where the second terminal is the terminal closest to the user among the multiple terminals; finally, the hub device sends an incoming call notification to the second terminal, where the notification is used for the second terminal to output an incoming call reminder.
This application thus proposes a voice communication method in which, after a terminal receives a voice call, if the terminal is occupied or the call times out unanswered, the hub device can determine the optimal answering terminal from the other call-capable terminals connected to the hub device, according to the user locations reported by each terminal. The hub device transfers the voice call to that answering terminal. In this way, through the transfer of voice calls between the call terminals in a home, the user is prevented from missing voice calls on the call terminals, and the user experience is improved.
In a possible implementation, after the hub device sends the incoming call notification to the second terminal, the method further includes: the hub device receives an answer instruction sent by the second terminal; in response to the answer instruction, the hub device receives the contact's voice data sent by the contact's terminal and receives the user's voice data sent by the second terminal; the hub device sends the contact's voice data to the second terminal and sends the user's voice data to the contact's terminal.
In an eighth aspect, this application provides another voice communication method, including: first, a server receives a call transfer request sent by a first terminal; then, in response to the call transfer request, the server obtains user locations reported by multiple terminals, where any one of the multiple terminals is different from the first terminal; next, the server determines a second terminal from the multiple terminals according to the user locations they report, where the second terminal is the terminal closest to the user among the multiple terminals; finally, the server sends an incoming call notification to the second terminal, where the notification is used for the second terminal to output an incoming call reminder.
This application thus proposes a voice communication method in which, after a terminal receives a voice call, if the terminal is occupied or the call times out unanswered, the server can determine the optimal answering terminal from the other call-capable terminals connected to the server, according to the user locations reported by each terminal. The server transfers the voice call to that answering terminal. In this way, through the transfer of voice calls between the call terminals, the user is prevented from missing voice calls on the call terminals, and the user experience is improved.
In a possible implementation, after the server sends the incoming call notification to the second terminal, the method further includes: the server receives an answer instruction sent by the second terminal; in response to the answer instruction, the server receives the contact's voice data sent by the contact's terminal and receives the user's voice data sent by the second terminal; the server sends the contact's voice data to the second terminal and sends the user's voice data to the contact's terminal.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of this application;
FIG. 2 is a schematic diagram of a network architecture according to an embodiment of this application;
FIG. 3 is a schematic flowchart of a voice communication method according to an embodiment of this application;
FIG. 4 is a schematic diagram of a voice communication scenario according to an embodiment of this application;
FIG. 5 is a schematic diagram of a voice communication scenario according to another embodiment of this application;
FIG. 6 is a schematic diagram of a voice communication scenario according to another embodiment of this application;
FIG. 7 is a schematic flowchart of a voice communication method according to another embodiment of this application;
FIG. 8A-8C are schematic diagrams of scenarios of a voice communication method according to another embodiment of this application;
FIG. 9A-9C are schematic diagrams of scenarios of a voice communication method according to another embodiment of this application;
FIG. 10 is a schematic flowchart of a voice communication method according to another embodiment of this application;
FIG. 11 is a schematic diagram of a network architecture according to another embodiment of this application;
FIG. 12 is a schematic flowchart of a voice communication method according to another embodiment of this application.
Description of Embodiments
The technical solutions in the embodiments of this application are described below clearly and thoroughly with reference to the accompanying drawings. In the descriptions of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may represent A or B. "And/or" in this text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent three cases: A alone, both A and B, and B alone. In addition, in the descriptions of the embodiments of this application, "multiple" means two or more than two.
The terms "first" and "second" below are used for descriptive purposes only and shall not be understood as implying or suggesting relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the descriptions of the embodiments of this application, unless otherwise specified, "multiple" means two or more.
FIG. 1 shows a schematic structural diagram of the terminal 100.
The following describes the embodiments specifically by taking the terminal 100 as an example. It should be understood that the terminal 100 shown in FIG. 1 is merely an example, and the terminal 100 may have more or fewer components than shown in FIG. 1, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application-specific integrated circuits.
The terminal 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the terminal 100. In other embodiments of this application, the terminal 100 may include more or fewer components than shown, combine some components, split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the terminal 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory. Repeated accesses are thereby avoided and the waiting time of the processor 110 is reduced, improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the terminal 100.
The I2S interface can be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, to realize the function of answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, to realize the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is usually used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, to realize the function of playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through the CSI interface to realize the shooting function of the terminal 100. The processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the terminal 100.
The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the terminal 100, and can also be used to transmit data between the terminal 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. The interface can also be used to connect other electronic devices, such as AR devices.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对终端100的结构限定。在本申请另一些实施例中,终端100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过终端100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过 电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
终端100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。终端100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在终端100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在终端100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,终端100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得终端100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE), BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
终端100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，终端100可以包括1个或N个显示屏194，N为大于1的正整数。
终端100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,终端100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当终端100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。终端100可以支持一种或多种视频编解码器。这样,终端100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现终端100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展终端100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行终端100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操 作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储终端100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
终端100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。终端100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当终端100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。终端100可以设置至少一个麦克风170C。在另一些实施例中,终端100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,终端100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。终端100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,终端100根据压力传感器180A检测所述触摸操作强度。终端100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定终端100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定终端100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测终端100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消终端100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,终端100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。终端100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当终端100是翻盖机时,终端100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测终端100在各个方向上(一般为三轴)加速度的大小。当终端100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。终端100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,终端100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。终端100通过发光二极管向外发射红外光。终端100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定终端100附近有物体。当检测到不充分的反射光时,终端100可以确定终端100附近没有物体。终端100可以利用接近光传感器180G检测用户手持终端100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。终端100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测终端100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。终端100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,终端100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,终端100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,终端100对电池142加热,以避免低温导致终端100异常关机。在其他一些实施例中,当温度低于又一阈值时,终端100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于终端100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。终端100可以接收按键输入,产生与终端100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈 效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和终端100的接触和分离。终端100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。终端100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,终端100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在终端100中,不能和终端100分离。
目前,在现有技术中,当用户不方便在移动终端上接听电话时,用户可以通过如下方式转接电话:1、用户提前开启移动终端呼叫转移功能,并让用户填写接听终端的电话号码,移动终端可以将接听终端的电话号码发送给移动通信网的网络侧设备,网络侧设备可以将移动终端的号码与接听终端的电话号码绑定在一起,当网络侧设备接收到针对移动终端的呼叫请求时,网络侧设备可以将针对移动终端的呼叫请求,转移至接听终端上,从而可以让接听终端接听打给移动终端的电话。但是这样,一旦用户开启移动终端呼叫转移功能之后,打给移动终端的所有电话,都会被转移至接听终端,若用户想继续使用移动终端接听电话,却忘记关闭呼叫转移功能时,会漏接很多电话,给用户造成了不便。
2、用户可以将移动终端通过蓝牙或Wi-Fi技术,与耳机或音箱相连,当移动终端接听到来电时,移动终端会默认将来电呼叫和语音通话转移至耳机或音箱,并通过耳机或者音箱采集用户的声音。当用户需要用移动终端来进行通话时,移动终端需要接收用户选择输入,才能够将通话切换回移动终端上,给用户造成了繁杂的操作步骤,且在用户切换语音通话的过程中,用户容易漏掉部分通话内容,给用户造成了不便。
针对上述问题，本申请提出了一种语音通信方法，实现了在终端接收到语音来电之后，若终端被占用或超时未接听，终端可以从家庭局域网内其他具备通话能力的终端中，根据各终端的语音能力参数(例如语音能力优先级m、通话频率n、声纹能量值x、用户位置值y、设备状态值s等)，确定出最优的接听终端，并将来电呼叫与语音通话转移至该接听终端上。这样，通过家庭中各通话终端之间的来电呼叫与语音通话的转移，避免了用户漏接通话终端上的语音来电，提高了用户体验。
下面介绍本申请实施例提供的一种网络架构。
请参照图2,图2示出了本申请实施例中提供的一种网络架构200示意图。如图2所示,该网络架构200包括多个终端。其中,该终端可以包括智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206等,本申请对此不作任何限制。
其中，该多个终端都可以具备通话能力，该多个终端可以通过如下方式接收到呼叫与通话：1、该多个终端都可以接收来自移动通信网中电路交换域(circuit switched domain,CS域)的来电呼叫或通话。2、该多个终端可以接收来自移动通信网中IP多媒体子系统(IP multimedia subsystem,IMS)中基于VoLTE技术的来电呼叫或通话。3、该多个终端可以接收来自因特网(Internet)中基于VoIP技术的来电呼叫或通话。
该多个终端都可以通过有线或无线保真(wireless fidelity,WiFi)连接的方式连接至局域网(local area network,LAN)。在该网络架构200中的终端的结构可以参考上述图1所示的终端100,在此不再赘述。比如该多个终端连接到同一个路由器。
在一种可能的实现方式中,该网络架构200的局域网中还包括有中枢设备207,该中枢设备207可以与网络架构200中的多个终端(例如终端201、终端202、终端203、终端204、终端205和终端206。)连接。该中枢设备207可以是路由器、网关、智能设备控制器等。该中枢设备207可以包括有存储器、处理器和收发器,存储器可以用于存储有该多个终端各自的语音能力参数(例如语音能力优先级m、通话频率n、声纹能量值x,用户位置值y,设备状态值s等)。处理器可以用于当连接到局域网的某个终端需要转接来电时,从多个终端各自的语音能力参数中,确定出接听终端。收发器可用于与连接到局域网的该多个终端进行通信。
在一种可能的实现方式中，该网络架构200中还可以包括服务器208，其中，该服务器208可以是智能家居云网络中的服务器，其数量不限于一个，可以是多个，在此不作限定。其中，该服务器208可以包括存储器、处理器和收发器，其中，存储器可以用于存储该多个终端各自的语音能力参数(例如语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s等)。收发器可以用于与局域网中各终端进行通信。处理器可用于处理局域网中各终端的获取数据请求，并指示收发器下发多个终端各自的语音能力参数给各终端。
下面结合上述图2所示的网络架构200,以及应用场景,具体解释本申请实施例提供的一种语音通信方法。在本申请实施例中,接收到语音来电的终端可以被称为第一终端,接听终端可以被称为第二终端。
在一些应用场景中,家庭内的局域网中具有多个具备通话能力的终端,例如智能手机、智能手表、智能音箱、个人电脑、智能电视和智能平板等。用户可以通过任一终端(例如智能手机)接收到语音来电,当用户长时间未接听时,或者来电设备已经被占用(例如正在通话中),用户可能会漏接来电,给用户造成不便。因此,本申请实施例提供了一种语音通信方法,终端1(例如智能手机)接收到来电后,当终端1超时未接收到用户的接听操作或者该终端已经被占用(例如正在通话中)时,终端1可以根据其他终端的语音能力参数(例如语音能力优先级m,通话频率n,声纹能量值x,用户位置y,设备状态值s等),确定出该来电的接听终端(例如智能电视),并将来电转移至接听终端(例如智能电视)上。这样,可以避免用户错过终端上的来电,提高了用户体验。
请参照图3,图3示出了本申请实施例中提供的一种语音通信方法。其中,局域网中可以包括N个具备通话能力的终端,N为大于2的整数。局域网是指N个具备通话能力的终端都连接在一个路由器上构成的计算机网络。其中,局域网中的多个具备通话能力的终端可以绑定有同一个用户账号。在图3所示实施例中,任一个接收到来电的终端都可以被称为终端1。例如智能手机接收到来电时,智能手机可以被称为终端1,当智能电视接收到来电时,智能电视可以被称为终端1,在此不作任何限定。如图3所示,该方法包括:
S301、终端1接收到来电。
其中,该来电可以指语音来电。联系人的终端(即发起呼叫的终端)可以通过移动通信网络中CS域拨打语音电话给终端1,联系人的终端还可以通过移动通信网络中IMS网络拨打基于VoLTE技术的语音电话给终端1,联系人的终端还可以通过因特网(Internet)拨打基于VoIP技术的语音电话给终端1。
S302、终端1输出来电提醒。
终端1在接收到联系人的终端拨打过来的电话后,可以输出来电提醒。其中,该来电提醒可以包括以下至少一种:铃声提醒、机械振动提醒和来电显示提醒(例如终端1在显示屏上显示联系人的联系方式等)。
S303、终端1判断是否开启通话转移功能,若是,则S304、终端1与局域网中的其他各终端建立连接。若否,则终端1与局域网中的其他各终端不建立连接。
其中,终端1可以在接收到联系人的来电之前,接收用户的设置输入,响应于用户的设置输入,终端1可以开启或关闭通话转移功能。这样,终端1可以根据用户的需求,转移接收到的来电,提高了用户的体验。可以理解地,通话转移功能是将本终端收到的来电呼叫或语音通话转出到其他终端上,由其他终端进行来电提醒,和/或采集用户的语音,播放来电方(即联系人)的语音。当终端1开启通话转移功能时,终端1接收到来电且需要将来电转出,则终端1收到的来电呼叫转出到其他终端上,由其他终端进行来电提醒,当其他终端上检测到用户接听来电,终端1将语音通话转移到其他终端上。当终端1开启通话转移功能时,终端1正在进行语音通话且需要将通话转出,则终端1将语音通话转移到其他终端上。
当终端1判断已经开启通话转移功能之后,终端1可以与局域网中的其他各终端(终端2、终端3、……、终端N)建立连接。其中,该连接可以是基于TCP/IP协议的连接,在该基于TCP/IP协议的连接下,终端1可以基于VoIP技术将来电(包括呼叫和语音通话)转移至转接设备。该连接可以是Wi-Fi直连。该连接还可以是通过路由器建立的连接。若两个设备支持蓝牙功能时,该连接还可以是蓝牙连接,等等。这样,终端1在确定需要转移来电之后,可以及时将来电转移至转接设备(例如终端2),无需用户等待过多时间,减少了转移来电所带来的时延。
在一种可能的实现方式中,若终端1是从中枢设备或者服务器上获取其他各终端的语音能力参数时,终端1可以在确定出接听设备之后,只与转接设备建立连接。这样,在终端1只与转接设备建立连接,减少无线资源消耗。
S305、终端1判断来电是否超时未被接听或者终端1是否被占用,若是,则S306、终端1获取局域网中其他各终端的语音能力参数。
在终端1与局域网中的其他各终端建立连接之后,终端1可以先判断终端1是否已经被占用,若是,则终端1可以获取其他各终端(终端2、终端3、……、终端N)的语音能力参数,若未被占用,则终端1可以判断在接收到的来电之后是否超过指定时间阈值(例如10s)未接收到用户的接听操作。若超过指定时间阈值未接听,则终端1获取其他各终端(终端2、终端3、……、终端N)的语音能力参数。其中,语音能力参数包括:语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s。其中:
1、语音能力优先级m用于表示终端的通话效果，终端的通话效果越好，语音能力优先级越大。例如，终端1的语音能力优先级m可以为1，终端2的语音能力优先级m可以为0.5，则表示终端1的通话效果比终端2的通话效果好。终端的语音能力优先级是由终端的类型确定的。示例性的，终端类型与语音能力优先级的对应关系可以如下列表1所示：
表1
终端类型 语音能力优先级m
智能手机 1
平板电脑 0.8
智能音箱 0.6
智能电视 0.5
智能手表 0.4
个人电脑 0.3
由上表1可以看出，智能手机类终端的语音能力优先级可以是1，平板电脑类终端的语音能力优先级可以是0.8，智能音箱类终端的语音能力优先级可以是0.6，智能电视类终端的语音能力优先级可以是0.5，智能手表类终端的语音能力优先级可以是0.4，个人电脑类终端的语音能力优先级可以是0.3。上述表1所示内容，仅仅用于解释本申请，不应构成限定。
2、通话频率n用于表示终端接通语音通话的频繁程度,通话频率n越大表示终端接通语音通话次数越频繁。其中,通话频率n可以为该终端的通话次数与局域网中所有终端的总通话次数之比。其中,局域网中的任一终端都可以周期性(例如周期可以是一周,一个月等等)发送通话次数给其他终端。终端在接收其他终端各自的通话次数之后,可以确定出局域网中所有终端的总通话次数。示例性的,假设局域网中有6个终端,分别为智能手机、平板电脑、智能音箱、智能电视、智能手表、个人电脑。这6个终端的通话次数与通话频率n可以如下表2所示:
表2
终端 通话次数 通话频率n
智能手机 80 0.4
平板电脑 20 0.1
智能音箱 20 0.1
智能电视 60 0.3
智能手表 10 0.05
个人电脑 10 0.05
由上表2可以看出,智能手机的通话次数为80、平板电脑的通话次数为20、智能音箱的通话次数为20,智能电视的通话次数为60,智能手表的通话次数为10,个人电脑的通话次数为10。其中,局域网中这6个终端的总通话次数为200。因此,智能手机的通话频率为0.4,平板电脑的通话频率为0.1,智能音箱的通话频率为0.1,智能电视的通话频率为0.3,智能手表的通话频率为0.05,个人电脑的通话频率为0.05。上述表2所示内容,仅仅用于解释本申请,不应构成限定。
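上述“通话频率n为该终端的通话次数与局域网中所有终端的总通话次数之比”的计算方式，可以用如下Python片段示意(仅为帮助理解的草图，函数名call_frequencies为本文示例所设，并非本申请限定的实现)：

```python
def call_frequencies(counts):
    """根据各终端的通话次数，计算各终端的通话频率n：
    该终端的通话次数除以局域网中所有终端的总通话次数。"""
    total = sum(counts.values())
    return {name: c / total for name, c in counts.items()}

# 以表2中的通话次数为例
counts = {"智能手机": 80, "平板电脑": 20, "智能音箱": 20,
          "智能电视": 60, "智能手表": 10, "个人电脑": 10}
freq = call_frequencies(counts)
```

按表2的数据，智能手机的通话频率为80/200=0.4，各终端通话频率之和为1。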
3、声纹能量值x用于表示终端接收到用户的声音大小，声纹能量值越大，表示该用户越靠近此终端。其中，终端可以通过麦克风实时采集声音，终端根据采集到的声音计算用户的声纹能量值。示例性的，假设局域网中有6个终端，分别为智能手机、平板电脑、智能音箱、智能电视、智能手表、个人电脑。这6个终端采集到的声纹能量值x可以如下表3所示：
表3
终端 声纹能量值x(单位:100dB)
智能手机 0.23
平板电脑 0
智能音箱 0.25
智能电视 0.55
智能手表 0.5
个人电脑 0
由上表3可以看出,智能手机的声纹能量值为0.23、平板电脑的声纹能量值为0、智能音箱的声纹能量值为0.25,智能电视的声纹能量值为0.55,智能手表的声纹能量值为0.5,个人电脑的声纹能量值为0。上述表3所示内容,仅仅用于解释本申请,不应构成限定。
4、用户位置值y用于表示终端与用户的位置关系。例如,用户位置值y为1时,则表示用户在该终端周围。当用户位置值y为0时,则表示该终端未检测用户在该终端的周围。终端可以通过摄像头获取该终端周围的图像,并从图像中检测用户是否与该终端处于家庭户型图中的同一位置(例如都在卧室),若是,则该终端的用户位置值y为1;若否,则该终端的用户位置值y为0。其中,该摄像头可以是终端自身的摄像头,还可以是连接在局域网中单独的摄像头。当该摄像头为连接在局域网中单独的摄像头时,摄像头在获取到用户的位置(例如在卧室)之后,可以将用户的位置(例如在卧室)发送给局域网中的各终端。当该终端不具备获取用户的位置的能力时(例如摄像头已损坏或未配置有摄像头),该终端的用户位置y为0.5。示例性的,假设局域网中有6个终端,分别为智能手机、平板电脑、智能音箱、智能电视、智能手表、个人电脑。其中,智能手机可以在主卧中,智能音箱可以在次卧1中,平板电脑可以在次卧2中,智能电视和智能手表可以在客厅处。当用户处在客厅时,这6个终端采集到用户位置值y可以如下表4所示:
表4
终端 用户位置值y
智能手机 0
平板电脑 0
智能音箱 0.5
智能电视 1
智能手表 1
个人电脑 0
由上表4可以看出，智能手机的用户位置值y为0，表示用户不在智能手机的周围。平板电脑的用户位置值y为0，表示用户不在平板电脑的周围。智能音箱的用户位置值y为0.5，表示智能音箱不具备获取用户位置的能力。智能电视的用户位置值y为1，表示用户在智能电视的周围。智能手表的用户位置值y为1，表示用户在智能手表的周围。个人电脑的用户位置值y为0，表示用户不在个人电脑的周围。上述表4所示内容，仅仅用于解释本申请，不应构成限定。
5、设备状态值s用于表示终端当前是否可用于接听语音通话。当该终端当前处于空闲状态,可用于接听语音通话时,该设备状态值s为1。当该终端当前处于被占用状态时,例如该终端正在语音通话中,该设备状态值s为0.5。当该终端当前处于不可用状态时,例如该终端 已关闭通话转入功能,即该终端不接收其他终端的转移来的通话,该设备状态值s为0。其中,终端可以接收用户的输入,关闭通话接入功能,终端还可以在正在通话时关闭通话转入功能,在此不作限定。示例性的,假设局域网中有6个终端,分别为智能手机、平板电脑、智能音箱、智能电视、智能手表、个人电脑。这6个终端的设备状态值s可以如下表5所示:
表5
终端 设备状态值s
智能手机 1
平板电脑 0.5
智能音箱 1
智能电视 1
智能手表 1
个人电脑 0
由上述表5可以看出，智能手机的设备状态值s为1，即表示智能手机当前可用于接听语音通话。平板电脑的设备状态值s为0.5，即表示平板电脑当前为占用状态(例如正在通话中)。智能音箱的设备状态值s为1，即表示智能音箱当前可用于接听语音通话。智能电视的设备状态值s为1，即表示智能电视当前可用于接听语音通话。智能手表的设备状态值s为1，即表示智能手表当前可用于接听语音通话。个人电脑的设备状态值s为0，即表示个人电脑当前不可用于接听语音通话。
在一种可能的实现方式中,其他终端各自的语音能力参数存储于其他终端本地的存储器上。例如,终端2的语音能力参数存储于终端2本地的存储器上,终端3的语音能力参数存储于终端3本地的存储器上,……,终端N的语音能力参数存储于终端N本地的存储器上。终端1可以发送语音能力参数的获取指令给局域网中的其他终端(终端2、终端3、……、终端N),其他终端(终端2、终端3、……、终端N)在接收到语音能力参数的获取指令后,可以将各自的语音能力参数发送给终端1。
在一种可能的实现方式中,其他终端各自的语音能力参数存储于局域网中的中枢设备上。局域网中的每个终端可以将各自的语音能力参数发送给中枢设备207。当终端1判定接收到的来电超过指定时间阈值未接收到用户的接听操作或者终端1已经被占用(例如正在通话中),终端1可以向中枢设备207发送语音能力参数的获取指令。中枢设备207在接收到语音能力参数的获取指令后,可以将其他终端(终端2、终端3、……、终端N)的语音能力参数发送给终端1。
在一种可能的实现方式中,其他终端各自的语音能力参数存储于智能家居云的服务器208上。局域网中的每个终端可以连接至服务器208,局域网中的每个终端可以将各自的语音能力参数发送给服务器208。当终端1判断接收到的来电超过指定时间阈值未接收到用户的接听操作或者终端1已经被占用(例如正在通话中),终端1可以向服务器208发送语音能力参数的获取指令。服务器208在接收到语音能力参数的获取指令后,可以将其他终端(终端2、终端3、……、终端N)的语音能力参数发送给终端1。
下面介绍终端1如何确定出来电转移的接听终端。
S307、终端1根据其他各终端的语音能力参数,确定出接听终端。其中,该接听终端用于接收终端1转移的来电。
其中,语音能力参数可以包括语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s。针对语音能力参数的文字描述,可以参考前述实施例,在此不再赘述。
在一种可能的实现方式中，终端1可以从其他终端中筛选出当前可进行通话的设备(即设备状态值s为1的终端)，例如终端2、终端3、……、终端N当前都可以进行通话。终端1可以先比较其他终端各自的声纹能量值x的大小。若当前可以进行通话的终端(终端2、终端3、……、终端N)中仅有一个声纹能量值x最大的终端(例如终端2)，则终端1可以将该声纹能量值x最大的终端(例如终端2)确定为接听终端。
若当前可以进行通话的终端(终端2、终端3、……、终端N)中有多个声纹能量值x最大的终端(例如终端2、终端3、终端4和终端5)，则终端1可以比较该多个声纹能量值x最大的终端(例如终端2、终端3、终端4和终端5)各自的用户位置值y的大小。若该多个声纹能量值x最大的终端(例如终端2、终端3、终端4和终端5)中仅有一个用户位置值y最大的终端(例如终端2)，则终端1可以将该声纹能量值x最大且用户位置值y最大的终端(例如终端2)确定为接听终端。
若该多个声纹能量值x最大的终端(例如终端2、终端3、终端4和终端5)中有多个用户位置值y最大的终端(例如终端2、终端3和终端4)，则终端1可以比较该声纹能量值x最大且用户位置值y最大的终端(例如终端2、终端3和终端4)各自的通话频率n大小。若该多个声纹能量值x最大且用户位置值y最大的终端(例如终端2、终端3和终端4)中仅有一个通话频率n最大的终端(例如终端2)，则终端1可以将该声纹能量值x最大、且用户位置值y最大、且通话频率n最大的终端(例如终端2)确定为接听终端。
若该多个声纹能量值x最大且用户位置值y最大的终端(例如终端2、终端3和终端4)中有多个通话频率n最大的终端(例如终端2和终端3)，则终端1可以比较该声纹能量值x最大、且用户位置值y最大、且通话频率n最大的终端(例如终端2和终端3)各自的语音能力优先级m的大小。若该多个声纹能量值x最大且用户位置值y最大且通话频率n最大的终端(例如终端2和终端3)中仅有一个语音能力优先级m最大的终端(例如终端2)，则终端1可以将该声纹能量值x最大、且用户位置值y最大、且通话频率n最大、且语音能力优先级m最大的终端(例如终端2)确定为接听终端。
若声纹能量值x最大、且用户位置值y最大、且通话频率n最大、且语音能力优先级m最大的终端有多个，则终端1可以从该多个终端中随机选择一个终端作为接听终端。
可以理解的是，终端1可以根据用户的声纹能量值x和/或用户位置值y确定出用户位置。终端1可以根据用户位置、语音能力优先级m、通话频率n以及设备状态值s中的一项或多项，确定出接听终端。例如，终端1可以只根据声纹能量值x，确定出接听终端(例如终端2)。其中，接听终端为除终端1之外的其他终端中声纹能量值x最大的一个。又例如，终端1可以根据声纹能量值x和通话频率n，确定出接听终端(例如终端2)。其中，接听终端为除终端1之外的其他终端中声纹能量值x最大且通话频率n最大的一个。
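上述逐级比较(先按声纹能量值x，再依次按用户位置值y、通话频率n、语音能力优先级m，仍并列时随机选择)的筛选过程，可以用如下Python片段示意(仅为帮助理解的草图，函数名与参数结构均为本文示例所设，并非本申请限定的实现)：

```python
import random

def pick_answering_terminal(terminals):
    """terminals: {终端名: {"m":..., "n":..., "x":..., "y":..., "s":...}}。
    先筛选设备状态值s为1(当前可进行通话)的终端，
    再依次按声纹能量值x、用户位置值y、通话频率n、语音能力优先级m
    逐级保留最大值对应的终端；若最终仍有多个并列，则随机选择一个。"""
    candidates = [t for t, p in terminals.items() if p["s"] == 1]
    if not candidates:
        return None  # 没有可接听的终端
    for key in ("x", "y", "n", "m"):
        best = max(terminals[t][key] for t in candidates)
        candidates = [t for t in candidates if terminals[t][key] == best]
        if len(candidates) == 1:
            return candidates[0]
    return random.choice(candidates)
```

例如，两个终端声纹能量值x并列最大时，用户位置值y较大者被确定为接听终端；设备状态值s不为1的终端即使声纹能量值最大也不会被选中。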
在一种可能的实现方式中,终端1可以根据如下公式(1)计算其他终端各自的转接能力 值V。其中:
V=f(a*m,b*n,c*x,d*y)*s       公式(1)
其中，m为语音能力优先级，n为通话频率，x为声纹能量值，y为用户位置值，s为设备状态值，a为语音能力优先级的权重，b为通话频率的权重，c为声纹能量值的权重，d为用户位置值的权重。f(z1,z2,z3,z4)为运算函数，其中，该运算函数f(z1,z2,z3,z4)可以是求和函数。即，
V=(a*m+b*n+c*x+d*y)*s       公式(2)
终端1在计算出其他终端(终端2、终端3、……、终端N)各自的转接能力值V之后，可以将转接能力值V最大的终端(例如终端2)作为接听终端。
其中,上述语音能力参数(语音能力优先级m或通话频率n或声纹能量值x或用户位置值y)对接听终端的确定结果影响越大,其权重越大。例如,对接听终端的确定影响:声纹能量值x大于用户位置值y大于通话频率n大于语音能力优先级m,则c>d>b>a。在一种可能的实现方式中,语音能力参数各自的权重可变。局域网中的任一终端可以接收用户的输入操作,重新设置语音能力参数各自的权重。
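公式(2)的计算以及“取转接能力值V最大的终端作为接听终端”的过程，可以用如下Python片段示意(仅为帮助理解的草图；函数名为本文示例所设，权重a=0.1、b=0.2、c=0.5、d=0.2沿用下文示例中的取值，满足c>d>b>a)：

```python
def transfer_value(p, a=0.1, b=0.2, c=0.5, d=0.2):
    """按公式(2)计算终端的转接能力值：V=(a*m+b*n+c*x+d*y)*s。"""
    return (a * p["m"] + b * p["n"] + c * p["x"] + d * p["y"]) * p["s"]

def pick_by_value(terminals):
    """返回转接能力值V最大的终端名。terminals为{终端名: 语音能力参数}。"""
    return max(terminals, key=lambda t: transfer_value(terminals[t]))

# 以表6中除智能手机201外的各终端参数为例
terminals = {
    "智能手表202": {"m": 0.4, "n": 0.05, "x": 0.4,  "y": 1,   "s": 1},
    "智能音箱203": {"m": 0.6, "n": 0.1,  "x": 0.25, "y": 0.5, "s": 1},
    "个人电脑204": {"m": 0.3, "n": 0.05, "x": 0.2,  "y": 0,   "s": 0},
    "智能电视205": {"m": 0.5, "n": 0.3,  "x": 0.55, "y": 1,   "s": 1},
    "平板电脑206": {"m": 0.8, "n": 0.1,  "x": 0.2,  "y": 0,   "s": 0.5},
}
```

按该组参数计算，智能电视205的转接能力值V为0.585，在各终端中最大，因而被确定为接听终端，与下文图4示例的结果一致。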
下面通过上述公式(2),示例性的介绍如何根据语音能力参数,确定出接听终端。
示例性的,如图4所示,家庭局域网中可以有6个具备通话能力的终端,例如智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206。其中,智能手机201位于家庭户型图的主卧位置处,智能手表202位于家庭户型图的客厅位置处,智能音箱203位于家庭户型图的次卧1位置处,个人电脑204位于家庭户型图的书房位置处,智能电视205位于家庭户型图的客厅位置处,平板电脑206位于家庭户型图的次卧2位置处。当智能手机201接收到来电时,用户在家庭户型图的客厅处,与智能手表202和智能电视205所处位置相同。
其中,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表6所示:
表6
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.23 0 0.5
智能手表202 0.4 0.05 0.4 1 1
智能音箱203 0.6 0.1 0.25 0.5 1
个人电脑204 0.3 0.05 0.2 0 0
智能电视205 0.5 0.3 0.55 1 1
平板电脑206 0.8 0.1 0.2 0 0.5
由上表6可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.23,智能手机201的用户位置值y 1为0,智能手机201的设备状态值s 1为0.5。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.4,智能手表202的用户位置值y 2为1,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话 频率n 3为0.1,智能音箱203的声纹能量值x 3为0.25,智能音箱203的用户位置值y 3为0.5,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.2,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.55,智能电视205的用户位置值y 5为1,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.2,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为0.5。上述表6仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
当智能手机201接收到来电,获取到其他各终端的语音能力参数后,智能手机201可以通过上述公式(2),计算出其他各终端的转接能力值V。例如,智能手表202的转接能力值V 2为0.45,智能音箱203的转接能力值V 3为0.305,个人电脑204的转接能力值V 4为0,智能电视205的转接能力值V 5为0.585,平板电脑206的转接能力值V 6为0.1。由于智能电视205的转接能力值V 5为0.585,在其他终端(智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大,智能手机201可以确定智能电视205为接听终端。上述示例仅仅用于解释本申请,不应构成限定,具体实现过程中,家庭局域网中的任一设备都可以接收到来电,在此不一一赘述。
又示例性的,如图5所示,家庭局域网中可以有6个具备通话能力的终端,例如智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206。其中,智能手机201位于家庭户型图的主卧位置处,智能手表202位于家庭户型图的客厅位置处,智能音箱203位于家庭户型图的次卧1位置处,个人电脑204位于家庭户型图的书房位置处,智能电视205位于家庭户型图的客厅位置处,平板电脑206位于家庭户型图的次卧2位置处。当智能手机201接收到来电时,用户可以在家庭户型图的阳台处,和家庭局域网中的各终端所处位置都不相同。其中,家庭局域网中的各终端都无法获取到声纹能量值x和用户位置值y。
其中,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表7所示:
表7
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0 0 0.5
智能手表202 0.4 0.05 0 0 1
智能音箱203 0.6 0.1 0 0 1
个人电脑204 0.3 0.05 0 0 0
智能电视205 0.5 0.3 0 0 0.5
平板电脑206 0.8 0.1 0 0 0.5
由上表7可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0,智能手机201的用户位置值y 1为0,智能手机201的设备状态值s 1为0.5。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0,智能手表202的用户位置值y 2为0,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0,智能电视205的用户位置值y 5为0,智能电视205的设备状态值s 5为0.5。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为0.5。上述表7仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
当智能手机201接收到来电,获取到其他各终端的语音能力参数后,智能手机201可以通过上述公式(2),计算出其他各终端的转接能力值V。例如,智能手表202的转接能力值V 2为0.05,智能音箱203的转接能力值V 3为0.08,个人电脑204的转接能力值V 4为0,智能电视205的转接能力值V 5为0.055,平板电脑206的转接能力值V 6为0.05。由于智能音箱203的转接能力值V 3为0.08,在其他终端(智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大,智能手机201可以确定智能音箱203为接听终端。上述示例仅仅用于解释本申请,不应构成限定,具体实现过程中,家庭局域网中的任一设备都可以接收到来电,在此不一一赘述。
在一种可能的实现方式中,终端1可以在上述步骤S307、根据其他各终端的语音能力参数,确定出接听终端之前,接收用户的输入操作(例如语音输入等),将终端1接收到的来电转移至局域网中的指定终端。示例性的,如图6所示,家庭局域网中可以有6个具备通话能力的终端,例如智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206。其中,智能手机201位于家庭户型图的主卧位置处,智能手表202位于家庭户型图的客厅位置处,智能音箱203位于家庭户型图的次卧1位置处,个人电脑204位于家庭户型图的书房位置处,智能电视205位于家庭户型图的客厅位置处,平板电脑206位于家庭户型图的次卧2位置处。当智能手机201接收到来电时,智能手机201可以接收用户的语音输入(例如“小艺小艺,转接到客厅电视”),响应于该语音输入,智能手机201可以将接收到的来电转移至处于客厅的智能电视205上。上述示例仅仅用于解释本申请,不应构成限定。
在一种可能的实现方式中，上述步骤S306和步骤S307，可以由中枢设备或服务器来执行。即，中枢设备或服务器可以获取局域网中各终端的语音能力参数。中枢设备或服务器可以根据除终端1以外的其他各终端的语音能力参数，确定出接听终端。然后，中枢设备或服务器可以将接听终端的标识(例如终端2的IP地址)发送给终端1。
下面介绍终端1将接收到来电呼叫转移给接听终端(终端2)的过程。
S308、终端1发送来电指令给终端2。
由于终端1和其他终端都通过局域网连接,因此,终端1可以通过局域网将来电指令发送给终端2。
S309、终端2输出来电提醒。
终端2在接收到终端1发送来的来电指令后，响应于该来电指令，终端2可以输出来电提醒。其中，终端2输出的来电提醒可以包括以下至少一种：铃声提醒、机械振动提醒和来电显示提醒(例如终端2在显示屏上显示联系人的联系方式等)。
在一种可能的实现方式中,为了避免同一联系人的来电在两个终端上同时输出来电提醒,终端2在接收到终端1发送的来电指令后,终端2可以返回来电确认信息给终端1,终端1在接收到来电确认信息之后,可以停止输出来电提醒。
S310、终端2可以接收用户的接听操作。响应于用户的接听操作,S311、终端2可以返回接听确认信息给终端1。
在终端2输出来电提醒之后，终端2可以接收用户的接听操作(例如点击终端2屏幕上显示的接听按钮或者点击终端2上的接听物理按键)，响应于该接听操作，终端2可以返回接听确认信息给终端1。响应于该接听确认信息，终端1可以将语音通话转移至终端2上。
下面具体介绍在终端1在接收到终端2返回的接听确认信息之后,终端1将语音通话转移给接听终端(终端2)的过程。
S312、终端1接收联系人的语音数据。
其中:
1、CS域语音通话:联系人的终端可以采集联系人的声音,并通过移动通信网中的CS域与终端1建立通话连接,将声音信号发送给终端1。
2、VoLTE语音通话:联系人的终端可以采集联系人的声音,并将联系人的声音,通过语音压缩算法,对联系人的声音进行压缩编码处理,生成联系人的语音数据。然后将语音数据封装成语音数据包,并通过移动通信网中的IMS,将联系人的语音数据包发送给终端1。
3、VoIP语音通话:联系人的终端可以采集联系人的声音,通过语音压缩算法,对联系人的声音进行压缩编码处理,生成联系人的语音数据,然后通过IP协议等相关协议将语音数据封装成语音数据包,并通过Internet将联系人的语音数据包发送给终端1。
S313、终端1将联系人的语音数据发送给终端2。
其中,当终端1转移的语音通话是CS域语音通话,终端1在接收到联系人的声音信号后,可以将联系人的声音信号,通过语音压缩算法,进行压缩编码处理,生成联系人的语音数据,并通过IP协议等相关协议将语音数据封装成语音数据包。然后,终端1通过局域网将联系人的语音数据包发送给终端2。
当终端1转移的语音通话是VoLTE语音通话或VoIP语音通话时,终端1在接收到联系人的语音数据包之后,可以通过局域网将联系人的语音数据包转发给终端2。
S314、终端2在接收到联系人的语音数据之后,播放该联系人的语音数据。
终端2在接收到终端1发送的联系人的语音数据包后,可以从该联系人的语音数据包中,获取到联系人的语音数据,并播放该联系人的语音数据。
S315、终端2通过麦克风采集声音,生成用户的语音数据。
S316、终端2将用户的语音数据发送给终端1。
在步骤S311、终端2返回接听确认给终端1之后,终端2可以通过麦克风持续采集用户的声音以及周围环境的声音。终端2可以将麦克风采集到的声音(包括用户的声音和周围环境声音),通过语音压缩算法,对采集到的声音进行压缩编码处理,生成用户的语音数据,并将用户的语音数据封装成语音数据包。然后,终端2通过局域网将用户的语音数据包发送给终端1。
S317、终端1接收到终端2发送的用户的语音数据之后,将用户的语音数据发送给联系人的终端。
其中,
当终端1转移的语音通话是CS域的语音通话时,终端1在接收到终端2发送的用户的语音数据后,将用户的语音数据转换成用户的声音信号,并将用户的声音信号通过移动通信网络中的CS域发送给联系人的终端。联系人的终端在接收到终端1发送的用户的声音信号后,可以从用户的声音信号中解析出用户的声音并播放。
当终端1转移的语音通话是VoLTE语音通话时,终端1在接收到终端2发送的用户的语音数据之后,可以将用户的语音数据通过IMS,转发给联系人的终端。联系人的终端在接收到用户的语音数据后,可以播放该用户的语音数据。
当终端1转移的语音通话是VoIP语音通话时,终端1在接收到终端2发送的用户的语音数据之后,可以将用户的语音数据通过Internet,转发给联系人的终端。联系人的终端在接收到用户的语音数据后,可以播放该用户的语音数据。
上述步骤S312-S314和S315-S317之间没有先后顺序,下述实施例中也类似。
在一些可能的实现方式中,上述步骤S313和S316可以经由中枢设备或服务器来进行转发。
在一些应用场景中,家庭内的局域网中具有多个具备通话能力的终端,例如智能手机、智能手表、智能音箱、个人电脑、智能电视和智能平板等。用户通过任一终端(例如智能手机)与联系人进行语音通话。由于家庭局域中的许多终端移动性差,无法根据用户移动,当用户在家中走动时,终端的语音通话效果会变差。因此,本申请实施例提供了一种语音通信方法,用户在通过终端1(如智能手机)与联系人进行通话时,终端1可以根据其他终端的语音能力参数(例如语音能力优先级m,通话频率n,声纹能量值x,用户位置y,终端状态值s等),确定出语音通话的接听终端(例如智能电视),并将语音通话转移至接听终端(例如智能电视)上,这样,可以在用户在室内移动的过程中,提升用户与联系人的通话效果。
请参照图7，图7示出了本申请实施例中提供的一种语音通信方法。其中，局域网中可以包括N个具备通话能力的终端，N为大于2的整数。在图7所示实施例中，任一个正在通话的终端都可以被称为终端1，例如用户正在使用智能手机与联系人通话时，智能手机可以被称为终端1，当用户正在使用智能电视与联系人通话时，智能电视可以被称为终端1，在此不作任何限定。如图7所示，该方法包括：
S701、终端1与局域网中的其他各终端建立连接。
终端1可以与局域网中的其他各终端(终端2、终端3、……、终端N)建立连接,其中,该连接可以是基于TCP/IP协议的连接,在该基于TCP/IP协议的连接下,终端1可以基于VoIP技术将来电(包括呼叫和通话)转移至转接设备。该连接还可以是Wi-Fi直连。该连接还可以是通过路由器建立的连接。若两个设备支持蓝牙功能时,该连接还可以是蓝牙连接。
在一种可能的实现方式中，若终端1是从中枢设备或者服务器上获取其他各终端的语音能力参数时，终端1可以在确定出接听设备之后，只与转接设备建立连接。这样，终端1只与转接设备建立连接，可以减少无线资源消耗。
S702、终端1与联系人的终端进行语音通话。
其中,终端1与联系人的终端进行的语音通话可以是上述CS域的语音通话,还可以是上述VoLTE语音通话,还可以是上述VoIP语音通话,在此不再赘述。
S703、终端1接收用户的通话转移操作。
该通话转移操作可以是用户在终端1的显示屏上点击通话转接控件,或者用户的语音指令输入(例如“小艺小艺,通话跟着我走”)等。
下面介绍终端1如何确定出接听终端。其中，下述步骤S704至步骤S705，可以周期性(例如每2秒)执行。
S704、响应于用户的通话转移操作,终端1获取局域网中各终端的语音能力参数。
其中,语音能力参数包括:语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s。具体内容可以参考前述图3所示实施例中的步骤S306,在此不再赘述。
S705、终端1根据各终端的语音能力参数,确定出接听终端。该接听终端用于接听与联系人的终端进行的通话。
其中,当终端1确定出的接听终端为终端1自身时,终端1可以不进行通话转接。当终端1确定出的接听终端为局域网中的其他终端时,终端1可以将与联系人的终端进行的通话转接至该接听终端,此时,终端1可以用于中转联系人的终端与接听终端之间的语音数据。其中,终端1确定出接听终端的具体过程可以参考前述图3所实施例中的步骤S307,在此不再赘述。
示例性的,如图8A所示,家庭局域网中可以有6个具备通话能力的终端,例如智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206。其中,智能手机201位于家庭户型图的主卧位置处,智能音箱203位于家庭户型图的次卧1位置处,个人电脑204位于家庭户型图的书房位置处,智能电视205位于家庭户型图的客厅位置处,平板电脑206位于家庭户型图的次卧2位置处。当用户通过智能手机201与联系人进行通话时,用户可以随身携带智能手表202。由于用户随身携带智能手表202,智能手表202可以检测到用户的位置与智能手表202处于同一位置,即用户位置值y 2恒为1。
其中,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表8所示:
表8
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.6 1 1
智能手表202 0.4 0.05 0.55 1 1
智能音箱203 0.6 0.1 0.35 0 1
个人电脑204 0.3 0.05 0.35 0 0
智能电视205 0.5 0.3 0.2 0 1
平板电脑206 0.8 0.1 0.2 0 1
由上表8可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.6,智能手机201的用户位置值y 1为1,智能手机201的设备状态值s 1为1。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.55,智能手表202的用户位置值y 2为1,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0.35,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.35,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.2,智能电视205的用户位置值y 5为0,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.2,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为1。上述表8仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
智能手机201可以通过上述公式(2),计算出其他各终端的转接能力值V。例如,智能手机201的转接能力值V 1为0.68,智能手表202的转接能力值V 2为0.525,智能音箱203的转接能力值V 3为0.255,个人电脑204的转接能力值V 4为0,智能电视205的转接能力值V 5为0.21,平板电脑206的转接能力值V 6为0.2。由于智能手机201的转接能力值V 1为0.68,在各终端(智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大,智能手机201可以确定智能手机201自身为接听终端。上述示例仅仅用于解释本申请,不应构成限定。
如图8B所示,智能手机201在接收到用户的通话转移操作之后,用户携带着智能手表202可以走动到书房与次卧之间的走廊。此时,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表9所示:
表9
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.3 0 1
智能手表202 0.4 0.05 0.6 1 1
智能音箱203 0.6 0.1 0.35 0 1
个人电脑204 0.3 0.05 0.35 0 0
智能电视205 0.5 0.3 0.2 0 1
平板电脑206 0.8 0.1 0.2 0 1
由上表9可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.3,智能手机201的用户位置值y 1为0,智能手机201的设备状态值s 1为1。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.6,智能手表202的用户位置值y 2为1,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0.35,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.35,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.2,智能电视205的用户位置值y 5为0,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.2,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为1。上述表9仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
智能手机201获取到各终端的语音能力参数后，可以通过上述公式(2)，计算出各终端的转接能力值V。例如，智能手机201的转接能力值V 1为0.33，智能手表202的转接能力值V 2为0.55，智能音箱203的转接能力值V 3为0.255，个人电脑204的转接能力值V 4为0，智能电视205的转接能力值V 5为0.21，平板电脑206的转接能力值V 6为0.2。由于智能手表202的转接能力值V 2为0.55，在局域网中各终端(智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大，智能手机201可以确定智能手表202为接听终端。于是，智能手机201可以将通话转移至智能手表202上。上述示例仅仅用于解释本申请，不应构成限定。
如图8C所示,智能手机201在接收到用户的通话转移操作之后,用户携带着智能手表202可以走动到客厅。其中,用户与智能手表202和智能电视205同处于客厅,客厅中的智能电视205可以获取到用户的位置,即智能电视205的用户位置值y 5为1。此时,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表10所示:
表10
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.3 0 1
智能手表202 0.4 0.05 0.6 1 1
智能音箱203 0.6 0.1 0.35 0 1
个人电脑204 0.3 0.05 0.35 0 0
智能电视205 0.5 0.3 0.6 1 1
平板电脑206 0.8 0.1 0.2 0 1
由上表10可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.3,智能手机201的用户位置值y 1为0,智能手机201的设备状态值s 1为1。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.6,智能手表202的用户位置值y 2为1,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0.35,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.35,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.6,智能电视205的用户位置值y 5为1,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.2,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为1。上述表10仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
智能手机201获取到各终端的语音能力参数后，可以通过上述公式(2)，计算出各终端的转接能力值V。例如，智能手机201的转接能力值V 1为0.33，智能手表202的转接能力值V 2为0.55，智能音箱203的转接能力值V 3为0.255，个人电脑204的转接能力值V 4为0，智能电视205的转接能力值V 5为0.61，平板电脑206的转接能力值V 6为0.2。由于智能电视205的转接能力值V 5为0.61，在局域网中各终端(智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大，智能手机201可以确定智能电视205为接听终端。于是，智能手机201可以将通话转移至智能电视205。上述示例仅仅用于解释本申请，不应构成限定。
示例性的,如图9A所示,家庭局域网中可以有6个具备通话能力的终端,例如智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206。其中,智能手机201位于家庭户型图的主卧位置处,智能手表202位于家庭户型图的客厅位置处,智能音箱203位于家庭户型图的次卧1位置处,个人电脑204位于家庭户型图的书房位置处,智能电视205位于家庭户型图的客厅位置处,平板电脑206位于家庭户型图的次卧2位置处。当用户通过智能手机201与联系人进行通话时,智能手机201的用户位置值y 1为1。
其中,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表11所示:
表11
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.6 1 1
智能手表202 0.4 0.05 0.2 0 1
智能音箱203 0.6 0.1 0.35 0 1
个人电脑204 0.3 0.05 0.35 0 0
智能电视205 0.5 0.3 0.2 0 1
平板电脑206 0.8 0.1 0.2 0 1
由上表11可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.6,智能手机201的用户位置值y 1为1,智能手机201的设备状态值s 1为1。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.2,智能手表202的用户位置值y 2为0,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0.35,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.35,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.2,智能电视205的用户位置值y 5为0,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.2,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为1。上述表11仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
智能手机201可以通过上述公式(2),计算出其他各终端的转接能力值V。例如,智能手机201的转接能力值V 1为0.68,智能手表202的转接能力值V 2为0.15,智能音箱203的转接能力值V 3为0.255,个人电脑204的转接能力值V 4为0,智能电视205的转接能力值V 5为0.21,平板电脑206的转接能力值V 6为0.2。由于智能手机201的转接能力值V 1为0.68,在各终端(智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大,智能手机201可以确定智能手机201自身为接听终端。上述示例仅仅用于解释本申请,不应构成限定。
如图9B所示,智能手机201在接收到用户的通话转移操作之后,用户可以走动到书房与次卧之间的走廊。此时,家庭局域网中的各终端都无法获取到用户的位置,即用户位置y都为0。其中,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表12所示:
表12
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.2 0 1
智能手表202 0.4 0.05 0.2 0 1
智能音箱203 0.6 0.1 0.45 0 1
个人电脑204 0.3 0.05 0.45 0 0
智能电视205 0.5 0.3 0.2 0 1
平板电脑206 0.8 0.1 0.2 0 1
由上表12可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.2,智能手机201的用户位置值y 1为0,智能手机201的设备状态值s 1为1。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.2,智能手表202的用户位置值y 2为0,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0.45,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.45,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.2,智能电视205的用户位置值y 5为0,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.2,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为1。上述表12仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
智能手机201可以通过上述公式(2),计算出其他各终端的转接能力值V。例如,智能手机201的转接能力值V 1为0.28,智能手表202的转接能力值V 2为0.15,智能音箱203的转接能力值V 3为0.305,个人电脑204的转接能力值V 4为0,智能电视205的转接能力值V 5为0.21,平板电脑206的转接能力值V 6为0.2。由于智能音箱203的转接能力值V 3为0.305,在各终端(智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大,智能手机201可以确定智能音箱203为接听终端。于是,智能手机201可以将语音通话转移至智能音箱203上。上述示例仅仅用于解释本申请,不应构成限定。
如图9C所示,当智能音箱203接听通话时,用户可以走动到客厅。其中,用户与智能手表202和智能电视205同处于客厅,客厅中的智能电视205可以获取到用户的位置,即智能手表202的用户位置值y 2为1,智能电视205的用户位置值y 5为1。此时,家庭局域网中各终端的语音能力参数(语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s)可以如下表13所示:
表13
终端 语音能力优先级m 通话频率n 声纹能量值x 用户位置值y 设备状态值s
智能手机201 1 0.4 0.2 0 1
智能手表202 0.4 0.05 0.5 1 1
智能音箱203 0.6 0.1 0.3 0 1
个人电脑204 0.3 0.05 0.3 0 0
智能电视205 0.5 0.3 0.6 1 1
平板电脑206 0.8 0.1 0.35 0 1
由上表13可以看出,智能手机201的语音能力优先级m 1为1,智能手机201的通话频率n 1为0.4,智能手机201的声纹能量值x 1为0.2,智能手机201的用户位置值y 1为0,智能手机201的设备状态值s 1为1。智能手表202的语音能力优先级m 2为0.4,智能手表202的通话频率n 2为0.05,智能手表202的声纹能量值x 2为0.5,智能手表202的用户位置值y 2为1,智能手表202的设备状态值s 2为1。智能音箱203的语音能力优先级m 3为0.6,智能音箱203的通话频率n 3为0.1,智能音箱203的声纹能量值x 3为0.3,智能音箱203的用户位置值y 3为0,智能音箱203的设备状态值s 3为1。个人电脑204的语音能力优先级m 4为0.3,个人电脑204的通话频率n 4为0.05,个人电脑204的声纹能量值x 4为0.3,个人电脑204的用户位置值y 4为0,个人电脑204的设备状态值s 4为0。智能电视205的语音能力优先级m 5为0.5,智能电视205的通话频率n 5为0.3,智能电视205的声纹能量值x 5为0.6,智能电视205的用户位置值y 5为1,智能电视205的设备状态值s 5为1。平板电脑206的语音能力优先级m 6为0.8,平板电脑206的通话频率n 6为0.1,平板电脑206的声纹能量值x 6为0.35,平板电脑206的用户位置值y 6为0,平板电脑206的设备状态值s 6为1。上述表13仅仅用于解释本申请,不应构成限定。
其中,示例性的,语音能力优先级m对应的权重值a可以为0.1,通话频率n对应的权重值b可以为0.2,声纹能量值x对应的权重值c可以为0.5,用户位置值y对应的权重值d可以为0.2。
智能手机201可以通过上述公式(2),计算出其他各终端的转接能力值V。其中,智能手机201的转接能力值V 1为0.28,智能手表202的转接能力值V 2为0.5,智能音箱203的转接能力值V 3为0.23,个人电脑204的转接能力值V 4为0,智能电视205的转接能力值V 5为0.61,平板电脑206的转接能力值V 6为0.275。由于智能电视205的转接能力值V 5为0.61,在各终端(智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205和平板电脑206)中的转接能力值最大,智能手机201可以确定智能电视205为接听终端。于是,智能手机201可以将语音通话转移至智能电视205上。上述示例仅仅用于解释本申请,不应构成限定。
在一种可能的实现方式中,终端1在确定出接听终端之后,可以判断接听终端的转接能力值V是否比当前通话终端的转接能力值V高出指定阈值,若是,则终端1将语音通话转移至该接听终端上。示例性的,指定阈值可以是0.2。若接听终端(例如终端2)的转接能力值V为0.61,当前通话终端(例如终端3)的转接能力值V为0.23,二者的差值为0.38,高于指定阈值0.2,因此,终端1可以将语音通话转移至接听终端上。这样,可以避免通话设备的频繁切换。
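上述阈值判断逻辑可以用如下Python代码示意(阈值0.2为正文中的示例值,函数名为假设):

```python
# 通话终端切换的滞回判断示意:仅当新接听终端的转接能力值比当前通话终端
# 高出指定阈值(示例取0.2)时才执行转移,以避免通话设备的频繁切换

SWITCH_THRESHOLD = 0.2

def should_transfer(v_candidate, v_current, threshold=SWITCH_THRESHOLD):
    """候选接听终端与当前通话终端的转接能力值差值高于阈值时才切换。"""
    return (v_candidate - v_current) > threshold

print(should_transfer(0.61, 0.23))   # 差值0.38 > 0.2,执行转移 → True
print(should_transfer(0.305, 0.28))  # 差值0.025 ≤ 0.2,不切换 → False
```

这种带滞回量的判断是常见的防抖设计:候选终端只是略优于当前终端时维持现状,只有明显更优时才切换。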
在一种可能的实现方式中,终端1在确定出接听终端之后,可以检测用户的声纹能量,当声纹能量在指定声纹能量阈值(例如10dB)以下且持续一定的时间(例如0.5秒)时,终端1将通话转移至接听终端上。这样,由于用户说话过程中,每一句话结尾时用户的声纹能量最小,在用户的声纹能量小于一定阈值时再切换转移通话,可以避免转移通话过程中,通话终端采集到的用户话语不完整。
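上述基于声纹能量的切换时机判断,可以用如下Python代码示意(帧长、能量阈值等参数均为假设的示例值):

```python
# 在语句间隙切换通话的检测示意:当声纹能量持续低于阈值(示例取10dB)
# 达到指定时长(示例取0.5秒)时触发转移,帧长等参数均为假设值

ENERGY_THRESHOLD_DB = 10.0   # 声纹能量阈值(示例)
SILENCE_DURATION_S = 0.5     # 需要持续的低能量时长(示例)
FRAME_S = 0.1                # 每帧时长(假设每0.1秒采样一次能量)

def silence_reached(energy_frames_db):
    """连续低能量帧累计达到0.5秒时返回True,表示可以安全转移通话。"""
    need = round(SILENCE_DURATION_S / FRAME_S)
    run = 0
    for e in energy_frames_db:
        run = run + 1 if e < ENERGY_THRESHOLD_DB else 0
        if run >= need:
            return True
    return False

# 用户说话(高能量)后句尾能量下降,并持续0.5秒以上
frames = [35, 32, 30, 8, 7, 6, 5, 4]
print(silence_reached(frames))  # → True
```

在语句结尾的低能量窗口内切换,可以保证切换发生在两句话之间,而不是一句话的中间。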
下面介绍语音通话转移的过程。
S706、终端1接收联系人的语音数据。
具体内容,可以参考前述图3所示实施例的步骤S312,在此不再赘述。
S707、终端1将联系人的语音数据发送给终端2。
具体内容,可以参考前述图3所示实施例的步骤S313,在此不再赘述。
S708、终端2在接收到联系人的语音数据之后,播放该联系人的语音数据。
具体内容,可以参考前述图3所示实施例的步骤S314,在此不再赘述。
S709、终端2通过麦克风采集声音,并生成用户的语音数据。
具体内容,可以参考前述图3所示实施例的步骤S315,在此不再赘述。
S710、终端2将用户的语音数据发送给终端1。
具体内容,可以参考前述图3所示实施例的步骤S316,在此不再赘述。
S711、终端1发送用户的语音数据给联系人的终端。
具体内容,可以参考前述图3所示实施例的步骤S317,在此不再赘述。
在一些可能的实现方式中,上述步骤S704和步骤S705,可以由中枢设备或服务器来执行。即,中枢设备或服务器可以获取局域网中各终端的语音能力参数。中枢设备或服务器可以根据其他各终端的语音能力参数,确定出接听终端。然后,中枢设备或服务器可以将接听终端的标识(例如终端2的IP地址)发送给终端1。
在一些可能的实现方式中,上述步骤S707和S710可以经由中枢设备或服务器来进行转发。
在一些应用场景中,终端1可能正在与联系人A的终端进行通话,这时,终端1又接收到联系人B的终端的来电呼叫。由于终端1无法同时与两个以上联系人的终端进行通话,因此,在一种可能的实现方式中,终端1可以按照上述图3所示实施例中确定接听终端的步骤,为联系人B的终端的来电呼叫确定出接听终端(例如终端2),并将来自联系人B的终端的来电呼叫转接至该接听终端(例如终端2)上,由接听终端与联系人B的终端进行通话。在另一种可能的实现方式中,终端1可以将与联系人A的终端的语音通话转移至接听终端(例如终端2)上,并由终端1接听来自联系人B的终端的来电。其中,终端1转移呼叫或通话的过程可以参考前述图3或图7所示实施例,在此不再赘述。这样,可以避免漏接来电,提高了用户的体验。
在一些应用场景中,局域网中可以有多个具备通话能力的终端,同时,局域网中还可以有中枢设备,该中枢设备具有路由功能,当终端1接收到联系人的终端通过Internet拨打的VoIP语音电话时,终端1与联系人的终端在通话过程中的语音数据包,都需要经过该中枢设备中转。当终端1被占用或者超时未接听联系人终端拨打的VoIP语音电话时,该中枢设备可以采集局域网中各终端的语音能力参数,并从局域网的各终端中确定出接听终端(例如终端2),将该VoIP语音电话的语音数据包转发至该接听终端(例如终端2)。这样,通过中枢设备计算出接听终端,可以减少终端运算负担,降低转移呼叫或通话时的时延。
请参考图10,图10为本申请实施例提供的一种语音通信方法,该方法包括:
S1001、中枢设备与各终端建立连接。
其中,中枢设备可以分别与局域网中的各终端(终端1、终端2、……、终端N)建立连接。该连接可以是无线连接(例如Wi-Fi连接),也可以是有线连接。如果各个终端已经和路由器建立连接,则可以跳过此步骤。
S1002、中枢设备接收到发给终端1的呼叫指示信息。
中枢设备可以接收到联系人的终端依托Internet发送来的呼叫指示信息,其中,该呼叫指示信息包括呼叫发送方的地址信息(例如联系人的终端的IP地址)以及接收方的地址信息(例如终端1的IP地址)。该呼叫指示信息用于指示终端1输出来电提醒。
S1003、中枢设备将呼叫指示信息转发给终端1。
中枢设备可以根据接收方地址信息,将该呼叫指示信息发送给终端1。
S1004、终端1输出来电提醒。
终端1在接收到中枢设备转发的呼叫指示信息之后,响应于该呼叫指示信息,终端1可以输出来电提醒。其中,该来电提醒可以包括以下至少一种:铃声提醒、机械振动提醒和来电显示提醒(例如终端1在显示屏上显示联系人的联系方式等)。
S1005、终端1判断是否开启通话转移功能。若是,则S1006、终端1判断来电是否超时未被接听或者终端1是否被占用,若是,则S1007、终端1发送通话转移请求给中枢设备。若终端1未开启通话转移功能时,终端1输出来电提醒。
其中,终端1可以在接收到联系人的来电之前,接收用户的设置输入,响应于用户的设置输入,终端1可以开启或关闭通话转移功能。这样,终端1可以根据用户的需求,转移接收到的来电,提高了用户的体验。
当终端1判断已开启通话转移功能之后,终端1可以先判断自身是否已经被占用,若是,则终端1可以向中枢设备发送通话转移请求。若未被占用,则终端1可以判断在接收到来电之后是否超过指定时间阈值(例如10s)未接收到用户的接听操作,若超过指定时间阈值未接听,则终端1可以向中枢设备发送通话转移请求。
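上述终端1判断是否发起通话转移的逻辑,可以用如下Python代码示意(时间阈值10秒为正文示例值,函数与参数名均为假设):

```python
# 终端1是否发起通话转移的判断示意:已被占用则立即转移,
# 否则超过指定时间阈值(示例取10秒)未接听时再转移(参数均为示例值)

RING_TIMEOUT_S = 10.0

def need_transfer(call_forward_enabled, busy, unanswered_seconds):
    """返回终端1是否应当向中枢设备发送通话转移请求。"""
    if not call_forward_enabled:   # 未开启通话转移功能,仅由本机输出来电提醒
        return False
    if busy:                       # 终端1正在通话中,直接请求转移
        return True
    return unanswered_seconds > RING_TIMEOUT_S  # 超时未接听才请求转移

print(need_transfer(True, False, 12))  # → True
print(need_transfer(True, False, 3))   # → False
print(need_transfer(False, True, 12))  # → False
```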
在一种可能的实现方式中,局域网中的终端在开启通话转移功能后,可以上报给中枢设备。因此,上述步骤S1005和步骤S1006可以由中枢设备执行。即,中枢设备可以判断终端1是否开启通话转移功能,若是,则中枢设备判断来电是否超时未被接听或者终端1是否被占用,若是,则中枢设备执行步骤S1008、中枢设备获取各终端的语音能力参数。
S1008、中枢设备获取各终端的语音能力参数。
在中枢设备接收到终端1发送的通话转移请求之后,响应于该通话转移请求,中枢设备可以获取局域网中各终端(终端1、终端2、……、终端N)的语音能力参数。其中,语音能力参数包括:语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s。针对语音能力参数的具体说明,可以参考前述图3所示实施例中的步骤S306,在此不再赘述。
S1009、中枢设备根据其他各终端的语音能力参数确定出接听终端(例如接听终端为终端2)。
其中,中枢设备在接收到各终端的语音能力参数之后,可以根据其他各终端(终端2、……、终端N)的语音能力参数,确定出接听终端(如终端2)。中枢设备确定接听终端的过程可以参考前述图3所示实施例的步骤S307中终端1确定接听终端的过程,在此不再赘述。
S1010、中枢设备向终端2发送来电指令。
S1011、中枢设备发送呼叫结束指令给终端1。
S1012、终端2响应于接收到的来电指令,输出来电提醒。
其中,该来电提醒可以包括以下至少一种:铃声提醒、机械振动提醒和来电显示提醒(例如终端2在显示屏上显示联系人的联系方式等)。
S1013、终端1响应于接收到的来电结束指令,结束输出来电提醒。
这样,通过上述步骤S1010至步骤S1013,可以避免同一联系人的来电在两个终端上同时输出来电提醒。
S1014、终端2接收用户的接听操作。S1015、终端2返回接听确认给中枢设备。
在终端2输出来电提醒之后,终端2可以接收用户的接听操作(例如点击终端2屏幕上显示的接听按钮或者点击终端2上的接听物理按键),响应于该接听操作,终端2可以返回接听确认给中枢设备。
S1016、中枢设备在接收终端2返回的接听确认后,接收联系人的语音数据。
中枢设备在接收到终端2返回的接听确认之后,可以通过Internet向联系人的终端请求联系人的语音数据。联系人的终端在接收到该请求之后,就会采集联系人的声音,生成联系人的语音数据,并以数据包的形式将联系人的语音数据发给中枢设备。
S1017、中枢设备将联系人的语音数据,发送给终端2。
中枢设备在接收到联系人的语音数据之后,可以将联系人的语音数据以数据包的形式转发给终端2。
S1018、终端2在接收到联系人的语音数据之后,播放联系人的语音数据。
终端2在接收到中枢设备发送的联系人的语音数据包后,可以从该联系人的语音数据包中,获取到联系人的语音数据,并播放该联系人的语音数据。
S1019、终端2通过麦克风采集声音,生成用户的语音数据。
S1020、终端2将用户的语音数据发送给中枢设备。
在步骤S1015中终端2返回接听确认给中枢设备之后,终端2可以通过麦克风持续采集用户的声音以及周围环境的声音。终端2可以将麦克风采集到的声音(包括用户的声音和周围环境声音),通过语音压缩算法进行压缩编码处理,生成用户的语音数据,并将用户的语音数据封装成语音数据包。然后,终端2通过局域网将用户的语音数据包发送给中枢设备。
S1021、中枢设备将用户的语音数据转发给联系人的终端。
中枢设备在接收到终端2发送的用户的语音数据包之后,可以将用户的语音数据包通过Internet,转发给联系人的终端,联系人的终端在接收到用户的语音数据包后,可以从用户的语音数据包中解析出用户的语音数据,并播放该用户的语音数据。
上述步骤S1016-S1018和S1019-S1021之间没有先后顺序,下述实施例中也类似。
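上述步骤S1016至S1021中语音数据的采集、压缩、封包与经中枢设备转发的流程,可以用如下Python代码示意。其中以zlib压缩代替正文中的语音压缩算法、以JSON头部代替真实的封包协议,均为便于演示的假设,并非本申请的实际实现:

```python
# 语音数据打包与转发的示意:演示"采集→压缩编码→封装数据包→按目的地址转发→
# 接收方解析"的流程;压缩算法与包格式均为演示用假设

import json
import zlib

def make_voice_packet(src_ip, dst_ip, pcm_bytes):
    """将采集到的声音数据压缩编码,并封装成带地址信息的语音数据包。"""
    payload = zlib.compress(pcm_bytes)
    header = json.dumps({"src": src_ip, "dst": dst_ip, "len": len(payload)})
    return header.encode("utf-8") + b"\n" + payload

def forward_packet(packet, routing_table):
    """中枢设备根据包头中的目的地址转发语音数据包(路由表为假设结构)。"""
    header, payload = packet.split(b"\n", 1)
    dst = json.loads(header.decode("utf-8"))["dst"]
    return routing_table[dst], payload

def parse_voice_packet(payload):
    """接收方从语音数据包中解析出原始声音数据。"""
    return zlib.decompress(payload)

pcm = b"\x00\x01" * 160  # 假设的一帧PCM采样数据
pkt = make_voice_packet("192.168.1.12", "10.0.0.8", pcm)
link, payload = forward_packet(pkt, {"10.0.0.8": "联系人的终端"})
print(link, parse_voice_packet(payload) == pcm)  # 联系人的终端 True
```

上行(S1019-S1021)与下行(S1016-S1018)方向各自独立地重复"封包—转发—解析"的过程,因此两个方向之间没有先后顺序。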
上述语音通话转移时,若终端1和终端2已建立连接,可以由终端1直接转移给终端2,即当终端1收到联系人的语音数据后,发送给终端2;终端2采集到用户的语音数据后发送给终端1,由终端1发送给联系人的终端。上述语音通话转移也可以由服务器代替中枢设备实现。
上述中枢设备可以不是局域网中的路由器,也可以是除终端1以外的其他终端。
下面介绍本申请实施例提供的另一种网络架构。
请参见图11,图11示出了本申请实施例提供的另一种网络架构1100示意图。其中,该网络架构1100中包括有多个终端。其中,该终端可以包括:智能手机201、智能手表202、智能音箱203、个人电脑204、智能电视205、平板电脑206等,本申请对此不作任何限制。在该网络架构1100中的终端的结构可以参考上述图1所示的终端100,在此不再赘述。
其中,该多个终端都可以具备通话能力,该多个终端可以通过如下方式接收来电与进行通话:1、该多个终端都可以接收来自移动通信网中电路交换域(circuit switched domain,CS域)的来电呼叫或通话。2、该多个终端可以接收来自移动通信网中IP多媒体子系统(IP multimedia subsystem,IMS)中基于VoLTE技术的来电呼叫或通话。3、该多个终端可以接收来自因特网(Internet)中基于VoIP技术的来电呼叫或通话。
该多个终端都通过Internet连接上智能家居云网络中的服务器208。该服务器208可以是智能家居云网络中的服务器,其数量不限于一个,可以是多个,在此不作限定。其中,该服务器208可以包括存储器、处理器和收发器,其中,存储器可以用于存储有该多个终端各自的语音能力参数(例如语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s等)。收发器可以用于与各终端进行通信。处理器可用于处理各终端的获取数据请求,并指示收发器下发多个终端各自的语音能力参数给各终端。
下面结合上述图11所示的网络架构1100,以及应用场景,具体介绍本申请实施例提供的一种语音通信方法。
在一些应用场景中,家中可以有多个具备通话能力的终端,例如智能手机、智能手表、智能音箱、个人电脑,智能电视和智能平板等。这些具备通话能力的终端都可以通过Internet连接上智能家居云的服务器。用户可以通过任一终端(例如智能手机)接收到语音来电,当用户长时间未接听时,或者来电设备已经被占用(例如正在通话中),用户可能会漏接来电,给用户造成不便。因此,本申请实施例提供了一种语音通信方法,终端1(例如智能手机)接收到来电后,当终端1超时未接收到用户的接听操作或者该终端已经被占用(例如正在通话中)时,服务器可以根据其他终端的语音能力参数(例如语音能力优先级m,通话频率n,声纹能量值x,用户位置y,设备状态值s等),确定出该来电的接听终端(例如智能电视),并将终端1上的来电转移至接听终端(例如智能电视)上。这样,可以避免用户错过终端上的来电,提高了用户体验。
请参照图12,图12示出了本申请实施例中提供的一种语音通信方法。其中,服务器上连接有N个具备通话能力的终端,N为大于2的整数,该N个具备通话能力的终端在服务器上绑定同一个账户。在图12所示实施例中,任一个接收到来电的终端都可以被称为终端1。例如智能手机接收到来电时,智能手机可以被称为终端1;当智能电视接收到来电时,智能电视可以被称为终端1,在此不作任何限定。如图12所示,该方法包括:
S1201、服务器与各终端建立连接。
其中,各终端可以通过Internet连接上服务器。
S1202、终端1接收到来电。
其中,该来电可以指语音来电。联系人的终端可以通过移动通信网络中的CS域拨打语音电话给终端1,还可以通过移动通信网络中的IMS网络拨打基于VoLTE技术的语音电话给终端1,还可以通过因特网(Internet)拨打基于VoIP技术的语音电话给终端1。
S1203、终端1输出来电提醒。
终端1在接收到联系人的终端拨打过来的电话后,可以输出来电提醒。其中,该来电提醒可以包括以下至少一种:铃声提醒、机械振动提醒和来电显示提醒(例如终端1在显示屏上显示联系人的联系方式等)。
S1204、终端1判断是否开启通话转移功能,若是,则S1205、终端1判断是否超时未接听或者是否被占用,若是,则S1206、终端1发送通话转移请求给服务器。若终端1未开启通话转移功能,则由终端1输出来电提醒,并由终端1接收用户的接听操作,接听来电。
其中,终端1可以在接收到联系人的来电之前,接收用户的设置输入,响应于用户的设置输入,终端1可以开启或关闭通话转移功能。这样,终端1可以根据用户的需求,转移接收到的来电,提高了用户的体验。
其中,终端1可以先判断自身是否已经被占用,若是,则终端1可以发送通话转移请求给服务器。若未被占用,则终端1可以判断在接收到来电之后是否超过指定时间阈值(例如10s)未接收到用户的接听操作,若是,则终端1可以发送通话转移请求给服务器。
S1207、服务器获取各终端的语音能力参数。
在服务器接收到终端1发送的通话转移请求之后,响应于该通话转移请求,服务器可以获取各终端(终端1、终端2、……、终端N)的语音能力参数。其中,语音能力参数包括:语音能力优先级m,通话频率n,声纹能量值x,用户位置值y,设备状态值s。针对语音能力参数的具体说明,可以参考前述图3所示实施例中的步骤S306,在此不再赘述。
S1208、服务器根据其他各终端的语音能力参数,确定出接听终端(例如接听终端为终端2)。
其中,服务器在接收到各终端的语音能力参数之后,可以根据其他各终端(终端2、……、终端N)的语音能力参数,确定出接听设备。服务器确定出接听设备的过程可以参考前述图3所示实施例中的步骤S307中终端1确定出接听设备的过程,在此不再赘述。
下面介绍终端1将接收到来电呼叫转移给接听终端(终端2)的过程。
S1209、服务器发送来电指令给终端2。
S1210、终端2输出来电提醒。
其中,该来电提醒可以包括以下至少一种:铃声提醒、机械振动提醒和来电显示提醒(例如终端2在显示屏上显示联系人的联系方式等)。
S1211、服务器发送来电结束指令给终端1。
S1212、终端1结束输出来电提醒。
这样,通过上述步骤S1209至步骤S1212,可以避免同一联系人的来电在两个终端上同时输出来电提醒。
S1213、终端2接收用户的接听操作。S1214、终端2返回接听确认给服务器。
在终端2输出来电提醒之后,终端2可以接收用户的接听操作(例如点击终端2屏幕上显示的接听按钮或者点击终端2上的接听物理按键),响应于该接听操作,终端2可以返回接听确认给服务器。
S1215、服务器转发接听确认给终端1。
服务器转发接听确认给终端1之后,响应于该接听确认,终端1可以将语音通话转移至终端2上。
上述步骤S1209-S1210和S1211-S1212之间没有先后顺序,其他实施例中也类似。
下面具体介绍终端1将语音通话转移至接听终端(终端2)的过程。
S1216、终端1接收联系人的语音数据。
其中:
1、CS域语音通话:联系人的终端可以采集联系人的声音,并通过移动通信网中的CS域与终端1建立通话连接,将声音信号发送给终端1。
2、VoLTE语音通话:联系人的终端还可以采集联系人的声音,并将联系人的声音,通过语音压缩算法,对联系人的声音进行压缩编码处理,生成联系人的语音数据。然后将语音数据封装成语音数据包,并通过移动通信网中的IMS,将联系人的语音数据包发送给终端1。
3、VoIP语音通话:联系人的终端还可以采集联系人的声音,通过语音压缩算法,对联系人的声音进行压缩编码处理,生成联系人的语音数据,然后通过IP协议等相关协议将语音数据封装成语音数据包,并通过Internet将联系人的语音数据包发送给终端1。
S1217、终端1将联系人的语音数据发送给服务器。
其中,当终端1转移的语音通话是CS域语音通话,终端1在接收到联系人的声音信号后,可以将联系人的声音信号,通过语音压缩算法,进行压缩编码处理,生成联系人的语音数据,并通过IP协议等相关协议将语音数据封装成语音数据包。然后,终端1通过Internet将联系人的语音数据包发送给服务器。
当终端1转移的语音通话是VoLTE语音通话或VoIP语音通话时,终端1在接收到联系人的语音数据包之后,可以通过Internet将联系人的语音数据包转发给服务器。
S1218、服务器将联系人的语音数据发送给终端2。
S1219、终端2播放联系人的语音数据。
终端2在接收到服务器发送的联系人的语音数据包后,可以从该联系人的语音数据包中获取到联系人的语音数据,并播放该联系人的语音数据。
S1220、终端2通过麦克风采集声音,生成用户的语音数据。S1221、终端2将用户的语音数据发送给服务器。
在步骤S1214、终端2返回接听确认给服务器之后,终端2可以通过麦克风持续采集用户的声音以及周围环境的声音。终端2可以将麦克风采集到的声音(包括用户的声音和周围环境声音),通过语音压缩算法,对采集到的声音进行压缩编码处理,生成用户的语音数据,并将用户的语音数据封装成语音数据包。然后,终端2通过Internet将用户的语音数据包发送给服务器。
S1222、服务器将用户的语音数据转发给终端1。
S1223、终端1发送用户的语音数据给联系人的终端。
其中,
当终端1转移的语音通话是CS域的语音通话时,终端1在接收到服务器发送的用户的语音数据包之后,可以从用户的语音数据包中解析出用户的语音数据,并将用户的语音数据转换成用户的声音信号,再将用户的声音信号通过移动通信网络中的CS域发送给联系人的终端。联系人的终端在接收到终端1发送的用户的声音信号后,可以从中解析出用户的声音并播放。
当终端1转移的语音通话是VoLTE语音通话时,终端1在接收到服务器发送的用户的语音数据包之后,可以将用户的语音数据包通过IMS,转发给联系人的终端,联系人的终端在接收到用户的语音数据包后,可以从用户的语音数据包中解析出用户的语音数据,并播放该用户的语音数据。
当终端1转移的语音通话是VoIP语音通话时,终端1在接收到服务器发送的用户的语音数据包之后,可以将用户的语音数据包通过Internet,转发给联系人的终端,联系人的终端在接收到用户的语音数据包后,可以从用户的语音数据包中解析出用户的语音数据,并播放该用户的语音数据。
上述语音通话转移时,若终端1和终端2建立连接,可以由终端1直接转移给终端2,即当终端1收到联系人的语音数据后,发送给终端2,终端2采集到用户的语音数据后发送给终端1,由终端1发送给联系人的终端。上述语音通话转移也可以由中枢设备代替服务器实现。
在一些可能的实现方式中,终端1(即接收到语音来电的终端)可以将联系人的来电转移至终端2上接听,终端2接听联系人的来电后,终端1还可以周期性获取语音能力参数,按照上述实施例中的规则,确定出新的接听终端(例如终端3)。在确定出新的接听终端(例如终端3)后,终端1可以将语音通话转移至新的接听终端(例如终端3)上,不再转移到终端2上。
在一些可能的实现方式中,终端1可以将联系人的来电转移至终端2上接听,终端2接听联系人的来电后,终端2或者终端1可以接收用户的转接操作。在接收到用户的转接操作之后,终端1可以获取语音能力参数,按照上述实施例中的规则,确定出新的接听终端(例如终端3)。在确定出新的接听终端(例如终端3)后,终端1可以将语音通话转移至新的接听终端(例如终端3)上,不再转移到终端2上。
在一些可能的实现方式中,终端1将联系人的来电转移至终端2上,终端2已经输出来电提醒,但终端2超时未接听。终端1可以选择从除终端1和终端2之外的其他终端中,根据语音能力参数,按照上述实施例中的规则,确定出新的接听终端(例如终端3)。在确定出新的接听终端(例如终端3)后,终端1可以将来电转移至新的接听终端(例如终端3)上。
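上述从其余终端中重新确定接听终端的选择逻辑,可以用如下Python代码示意(终端名称与转接能力值均为假设的示例数据):

```python
# 接听终端超时未接听时重新选择的示意:从排除已尝试过的终端(如终端1、终端2)
# 之外的其余终端中,按转接能力值重新确定新的接听终端(数值与名称均为示例)

def reselect(values, excluded):
    """在排除已尝试过的终端后,返回转接能力值最大的新接听终端;无候选时返回None。"""
    candidates = {t: v for t, v in values.items() if t not in excluded}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

values = {"终端1": 0.28, "终端2": 0.61, "终端3": 0.5, "终端4": 0.2}
print(reselect(values, {"终端1", "终端2"}))  # → 终端3
```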
上述终端1可以替换为中枢设备或服务器。
在上述的本申请实施例中,各终端的语音能力参数值可以存储于各个终端上,也可以存储于中枢设备、服务器上(如各个终端周期性地将语音能力参数值上报给中枢设备、服务器)。在本申请实施例中,可以由接收到来电的终端、中枢设备或服务器,根据各终端的语音能力参数,确定出来电转移或语音通话转移的接听终端。在执行通话转移时,可以由收到来电的终端直接转给接听终端,也可以经由中枢设备或服务器转给接听终端。确定接听终端的几种方案可以和转接来电的几种方案进行不同的组合,在此不作限定。本申请中各图示实施例未详述的内容可以参考其他图示实施例。
在本申请的实施例中,N个具备通话能力的终端可以不在一个局域网中,可以是绑定同一账号的N个终端,或者这N个具备通话能力的终端通过其他方式相关联。
以上所述,上述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (14)

  1. 一种语音通信方法,其特征在于,包括:
    第一终端接收到语音来电;
    当所述第一终端判定所述语音来电超时未接听,或者所述第一终端正在通话中,所述第一终端获取多个终端上报的用户位置;其中,所述多个终端中的任一终端与所述第一终端不同;
    所述第一终端根据所述多个终端上报的所述用户位置,从所述多个终端中确定出第二终端;其中,在所述多个终端中所述第二终端与用户距离最近;
    所述第一终端将所述语音来电转移至所述第二终端上接听。
  2. 根据权利要求1所述的方法,其特征在于,所述第一终端获取多个终端上报的用户位置,具体包括:
    所述第一终端获取多个终端上报的用户的声纹能量值;所述声纹能量值越高表示上报所述声纹能量值的终端距离用户越近;
    所述第一终端根据所述多个终端上报的所述用户位置,从所述多个终端中确定出第二终端,具体包括:
    所述第一终端根据所述多个终端上报的声纹能量值,从所述多个终端中确定出第二终端;其中,在所述多个终端中所述第二终端的声纹能量值最高。
  3. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述第一终端获取所述多个终端上报的通话频率;所述通话频率为所述终端的通话次数与所述第一终端以及所述多个终端的总通话次数之比;
    当所述多个终端中有多个距离用户最近的终端时,第一终端根据所述通话频率,从所述多个距离用户最近的终端中,确定出所述第二终端;其中,在所述多个距离用户最近的终端中所述第二终端通话频率最大。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    所述第一终端获取所述多个终端上报的语音能力优先级;所述语音能力优先级由终端的设备类型确定;
    当所述多个距离用户最近的终端中有多个通话频率最大的终端时,第一终端根据所述语音能力优先级,从所述多个距离用户最近且通话频率最大的终端中,确定出所述第二终端;其中,在所述多个距离用户最近且通话频率最大的终端中,所述第二终端的语音能力优先级最高。
  5. 根据权利要求1所述的方法,其特征在于,所述第一终端将所述语音来电转移至第二终端接听,具体包括:
    所述第一终端接收联系人的终端发送的所述联系人的语音数据,并接收所述第二终端发送的用户的语音数据;
    所述第一终端将所述联系人的语音数据发送给所述第二终端,并将所述用户的语音数据发送给所述联系人的终端。
  6. 根据权利要求5所述的方法,其特征在于,在所述第一终端接收联系人的终端发送的联系人的语音数据,并接收所述第二终端发送的用户的语音数据之前,所述方法还包括:
    所述第一终端发送来电指令给所述第二终端;所述来电指令用于指示所述第二终端输出来电提醒;
    所述第一终端接收所述第二终端发送的接听确认;
    所述第一终端接收联系人的终端发送的联系人的语音数据,并接收所述第二终端发送的用户的语音数据,具体包括:
    响应于所述接听确认,所述第一终端接收联系人的终端发送的联系人的语音数据,并接收所述第二终端发送的用户的语音数据。
  7. 根据权利要求1所述的方法,其特征在于,在所述第一终端将所述语音来电转移至所述第二终端上接听之前,所述方法还包括:
    所述第一终端与所述第二终端建立连接。
  8. 一种语音通信方法,其特征在于,包括:
    第一终端接收语音来电;
    当所述第一终端判断所述语音来电超时未接听,或者所述第一终端正在通话中,所述第一终端获取多个终端上报的用户位置、通话频率、语音能力优先级和设备状态;其中,所述多个终端中的任一终端与所述第一终端不同;
    所述第一终端根据所述多个终端上报的用户位置、通话频率、语音能力优先级和设备状态值,从所述多个终端中确定出第二终端;
    所述第一终端将所述语音来电转移至所述第二终端上接听。
  9. 一种语音通信方法,其特征在于,包括:
    当第一终端正在与联系人的终端进行语音通话时,所述第一终端接收用户的通话转移操作;
    响应于所述通话转移操作,所述第一终端获取多个终端上报的用户位置;其中,所述多个终端中的任一终端与所述第一终端不同;
    所述第一终端根据所述多个终端上报的所述用户位置,从所述多个终端中确定出第二终端;其中,在所述多个终端中所述第二终端与用户距离最近;
    所述第一终端将所述语音通话转移至所述第二终端。
  10. 一种终端,包括:存储器、收发器和至少一个处理器,所述存储器中存储有程序代码,所述存储器、所述收发器与所述至少一个处理器通信连接,所述处理器运行所述程序代码以指示所述终端执行权利要求1-9中任一项所述的方法。
  11. 一种计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-9任一项所述的方法。
  12. 一种计算机存储介质,包括:计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-9任一项所述的方法。
  13. 一种语音通信方法,其特征在于,包括:
    中枢设备接收第一终端发送的通话转移请求;
    响应于所述第一终端发送的通话转移请求,所述中枢设备获取多个终端上报的用户位置;所述多个终端中的任一终端与所述第一终端不同;
    所述中枢设备根据所述多个终端上报的用户位置,从所述多个终端中确定出第二终端;其中,在所述多个终端中所述第二终端与用户距离最近;
    所述中枢设备发送来电通知给所述第二终端,所述来电通知用于所述第二终端输出来电提醒。
  14. 一种语音通信方法,其特征在于,包括:
    服务器接收到第一终端发送的通话转移请求;
    响应于所述通话转移请求,所述服务器获取多个终端上报的用户位置;所述多个终端中的任一终端与所述第一终端不同;
    所述服务器根据所述多个终端上报的用户位置,从所述多个终端中确定出第二终端;其中,在所述多个终端中所述第二终端与用户距离最近;
    所述服务器发送来电通知给所述第二终端,所述来电通知用于所述第二终端输出来电提醒。
PCT/CN2020/095751 2019-06-14 2020-06-12 一种语音通信方法及相关装置 WO2020249062A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910517494.3A CN110191241B (zh) 2019-06-14 2019-06-14 一种语音通信方法及相关装置
CN201910517494.3 2019-06-14

Publications (1)

Publication Number Publication Date
WO2020249062A1 true WO2020249062A1 (zh) 2020-12-17

Family

ID=67721888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/095751 WO2020249062A1 (zh) 2019-06-14 2020-06-12 一种语音通信方法及相关装置

Country Status (2)

Country Link
CN (1) CN110191241B (zh)
WO (1) WO2020249062A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11979516B2 (en) 2020-01-22 2024-05-07 Honor Device Co., Ltd. Audio output method and terminal device

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110191241B (zh) * 2019-06-14 2021-06-29 华为技术有限公司 一种语音通信方法及相关装置
CN110737337A (zh) * 2019-10-18 2020-01-31 向勇 一种人机交互系统
CN112929481B (zh) * 2019-11-20 2022-04-12 Oppo广东移动通信有限公司 来电处理方法和装置、电子设备、计算机可读存储介质
CN112887926B (zh) 2019-11-30 2022-08-09 华为技术有限公司 一种呼叫方法及装置
CN113098920A (zh) * 2020-01-09 2021-07-09 京东方科技集团股份有限公司 会话建立方法、装置及其相关设备
CN114245328B (zh) * 2020-02-29 2022-12-27 华为技术有限公司 语音通话转移方法及电子设备
CN113364921A (zh) * 2020-03-05 2021-09-07 华为技术有限公司 通话方法、系统及设备
CN111445612A (zh) * 2020-04-02 2020-07-24 北京声智科技有限公司 开锁方法、控制设备、电子设备及门禁控制系统
CN111786963B (zh) * 2020-06-12 2022-10-28 青岛海尔科技有限公司 通信过程的实现方法和装置、存储介质及电子装置
CN111988426B (zh) * 2020-08-31 2023-07-18 深圳康佳电子科技有限公司 基于声纹识别的通信方法、装置、智能终端及存储介质
CN112422733A (zh) * 2020-11-19 2021-02-26 Oppo广东移动通信有限公司 通知提醒方法、装置、终端及存储介质
CN113296729A (zh) * 2021-06-01 2021-08-24 青岛海尔科技有限公司 提示信息播报方法、装置、和系统、存储介质及电子装置
CN113572731B (zh) * 2021-06-18 2022-08-26 荣耀终端有限公司 语音通话方法、个人计算机、终端和计算机可读存储介质
CN113595866A (zh) * 2021-06-21 2021-11-02 青岛海尔科技有限公司 多设备间音视频通话建立方法以及装置
CN116249081A (zh) * 2021-12-08 2023-06-09 荣耀终端有限公司 分布式通话冲突处理方法、系统、电子设备及存储介质
CN113923305B (zh) * 2021-12-14 2022-06-21 荣耀终端有限公司 一种多屏协同的通话方法、系统、终端及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100009665A1 (en) * 2008-07-14 2010-01-14 Embarq Holdings Company, Llc System and method for providing emergency call forwarding services
CN104427137A (zh) * 2013-08-29 2015-03-18 鸿富锦精密工业(深圳)有限公司 电话装置、服务器及自动转接电话的方法
CN104468962A (zh) * 2013-09-24 2015-03-25 联想(北京)有限公司 一种呼叫请求的处理方法及电子设备
WO2016197674A1 (zh) * 2016-01-05 2016-12-15 中兴通讯股份有限公司 一种呼叫转移方法、装置及系统
CN106535149A (zh) * 2016-11-25 2017-03-22 深圳市国华识别科技开发有限公司 终端自动呼叫转移方法与系统
WO2017140157A1 (zh) * 2016-02-16 2017-08-24 上海斐讯数据通信技术有限公司 一种基于WiFi的呼叫转移方法及智能终端
CN110191241A (zh) * 2019-06-14 2019-08-30 华为技术有限公司 一种语音通信方法及相关装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102752479B (zh) * 2012-05-30 2014-12-03 中国农业大学 蔬菜病害场景检测方法
KR102213640B1 (ko) * 2013-09-23 2021-02-08 삼성전자주식회사 홈 네트워크 시스템에서 사용자 디바이스가 홈 디바이스 관련 정보를 전달하는 장치 및 방법
CN104581665B (zh) * 2014-12-17 2018-11-20 广东欧珀移动通信有限公司 一种通话转移的方法及装置
CN105101131A (zh) * 2015-06-18 2015-11-25 小米科技有限责任公司 来电接听方法及装置
CN105228118B (zh) * 2015-09-28 2019-01-04 小米科技有限责任公司 呼叫转移方法、装置和终端设备
CN105959191A (zh) * 2016-07-01 2016-09-21 上海卓易云汇智能技术有限公司 一种智能接听来电的智能家居系统的控制方法及其系统
CN106713682A (zh) * 2016-11-25 2017-05-24 深圳市国华识别科技开发有限公司 呼叫转移方法与系统
CN106817683A (zh) * 2017-04-12 2017-06-09 北京奇虎科技有限公司 来电转移信息的显示方法、装置及系统
CN108735216B (zh) * 2018-06-12 2020-10-16 广东小天才科技有限公司 一种基于语义识别的语音搜题方法及家教设备
CN108900502B (zh) * 2018-06-27 2021-05-11 佛山市云米电器科技有限公司 一种基于家居智能互联的通信方法、系统


Also Published As

Publication number Publication date
CN110191241A (zh) 2019-08-30
CN110191241B (zh) 2021-06-29

Similar Documents

Publication Publication Date Title
WO2020249062A1 (zh) 一种语音通信方法及相关装置
CN113225693B (zh) 一种蓝牙连接方法、设备及系统
CN110138937B (zh) 一种通话方法、设备及系统
CN112640505B (zh) 一种传输速率的控制方法及设备
WO2020107485A1 (zh) 一种蓝牙连接方法及设备
WO2020244623A1 (zh) 一种空鼠模式实现方法及相关设备
CN111601199A (zh) 无线耳机盒及系统
WO2021043219A1 (zh) 一种蓝牙回连方法及相关装置
WO2022100610A1 (zh) 投屏方法、装置、电子设备及计算机可读存储介质
WO2021175300A1 (zh) 数据传输方法、装置、电子设备和可读存储介质
WO2021083128A1 (zh) 一种声音处理方法及其装置
CN110401767B (zh) 信息处理方法和设备
WO2020216098A1 (zh) 一种跨电子设备转接服务的方法、设备以及系统
WO2021017909A1 (zh) 一种通过nfc标签实现功能的方法、电子设备及系统
WO2022042265A1 (zh) 通信方法、终端设备及存储介质
WO2020118641A1 (zh) 一种麦克风mic切换方法及设备
WO2022156555A1 (zh) 屏幕亮度的调整方法、装置和终端设备
WO2022206825A1 (zh) 一种调节音量的方法、系统及电子设备
WO2022105674A1 (zh) 来电接听方法、电子设备及存储介质
WO2021043250A1 (zh) 一种蓝牙通信方法及相关装置
CN109285563B (zh) 在线翻译过程中的语音数据处理方法及装置
WO2022135144A1 (zh) 自适应显示方法、电子设备及存储介质
CN113467747B (zh) 音量调节方法、电子设备及存储介质
CN115525366A (zh) 一种投屏方法及相关装置
CN115706755A (zh) 回声消除方法、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20823325

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20823325

Country of ref document: EP

Kind code of ref document: A1