CN110191241A - Voice communication method and related apparatus - Google Patents
- Publication number
- CN110191241A CN110191241A CN201910517494.3A CN201910517494A CN110191241A CN 110191241 A CN110191241 A CN 110191241A CN 201910517494 A CN201910517494 A CN 201910517494A CN 110191241 A CN110191241 A CN 110191241A
- Authority
- CN
- China
- Prior art keywords
- terminal
- user
- voice
- value
- call
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42187—Lines and connections with preferential service
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42348—Location-based services which utilize the location information of a target
- H04M3/42357—Location-based services which utilize the location information of a target where the information is provided to a monitoring entity such as a potential calling party or a call processing server
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42365—Presence services providing information on the willingness to communicate or the ability to communicate in terms of media capability or network connectivity
- H04M3/42374—Presence services providing information on the willingness to communicate or the ability to communicate in terms of media capability or network connectivity where the information is provided to a monitoring entity such as a potential calling party or a call processing server
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/54—Arrangements for diverting calls for one subscriber to another predetermined subscriber
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Telephone Function (AREA)
Abstract
This application discloses a voice communication method. First, a first terminal receives a voice incoming call. When the first terminal determines that the incoming call has timed out unanswered, or that the first terminal is currently busy, the first terminal obtains the user locations reported by multiple terminals, each of which is different from the first terminal. Based on the reported user locations, the first terminal determines a second terminal from the multiple terminals, namely the terminal closest to the user, and transfers the voice incoming call to the second terminal to be answered there. In this way, incoming calls and voice calls are transferred between call-capable devices, the user is prevented from missing incoming calls on those devices, and the user experience is improved.
Description
Technical field
This application relates to the field of communication technology, and in particular to a voice communication method and related apparatus.
Background art
With the development of communication technology, intelligent control technology and information technology have advanced by leaps and bounds. While mobile smart terminals have become widespread, intelligence has also gradually been applied to traditional home equipment, and the concept of the smart home has entered everyday life: users can control the smart devices in their home through a mobile terminal, making life more convenient.
At present, the call-capable smart devices on a home LAN each handle calls independently. For example, a voice call received on a PC can only be answered on the PC, and a call received on a mobile phone can only be answered on the phone. At home, when the user is relatively far from the call device, for example the phone is in the bedroom while the user is in the living room, the user may not perceive the device's call alert (ringtone or vibration) and may therefore miss the call, which degrades the user experience.
Summary of the invention
This application provides a voice communication method and related apparatus that realize the transfer of incoming calls and voice calls between the call-capable devices in a home, prevent the user from missing incoming calls on those devices, and improve the user experience.
In a first aspect, this application provides a voice communication method, comprising: first, a first terminal receives a voice incoming call. When the first terminal determines that the incoming call has timed out unanswered, or that the first terminal is currently busy, the first terminal obtains the user locations reported by multiple terminals, each of which is different from the first terminal. Based on the reported user locations, the first terminal determines a second terminal from the multiple terminals, namely the terminal closest to the user. The first terminal then transfers the voice incoming call to the second terminal to be answered.
Through the voice communication method proposed by this application, after a terminal receives a voice incoming call, if the terminal is occupied or the call times out unanswered, the terminal can determine the optimal answering terminal from the other call-capable terminals according to the user location reported by each of them, and transfer the voice incoming call to that answering terminal. In this way, by transferring incoming voice calls between the call terminals in the home, the user is prevented from missing voice calls on a call terminal, and the user experience is improved.
In one possible implementation, the first terminal obtaining the user locations reported by the multiple terminals specifically comprises: the first terminal obtains the voiceprint energy value of the user reported by each of the multiple terminals, where a higher voiceprint energy value indicates that the reporting terminal is closer to the user. After obtaining the reported values, the first terminal determines the second terminal from the multiple terminals according to the voiceprint energy values; the second terminal is the terminal with the highest voiceprint energy value. In this way, the first terminal can use the user's voiceprint energy values to identify the terminal closest to the user and transfer the voice incoming call to that terminal, preventing the user from missing the call.
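The selection rule described above, namely choosing the terminal that reports the highest voiceprint energy value, can be sketched as follows. This is a minimal illustration only; the terminal names and the shape of the `reports` mapping are assumptions, not part of this application:

```python
# Each candidate terminal reports a voiceprint energy value for the user's
# voice; a higher value indicates that the terminal is closer to the user.
def select_nearest_terminal(reports):
    """reports: mapping of terminal id -> reported voiceprint energy value."""
    return max(reports, key=reports.get)

# Hypothetical reports from three home devices:
reports = {"tv": 0.42, "speaker": 0.87, "tablet": 0.15}
print(select_nearest_terminal(reports))  # -> speaker
```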
In one possible implementation, the method further includes: first, the first terminal obtains the call frequency reported by each of the multiple terminals, where the call frequency of a terminal is the ratio of that terminal's number of calls to the total number of calls of all terminals connected to the router. Then, when several of the multiple terminals are equally closest to the user, the first terminal determines the second terminal from among those closest terminals according to the call frequency; the second terminal is the closest terminal with the highest call frequency. In this way, when more than one terminal is closest to the user, the terminal the user most commonly calls with can be selected, improving the user experience.
In one possible implementation, the method further includes: first, the first terminal obtains the speech capability priority reported by each of the multiple terminals, where the speech capability priority is determined by the device type of the terminal. When several of the terminals closest to the user share the highest call frequency, the first terminal determines the second terminal from among them according to the speech capability priority; the second terminal is the one with the highest speech capability priority. In this way, when more than one terminal is both closest to the user and highest in call frequency, the terminal with the highest speech capability priority can be selected, further improving the user experience.
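Taken together, the implementations above form a cascade of tie-breakers: voiceprint energy first, then call frequency, then speech capability priority. A minimal sketch under that reading; the `Report` structure, field names and example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Report:
    terminal: str
    energy: float      # voiceprint energy (higher = closer to the user)
    call_freq: float   # terminal's call count / total call count on the router
    priority: int      # speech capability priority derived from device type

def select_second_terminal(reports):
    # 1. Keep only the terminals closest to the user (highest voiceprint energy).
    best_energy = max(r.energy for r in reports)
    nearest = [r for r in reports if r.energy == best_energy]
    # 2. Among those, keep the terminals the user calls with most often.
    best_freq = max(r.call_freq for r in nearest)
    frequent = [r for r in nearest if r.call_freq == best_freq]
    # 3. Break any remaining tie by speech capability priority.
    return max(frequent, key=lambda r: r.priority).terminal

reports = [
    Report("tv", energy=0.9, call_freq=0.2, priority=3),
    Report("speaker", energy=0.9, call_freq=0.5, priority=2),
    Report("phone", energy=0.9, call_freq=0.5, priority=5),
]
print(select_second_terminal(reports))  # all tie on energy; "phone" wins on priority
```

If the cascade still ends in a tie, `max` simply returns the first of the remaining candidates, which is an acceptable choice here.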
In one possible implementation, the first terminal transferring the voice incoming call to the second terminal to be answered specifically comprises: the first terminal receives the contact's voice data sent by the contact's terminal, and receives the user's voice data sent by the second terminal. The first terminal forwards the contact's voice data to the second terminal, and forwards the user's voice data to the contact's terminal.
In one possible implementation, before the first terminal receives the contact's voice data sent by the contact's terminal and the user's voice data sent by the second terminal, the method further includes: first, the first terminal sends an incoming-call indication to the second terminal, where the indication instructs the second terminal to output an incoming-call alert. Then, the first terminal receives an answer confirmation sent by the second terminal. In response to the answer confirmation, the first terminal receives the contact's voice data sent by the contact's terminal, and receives the user's voice data sent by the second terminal.
In one possible implementation, before the first terminal transfers the voice incoming call to the second terminal to be answered, the method further includes: the first terminal establishes a connection with the second terminal.
In one possible implementation, after the second terminal answers the contact's incoming call, the first terminal may also periodically obtain the user location reported by each of the multiple terminals other than the second terminal, and determine a third terminal, where the third terminal is the terminal closest to the user among the multiple terminals other than the second terminal. After determining the third terminal, the first terminal can transfer the voice call to the third terminal instead of the second terminal.
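The periodic follow-me handover described above can be sketched as a polling loop. The helper names, the poll interval and the fixed number of rounds are assumptions for illustration; this application does not specify them:

```python
import time

def follow_user(poll_locations, transfer_call, current, interval_s=5.0, rounds=3):
    """Periodically re-check which terminal is nearest and hand the call over.

    poll_locations() -> dict of terminal -> voiceprint energy for the
    candidate terminals; transfer_call(terminal) moves the active call.
    """
    for _ in range(rounds):
        time.sleep(interval_s)
        reports = poll_locations()
        nearest = max(reports, key=reports.get)
        if nearest != current:
            transfer_call(nearest)   # hand the call to the third terminal
            current = nearest
    return current

moves = []
polls = iter([
    {"tv": 0.3, "phone": 0.8},   # user is still near the phone
    {"tv": 0.9, "phone": 0.2},   # user has walked over to the TV
])
follow_user(lambda: next(polls), moves.append, current="phone",
            interval_s=0.0, rounds=2)
print(moves)  # -> ['tv']
```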
In one possible implementation, after the second terminal answers the contact's incoming call, the first terminal may also receive a switching operation from the user. After receiving the switching operation, the first terminal can obtain the user location reported by each of the multiple terminals other than the second terminal, and determine a third terminal, where the third terminal is the terminal closest to the user among the multiple terminals other than the second terminal. After determining the third terminal, the first terminal can transfer the voice call to the third terminal instead of the second terminal.
In one possible implementation, the first terminal transfers the contact's incoming call to the second terminal, but detects that the second terminal outputs an incoming-call alert without the call being answered before the timeout. The first terminal can then determine a third terminal from the remaining terminals other than the second terminal, according to the user locations they report, where the third terminal is the terminal closest to the user among those remaining terminals. After determining the third terminal, the first terminal can transfer the contact's incoming call to the third terminal instead of the second terminal.
In one possible implementation, after the first terminal receives the voice incoming call, it may first determine whether its call transfer function is enabled. If it is enabled, then when the incoming call times out unanswered or the first terminal is busy, the first terminal obtains the user locations reported by the multiple terminals. If the call transfer function is disabled, the first terminal outputs an incoming-call alert and waits for the user's answering operation.
In a second aspect, this application provides another voice communication method, comprising: first, a first terminal receives a voice incoming call. When the first terminal determines that the incoming call has timed out unanswered, or that the first terminal is currently busy, the first terminal obtains the user location, call frequency, speech capability priority and device state reported by each of multiple terminals, each of which is different from the first terminal. Then, according to the reported user locations, call frequencies, speech capability priorities and device state values, the first terminal determines a second terminal from the multiple terminals. Finally, the first terminal transfers the voice incoming call to the second terminal to be answered.
Through this voice communication method, after a terminal receives a voice incoming call, if the terminal is occupied or the call times out unanswered, the terminal can determine the optimal answering terminal from the other call-capable terminals according to the user location, speech capability priority, call frequency and device state reported by each of them, and transfer the incoming call and voice call to that answering terminal. In this way, by transferring incoming calls and voice calls between the call terminals in the home, the user is prevented from missing voice calls on a call terminal, and the user experience is improved.
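This application does not fix how the four reported quantities are combined, so the weighted score below is purely illustrative: the weights, the normalisation of every value to [0, 1], and the field order are all assumptions.

```python
def score(report, weights=(0.5, 0.2, 0.2, 0.1)):
    """report: (energy, call_freq, priority, state), each normalised to [0, 1];
    state is 1.0 when the device is idle and 0.0 when it is busy."""
    w_e, w_f, w_p, w_s = weights
    energy, call_freq, priority, state = report
    return w_e * energy + w_f * call_freq + w_p * priority + w_s * state

def select_terminal(reports):
    """reports: dict of terminal -> (energy, call_freq, priority, state)."""
    return max(reports, key=lambda t: score(reports[t]))

reports = {
    "tv":      (0.9, 0.1, 0.5, 1.0),   # closest, but rarely used for calls
    "speaker": (0.8, 0.6, 0.4, 1.0),   # slightly farther, frequently used
}
print(select_terminal(reports))  # -> speaker (0.70 vs 0.67)
```

Giving the user location the largest weight keeps the behaviour close to the first aspect's nearest-terminal rule, while the other terms let a frequently used, idle, high-capability device win a near-tie.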
In a third aspect, this application provides another voice communication method, comprising: first, while a first terminal is in a voice call with a contact's terminal, the first terminal receives a call transfer operation from the user. In response to the call transfer operation, the first terminal obtains the user locations reported by multiple terminals, each of which is different from the first terminal. Then, according to the reported user locations, the first terminal determines a second terminal from the multiple terminals, namely the terminal closest to the user. Finally, the first terminal transfers the voice call to the second terminal.
Through this voice communication method, while the user is in a call with a contact through the first terminal (for example a smartphone), the first terminal can determine a second terminal for the voice call according to the user locations reported by the other terminals, and transfer the voice call to the second terminal (for example a smart television). In this way, the quality of communication between the user and the contact can be maintained as the user moves around indoors.
In a fourth aspect, this application provides a terminal, including one or more processors, one or more memories and a transceiver. The one or more memories and the transceiver are coupled to the one or more processors. The one or more memories store computer program code comprising computer instructions which, when executed by the one or more processors, cause the terminal to perform the voice communication method in any possible implementation of any of the above aspects.
In a fifth aspect, an embodiment of this application provides a computer storage medium, including computer instructions which, when run on a terminal, cause the terminal to perform the voice communication method in any possible implementation of any of the above aspects.
In a sixth aspect, an embodiment of this application provides a computer program product which, when run on a computer, causes the computer to perform the voice communication method in any possible implementation of any of the above aspects.
In a seventh aspect, this application provides another voice communication method, comprising: first, a hub device receives a call transfer request sent by a first terminal. In response to the request, the hub device obtains the user locations reported by multiple terminals, each of which is different from the first terminal; the first terminal and the multiple terminals are all connected to the hub device. Then, according to the reported user locations, the hub device determines a second terminal from the multiple terminals, namely the terminal closest to the user. Finally, the hub device sends an incoming-call indication to the second terminal, instructing the second terminal to output an incoming-call alert.
Through this voice communication method, after a terminal receives a voice incoming call, if the terminal is occupied or the call times out unanswered, the hub device can determine the optimal answering terminal from the other call-capable terminals connected to it, according to the user location reported by each of them, and transfer the voice incoming call to that answering terminal. In this way, by transferring incoming voice calls between the call terminals in the home, the user is prevented from missing voice calls on a call terminal, and the user experience is improved.
In one possible implementation, after the hub device sends the incoming-call indication to the second terminal, the method further includes: the hub device receives an answer command sent by the second terminal. In response to the answer command, the hub device receives the contact's voice data sent by the contact's terminal, and receives the user's voice data sent by the second terminal. The hub device forwards the contact's voice data to the second terminal, and forwards the user's voice data to the contact's terminal.
In an eighth aspect, this application provides another voice communication method, comprising: first, a server receives a call transfer request sent by a first terminal. In response to the request, the server obtains the user locations reported by multiple terminals, each of which is different from the first terminal. Then, according to the reported user locations, the server determines a second terminal from the multiple terminals, namely the terminal closest to the user. Finally, the server sends an incoming-call indication to the second terminal, instructing the second terminal to output an incoming-call alert.
Through this voice communication method, after a terminal receives a voice incoming call, if the terminal is occupied or the call times out unanswered, the server can determine the optimal answering terminal from the other call-capable terminals connected to it, according to the user location reported by each of them, and transfer the voice incoming call to that answering terminal. In this way, by transferring incoming voice calls between the call terminals, the user is prevented from missing voice calls on a call terminal, and the user experience is improved.
In one possible implementation, after the server sends the incoming-call indication to the second terminal, the method further includes: the server receives an answer command sent by the second terminal. In response to the answer command, the server receives the contact's voice data sent by the contact's terminal, and receives the user's voice data sent by the second terminal. The server forwards the contact's voice data to the second terminal, and forwards the user's voice data to the contact's terminal.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of a terminal provided by an embodiment of this application;
Fig. 2 is a schematic diagram of a network architecture provided by an embodiment of this application;
Fig. 3 is a flow diagram of a voice communication method provided by an embodiment of this application;
Fig. 4 is a schematic diagram of a voice call scenario provided by an embodiment of this application;
Fig. 5 is a schematic diagram of a voice call scenario provided by another embodiment of this application;
Fig. 6 is a schematic diagram of a voice call scenario provided by another embodiment of this application;
Fig. 7 is a flow diagram of a voice communication method provided by another embodiment of this application;
Fig. 8A-8C are schematic diagrams of scenarios of a voice communication method provided by another embodiment of this application;
Fig. 9A-9C are schematic diagrams of scenarios of a voice communication method provided by another embodiment of this application;
Fig. 10 is a flow diagram of a voice communication method provided by another embodiment of this application;
Fig. 11 is a schematic diagram of a network architecture provided by another embodiment of this application;
Fig. 12 is a flow diagram of a voice communication method provided by another embodiment of this application.
Detailed description of the embodiments
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B can mean A or B. "And/or" in the text merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" can mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. A feature qualified by "first" or "second" may therefore explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more.
Fig. 1 shows a structural schematic diagram of the terminal 100.
The embodiments are described below taking the terminal 100 as an example. It should be understood that the terminal 100 shown in Fig. 1 is only one example; the terminal 100 may have more or fewer components than shown in Fig. 1, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The terminal 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the terminal 100. In other embodiments of this application, the terminal 100 may include more or fewer components than illustrated, may combine certain components, may split certain components, or may arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices or may be integrated in one or more processors.
The controller can be the nerve center and command center of the terminal 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of fetching and executing instructions.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus, including one serial data line (SDA) and one serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses. The processor 110 can be coupled to the touch sensor 180K, a charger, a flash, the camera 193 and so on through different I2C bus interfaces. For example, the processor 110 can be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to realize the touch function of the terminal 100.
The I2S interface can be used for audio communication. In some embodiments, the processor 110 may include multiple groups of I2S buses. The processor 110 can be coupled to the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface to realize the function of answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 can be coupled to the wireless communication module 160 through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to realize the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a bidirectional communication bus that converts the data to be transmitted between serial and parallel communication. In some embodiments, the UART interface is typically used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to realize the function of playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and so on. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the terminal 100, and communicates with the display screen 194 through the DSI interface to realize the display function of the terminal 100.
The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on. The GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the terminal 100, to transmit data between the terminal 100 and peripheral devices, and to connect headphones so that audio is played through them. The interface can also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of this application are only a schematic illustration and do not constitute a structural limitation on the terminal 100. In other embodiments of this application, the terminal 100 may also adopt interface connection modes different from those in the above embodiment, or a combination of multiple interface connection modes.
The charge management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired-charging embodiments, the charge management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless-charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the terminal 100. While charging the battery 142, the charge management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may alternatively be disposed in the processor 110. In still other embodiments, the power management module 141 and the charge management module 140 may also be disposed in the same device.
The wireless communication function of the terminal 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the terminal 100 may be used to cover a single communication band or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 may provide wireless communication solutions applied to the terminal 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and send the result to the modem processor for demodulation. The mobile communication module 150 may also amplify a signal modulated by the modem processor and convert it into electromagnetic waves radiated out through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a low-frequency baseband signal to be sent into a medium/high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal, and then send the demodulated low-frequency baseband signal to the baseband processor. After being processed by the baseband processor, the low-frequency baseband signal is delivered to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, and the like), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110, and disposed in the same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide wireless communication solutions applied to the terminal 100, including wireless local area network (wireless local area networks, WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (bluetooth, BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technologies. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves radiated out through the antenna 2.
In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The terminal 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, and connects the display screen 194 and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is configured to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The terminal 100 may implement the photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is configured to process data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light is transferred to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP may also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and may optimize parameters such as the exposure and color temperature of the photographed scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is configured to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the terminal 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is configured to process digital signals. In addition to digital image signals, it can also process other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The terminal 100 may support one or more video codecs, so that the terminal 100 can play or record videos in multiple coding formats, such as moving picture experts group (MPEG) 1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computation processor. By drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, it processes input information rapidly and can also learn continuously. Through the NPU, applications such as intelligent cognition of the terminal 100 may be implemented, for example image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example storing files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the terminal 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the terminal 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The terminal 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.
The audio module 170 is configured to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "loudspeaker", is configured to convert an audio electrical signal into a sound signal. The terminal 100 can listen to music, or answer a hands-free call, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is configured to convert an audio electrical signal into a sound signal. When the terminal 100 answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic" or "mike", is configured to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can input a sound signal into the microphone 170C by speaking with the mouth close to it. At least one microphone 170C may be disposed in the terminal 100. In other embodiments, two microphones 170C may be disposed in the terminal 100, which, in addition to collecting sound signals, may also implement a noise reduction function. In still other embodiments, three, four, or more microphones 170C may be disposed in the terminal 100, to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
The earphone interface 170D is configured to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed in the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the terminal 100 determines the pressure intensity according to the change in capacitance. When a touch operation acts on the display screen 194, the terminal 100 detects the intensity of the touch operation through the pressure sensor 180A. The terminal 100 may also calculate the touch position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
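The threshold behavior above can be sketched as a small dispatch routine. This is a hypothetical illustration: the threshold value, the icon name, and the instruction names are assumptions, not values from the patent.

```python
# Hypothetical sketch of the pressure-threshold dispatch described above.
# FIRST_PRESSURE_THRESHOLD, the icon name, and the instruction names are
# illustrative assumptions, not values from the patent.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized reading derived from the capacitance change

def dispatch_touch(target: str, intensity: float) -> str:
    """Map a touch on the short-message icon to an instruction by pressure intensity."""
    if target != "messages_icon":
        return "ignore"
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_messages"  # light press: view the short message
    return "new_message"        # firm press: create a new short message
```

Under this sketch, a light tap (intensity 0.2) views messages, while a firm press (0.8) on the same position composes a new one.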
The gyro sensor 180B may be used to determine the motion posture of the terminal 100. In some embodiments, the angular velocity of the terminal 100 around three axes (that is, the x, y, and z axes) may be determined through the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during shooting. Illustratively, when the shutter is pressed, the gyro sensor 180B detects the angle at which the terminal 100 shakes, calculates from that angle the distance the lens module needs to compensate, and lets the lens counteract the shake of the terminal 100 through reverse motion, thereby achieving image stabilization. The gyro sensor 180B may also be used in navigation and somatosensory game scenarios.
The barometric pressure sensor 180C is configured to measure air pressure. In some embodiments, the terminal 100 calculates the altitude from the air pressure value measured by the barometric pressure sensor 180C, to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The terminal 100 may use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the terminal 100 is a flip phone, the terminal 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then set features such as automatic unlocking on flip-open according to the detected opening/closing state of the leather case or of the flip cover.
The acceleration sensor 180E can detect the magnitude of acceleration of the terminal 100 in all directions (generally on three axes). When the terminal 100 is stationary, the magnitude and direction of gravity can be detected. It may also be used to identify the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and the like.
The distance sensor 180F is configured to measure distance. The terminal 100 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the terminal 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The terminal 100 emits infrared light outward through the light emitting diode, and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, the terminal 100 can determine that there is an object nearby; when insufficient reflected light is detected, the terminal 100 can determine that there is no object nearby. The terminal 100 can use the proximity light sensor 180G to detect that the user is holding the terminal 100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used for automatic unlocking and screen locking in leather case mode and pocket mode.
The ambient light sensor 180L is configured to perceive the ambient light brightness. The terminal 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L may also be used for automatic white balance adjustment when taking photos. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the terminal 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is configured to collect fingerprints. The terminal 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 180J is configured to detect temperature. In some embodiments, the terminal 100 executes a temperature processing policy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the terminal 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the terminal 100 heats the battery 142 to avoid an abnormal shutdown of the terminal 100 caused by low temperature. In some other embodiments, when the temperature is lower than yet another threshold, the terminal 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
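The three temperature-policy embodiments above amount to independent threshold checks. The following is an illustrative sketch only; the numeric thresholds and action names are assumptions, since the patent does not specify concrete values.

```python
# Illustrative sketch of the three temperature-policy embodiments above.
# The threshold values and action names are assumptions for demonstration.
HIGH_TEMP_C = 45.0       # above this: throttle the processor near sensor 180J
LOW_TEMP_C = 0.0         # below this: heat battery 142
CRITICAL_LOW_C = -10.0   # below this: also boost the battery output voltage

def thermal_actions(temp_c: float) -> list:
    """Return the mitigation actions for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_processor")      # reduce performance, cut power draw
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")            # avoid abnormal low-temperature shutdown
    if temp_c < CRITICAL_LOW_C:
        actions.append("boost_battery_voltage")   # keep the supply stable in deep cold
    return actions
```

Note the checks are independent, matching the patent's separate embodiments: a very cold reading triggers both the battery-heating and the voltage-boost actions.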
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed in the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also referred to as a "touch screen". The touch sensor 180K is configured to detect a touch operation acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the terminal 100, at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone, combined into a bone conduction earphone. The audio module 170 can parse out a voice signal based on the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 180M, to implement a voice function. The application processor can parse out heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The terminal 100 can receive key input, and generate key signal input related to user settings and function control of the terminal 100.
The motor 191 can generate a vibration prompt. The motor 191 may be used for incoming call vibration prompts, and may also be used for touch vibration feedback. For example, touch operations acting on different applications (such as taking photos or playing audio) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effects may also be customized.
The indicator 192 may be an indicator light, and may be used to indicate the charging state and power change, and to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is configured to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out of the SIM card interface 195 to achieve contact with and separation from the terminal 100. The terminal 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The terminal 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the terminal 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
Currently, in the prior art, when it is inconvenient for a user to answer a call on a mobile terminal, the user can forward the call in the following ways. 1. The user enables the call forwarding function of the mobile terminal in advance and fills in the phone number of an answering terminal. The mobile terminal sends the phone number of the answering terminal to a network-side device of the mobile communication network, and the network-side device can bind the number of the mobile terminal with the phone number of the answering terminal. When the network-side device receives a call request directed at the mobile terminal, it transfers the call request to the answering terminal, so that a call made to the mobile terminal can be answered on the answering terminal. However, in this way, once the user enables the call forwarding function of the mobile terminal, all calls made to the mobile terminal are transferred to the answering terminal. If the user wants to continue answering calls on the mobile terminal but forgets to turn off the call forwarding function, many calls will be missed, causing inconvenience to the user.
2. The user can connect the mobile terminal to an earphone or a speaker through Bluetooth or Wi-Fi. When the mobile terminal receives an incoming call, it transfers the incoming call and the voice call to the earphone or speaker by default, and collects the user's voice through the earphone or speaker. When the user needs to talk on the mobile terminal itself, the mobile terminal must receive the user's selection input before the call can be switched back to the mobile terminal. This imposes cumbersome operation steps on the user, and while the voice call is being switched the user easily misses part of the conversation, causing inconvenience.
In view of the above problems, the present application proposes a voice communication method and a related apparatus. After a terminal receives a voice incoming call, if the terminal is occupied or the call is not answered before a timeout, the terminal can determine the optimal answering terminal from the other terminals with a call function in the home local area network, according to the speech capability parameters of each terminal (such as the speech capability priority m, the voice frequency n, the user voice energy value x, the user location value y, and the equipment state value s), and transfer the incoming call and the voice call to this answering terminal. In this way, by transferring incoming calls and voice calls between the call terminals in the home, the user is prevented from missing voice incoming calls on a call terminal, improving the user experience.
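Selecting an "optimal" answering terminal from the speech capability parameters named above can be sketched as scoring each candidate. The patent does not fix a combination formula, so the weighted linear score, the weights, and the terminal names below are purely illustrative assumptions; only the parameter names (m, n, x, y, s) come from the text.

```python
# Hedged sketch: rank candidate answering terminals by their speech capability
# parameters (m, n, x, y, s). The weighted linear score is an assumption for
# illustration; the patent does not specify a concrete formula.
def score(params: dict) -> float:
    weights = {"m": 5.0, "n": 1.0, "x": 2.0, "y": 3.0, "s": 4.0}
    return sum(weights[k] * params[k] for k in weights)

def pick_answering_terminal(terminals: dict) -> str:
    # Only idle terminals (equipment state value s == 1) are eligible.
    eligible = {name: p for name, p in terminals.items() if p["s"]}
    return max(eligible, key=lambda name: score(eligible[name]))

terminals = {
    "smart_tv":    {"m": 3, "n": 0.2, "x": 0.9, "y": 0.8, "s": 1},
    "smart_watch": {"m": 1, "n": 0.1, "x": 0.3, "y": 0.8, "s": 1},
    "speaker":     {"m": 2, "n": 0.5, "x": 0.7, "y": 0.1, "s": 0},  # occupied
}
print(pick_answering_terminal(terminals))  # prints "smart_tv" under these weights
```

The key design point the sketch captures is that the equipment state value s acts as a hard filter (an occupied terminal is never chosen), while the remaining parameters trade off against one another.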
A network architecture provided by the embodiments of the present application is described below.
Referring to Fig. 2, Fig. 2 shows a schematic diagram of a network architecture 200 provided in an embodiment of the present application. As shown in Fig. 2, the network architecture includes multiple terminals, which may include a smartphone 201, a smartwatch 202, a smart speaker 203, a PC 204, a smart TV 205, a tablet computer 206, and the like; the present application imposes no limitation in this respect.
The multiple terminals may have a call function, and may receive incoming calls and conduct calls in the following ways: 1. The multiple terminals can receive incoming calls or calls in the circuit switched (CS) domain of the mobile communication network. 2. The multiple terminals can receive incoming calls or calls based on VoLTE technology in the IP multimedia subsystem (IMS) of the mobile communication network. 3. The multiple terminals can receive incoming calls or calls based on VoIP technology over the Internet.
The multiple terminals may be connected to a local area network (LAN) in a wired manner or through wireless fidelity (Wi-Fi), for example with all of the terminals connected to the same router. For the structure of the terminals in the network architecture 200, reference may be made to the terminal 100 shown in Fig. 1 above; details are not described herein again.
In one possible implementation, the local area network of the network architecture 200 further includes a hub device 207. The hub device 207 can connect with the multiple terminals in the network architecture 200 (such as the terminal 201, terminal 202, terminal 203, terminal 204, terminal 205, and terminal 206). The hub device 207 may be a router, a gateway, a smart device controller, or the like. The hub device 207 may include a memory, a processor, and a transceiver. The memory may be used to store the respective speech capability parameters of the multiple terminals (such as the speech capability priority m, the voice frequency n, the voiceprint energy value x, the user location value y, and the equipment state value s). The processor may be used to determine the answering terminal from the respective speech capability parameters of the multiple terminals when a terminal connected to the local area network needs to transfer an incoming call. The transceiver may be used to communicate with the multiple terminals connected to the local area network.
In one possible implementation, the network architecture 200 further includes a server 208. The server 208 may be a server in a smart home cloud network; its quantity is not limited to one and may be multiple, without limitation here. The server 208 may include a memory, a processor, and a transceiver. The memory may be used to store the respective speech capability parameters of the multiple terminals (such as the speech capability priority m, the voice frequency n, the voiceprint energy value x, the user location value y, and the equipment state value s). The transceiver may be used to communicate with each terminal in the local area network. The processor may be used to process the data acquisition requests of each terminal in the local area network, and instruct the transceiver to deliver the respective speech capability parameters of the multiple terminals to each terminal.
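The parameter store that the hub device 207 or server 208 keeps can be sketched as a simple keyed record set. The record layout and method names below are assumptions for illustration; only the parameter names (m, n, x, y, s) come from the text above.

```python
# Minimal sketch of the parameter store a hub device / cloud server might keep.
# The record layout and method names are assumptions; only the parameter names
# (m, n, x, y, s) come from the patent text.
from dataclasses import dataclass

@dataclass
class SpeechCapability:
    terminal_id: str
    m: int      # speech capability priority
    n: float    # voice frequency (how often this terminal carries calls)
    x: float    # voiceprint energy value
    y: float    # user location value
    s: int      # equipment state value (1 = idle, 0 = occupied)

class HubStore:
    """In-memory store: terminals report parameters; peers fetch them all."""
    def __init__(self):
        self._records = {}

    def report(self, rec: SpeechCapability) -> None:
        self._records[rec.terminal_id] = rec  # latest report overwrites the old one

    def all_parameters(self) -> list:
        return list(self._records.values())   # served on a data acquisition request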
With reference to the network architecture 200 shown in Fig. 2 above and the application scenarios, a voice communication method provided by the embodiments of the present application is specifically explained below. In the embodiments of the present application, the terminal that receives the voice incoming call may be referred to as the first terminal, and the answering terminal may be referred to as the second terminal.
In some application scenarios, there are multiple terminals with a call function in the home local area network, such as a smartphone, a smartwatch, a smart speaker, a PC, a smart TV, and a smart tablet. The user can receive a voice incoming call through any terminal (for example the smartphone). When the user does not answer for a long time, or the terminal receiving the call is occupied (for example it is busy on another call), the user may miss the call, causing inconvenience. Therefore, the embodiments of the present application provide a voice communication method: after terminal 1 (for example a smartphone) receives an incoming call, when terminal 1 does not receive the user's answering operation before a timeout, or the terminal is occupied (for example busy on a call), terminal 1 can determine the answering terminal for the incoming call (for example a smart TV) according to the speech capability parameters of the other terminals (such as the speech capability priority m, the voice frequency n, the voiceprint energy value x, the user location value y, and the equipment state value s), and transfer the incoming call to the answering terminal (for example the smart TV). In this way, the user can be prevented from missing the incoming call on the terminal, improving the user experience.
Referring to Fig. 3, Fig. 3 shows a voice communication method provided in an embodiment of the present application. The local area network may include N terminals with a call function, where N is an integer greater than 2. The local area network here refers to a computer network in which the N terminals with a call function are all connected to one router. The multiple terminals with a call function in the local area network may be bound to the same user account. In the embodiment shown in Fig. 3, any terminal that receives an incoming call may be referred to as terminal 1. For example, when a smartphone receives an incoming call, the smartphone may be referred to as terminal 1; when a smart TV receives an incoming call, the smart TV may be referred to as terminal 1; no limitation is imposed here. As shown in Fig. 3, the method includes:
S301. Terminal 1 receives an incoming call.
The incoming call may be a voice incoming call. The terminal of the contact (the terminal initiating the call) may dial a voice call to terminal 1 through the CS domain of the mobile communication network, may dial a voice call based on VoLTE technology to terminal 1 through the IMS network of the mobile communication network, or may dial a voice call based on VoIP technology to terminal 1 through the Internet.
S302: Terminal 1 outputs an incoming-call alert.

Terminal 1 may output an incoming-call alert after receiving the call dialed by the contact's terminal. The alert may include at least one of the following: a ringtone alert, a vibration alert, and a caller identification alert (e.g., terminal 1 displays the contact's information on its screen).
S303: Terminal 1 determines whether the call transfer function is enabled. If so, in S304, terminal 1 establishes a connection with each of the other terminals in the local area network. If not, terminal 1 does not establish connections with the other terminals in the local area network.

Before receiving the contact's incoming call, terminal 1 may receive a setting input from the user; in response to the setting input, terminal 1 may enable or disable the call transfer function. In this way, terminal 1 can transfer received incoming calls according to the user's needs, improving the user experience. It should be understood that the call transfer function transfers an incoming call or an ongoing voice call on this terminal out to another terminal, which then outputs the call alert and/or captures the user's voice and plays the voice of the calling party (i.e., the contact). When terminal 1 has the call transfer function enabled and receives an incoming call that needs to be transferred out, the incoming call is transferred to another terminal, which outputs the call alert; when that terminal detects the user answering the call, terminal 1 transfers the voice call to it. Likewise, when terminal 1 has the call transfer function enabled and is in a voice call that needs to be transferred out, terminal 1 transfers the voice call to another terminal.
After terminal 1 determines that the call transfer function is enabled, terminal 1 may establish a connection with each of the other terminals in the local area network (terminal 2, terminal 3, ..., terminal N). The connection may be based on the TCP/IP protocol; over a TCP/IP connection, terminal 1 can transfer the incoming call (including the ringing and the voice call) to the connected device based on VoIP technology. The connection may also be a Wi-Fi Direct connection, or a connection established through a router; if both devices support Bluetooth, the connection may also be a Bluetooth connection, etc. In this way, after terminal 1 determines that the incoming call needs to be transferred, it can transfer the call to a connected device (e.g., terminal 2) in a timely manner, without the user waiting too long, reducing the delay introduced by the transfer.
In one possible implementation, if terminal 1 obtains the speech capability parameters of the other terminals from a hub device or a server, terminal 1 may establish a connection only with the answering device, after the answering device has been determined. This reduces the radio resources terminal 1 consumes in establishing connections.
S305: Terminal 1 determines whether the incoming call has timed out without being answered, or whether terminal 1 is occupied. If so, in S306, terminal 1 obtains the speech capability parameters of the other terminals in the local area network.

After terminal 1 establishes connections with the other terminals in the local area network, terminal 1 may first determine whether it is already occupied. If so, terminal 1 may obtain the speech capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N). If it is not occupied, terminal 1 may determine whether, after receiving the incoming call, more than a specified time threshold (e.g., 10 s) has elapsed without receiving the user's answer operation. If the threshold is exceeded without an answer, terminal 1 obtains the speech capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N). The speech capability parameters include: the speech capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s. Specifically:
1. The speech capability priority m indicates the call quality of a terminal: the better the terminal's call quality, the larger its speech capability priority. For example, if the speech capability priority m of terminal 1 is 1 and the speech capability priority m of terminal 2 is 0.5, the call quality of terminal 1 is better than that of terminal 2. The speech capability priority of a terminal is determined by its terminal type. Illustratively, the correspondence between terminal type and speech capability priority can be as shown in Table 1 below:

Table 1

Terminal type | Speech capability priority m |
Smartphone | 1 |
Tablet computer | 0.8 |
Smart speaker | 0.6 |
Smart TV | 0.5 |
Smartwatch | 0.4 |
Personal computer | 0.3 |

As can be seen from Table 1, the speech capability priority of a smartphone-type terminal may be 1, that of a tablet computer 0.8, that of a smart speaker 0.6, that of a smart TV 0.5, that of a smartwatch 0.4, and that of a personal computer 0.3. The content shown in Table 1 is only used to explain the present application and should not be construed as limiting.
2. The call frequency n indicates how frequently a terminal connects voice calls: the larger n is, the more frequently the terminal connects voice calls. The call frequency n may be the ratio of the terminal's own number of calls to the total number of calls of all terminals in the local area network. Any terminal in the local area network may periodically (e.g., the period may be one week, one month, etc.) send its number of calls to the other terminals; after receiving the respective call counts of the other terminals, a terminal can determine the total number of calls of all terminals in the local area network. Illustratively, suppose the local area network contains 6 terminals: a smartphone, a tablet computer, a smart speaker, a smart TV, a smartwatch, and a personal computer. The call counts and call frequencies n of these 6 terminals can be as shown in Table 2 below:
Table 2
Terminal | Number of calls | Call frequency n |
Smart phone | 80 | 0.4 |
Tablet computer | 20 | 0.1 |
Intelligent sound box | 20 | 0.1 |
Smart television | 60 | 0.3 |
Smartwatch | 10 | 0.05 |
PC | 10 | 0.05 |
As can be seen from Table 2, the smartphone has 80 calls, the tablet computer 20, the smart speaker 20, the smart TV 60, the smartwatch 10, and the personal computer 10, so the total number of calls of the 6 terminals in the local area network is 200. Therefore, the call frequency of the smartphone is 0.4, that of the tablet computer 0.1, that of the smart speaker 0.1, that of the smart TV 0.3, that of the smartwatch 0.05, and that of the personal computer 0.05. The content shown in Table 2 is only used to explain the present application and should not be construed as limiting.
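The call-frequency calculation above (each terminal's share of the LAN-wide total call count) can be sketched as follows; the function name is illustrative:

```python
def call_frequencies(call_counts):
    """Map each terminal's call count to its call frequency n:
    the ratio of its own count to the LAN-wide total."""
    total = sum(call_counts.values())
    if total == 0:
        return {name: 0.0 for name in call_counts}
    return {name: count / total for name, count in call_counts.items()}

# The Table 2 example: 6 terminals, 200 calls in total.
counts = {"smartphone": 80, "tablet": 20, "smart speaker": 20,
          "smart TV": 60, "smartwatch": 10, "PC": 10}
freqs = call_frequencies(counts)  # smartphone -> 0.4, smart TV -> 0.3, ...
```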
3. The voiceprint energy value x indicates the loudness of the user's voice as received by a terminal: the larger the voiceprint energy value, the closer the user is to that terminal. A terminal may capture sound in real time through its microphone and compute the user's voiceprint energy value from the captured sound. Illustratively, suppose the local area network contains 6 terminals: a smartphone, a tablet computer, a smart speaker, a smart TV, a smartwatch, and a personal computer. The voiceprint energy values x these 6 terminals capture can be as shown in Table 3 below:
Table 3

Terminal | Voiceprint energy value x |
Smartphone | 0.23 |
Tablet computer | 0 |
Smart speaker | 0.25 |
Smart TV | 0.55 |
Smartwatch | 0.5 |
Personal computer | 0 |

As can be seen from Table 3, the voiceprint energy value of the smartphone is 0.23, that of the tablet computer 0, that of the smart speaker 0.25, that of the smart TV 0.55, that of the smartwatch 0.5, and that of the personal computer 0. The content shown in Table 3 is only used to explain the present application and should not be construed as limiting.
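The application does not specify how a terminal derives x from the microphone signal. As one hedged sketch (an assumption for illustration, not the patent's defined method), the terminal could normalize the root-mean-square amplitude of recent 16-bit PCM samples to [0, 1]:

```python
import math

def voiceprint_energy(samples, full_scale=32768.0):
    """Hypothetical mapping from 16-bit PCM samples to an energy value in [0, 1]:
    RMS amplitude divided by full scale. Louder (closer) speech -> larger x."""
    if not samples:
        return 0.0
    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    return min(rms / full_scale, 1.0)
```

A real implementation would additionally verify that the captured voice matches the user's voiceprint before using it, which this sketch omits.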
4. The user location value y indicates the positional relationship between a terminal and the user. For example, when the user location value y is 1, the user is near the terminal; when y is 0, the terminal has not detected the user nearby. A terminal may obtain an image of its surroundings through a camera and detect from the image whether the user is at the same position as the terminal in the home floor plan (e.g., both in the bedroom); if so, the terminal's user location value y is 1; if not, it is 0. The camera may be the terminal's own camera, or an independent camera connected in the local area network. When the camera is an independent camera connected in the local area network, after obtaining the user's position (e.g., in the bedroom), it can send the user's position to each terminal in the local area network. When a terminal lacks the ability to obtain the user's position (e.g., its camera is damaged or it has no camera), its user location value y is 0.5. Illustratively, suppose the local area network contains 6 terminals: a smartphone, a tablet computer, a smart speaker, a smart TV, a smartwatch, and a personal computer. The smartphone may be in the master bedroom, the smart speaker in secondary bedroom 1, the tablet computer in secondary bedroom 2, and the smart TV and smartwatch in the living room. When the user is in the living room, the user location values y of these 6 terminals can be as shown in Table 4 below:
Table 4
Terminal | User location value y |
Smart phone | 0 |
Tablet computer | 0 |
Intelligent sound box | 0.5 |
Smart television | 1 |
Smartwatch | 1 |
PC | 0 |
As can be seen from Table 4, the user location value y of the smartphone is 0, indicating the user is not near the smartphone. The user location value y of the tablet computer is 0, indicating the user is not near the tablet computer. The user location value y of the smart speaker is 0.5, indicating the smart speaker cannot obtain the user's position. The user location value y of the smart TV is 1, indicating the user is near the smart TV. The user location value y of the smartwatch is 1, indicating the user is near the smartwatch. The user location value y of the personal computer is 0, indicating the user is not near the personal computer. The content shown in Table 4 is only used to explain the present application and should not be construed as limiting.
5. The device state value s indicates whether a terminal is currently available to answer a voice call. When the terminal is currently idle and can answer a voice call, its device state value s is 1. When the terminal is currently occupied, e.g., already in a voice call, its device state value s is 0.5. When the terminal is currently unavailable, e.g., its transfer-in function is turned off so that it does not accept calls transferred from other terminals, its device state value s is 0. A terminal may receive a user input to turn off the transfer-in function, and may also turn it off while in a call; no limitation is imposed here. Illustratively, suppose the local area network contains 6 terminals: a smartphone, a tablet computer, a smart speaker, a smart TV, a smartwatch, and a personal computer. The device state values s of these 6 terminals can be as shown in Table 5 below:
Table 5
Terminal | Equipment state value s |
Smart phone | 1 |
Tablet computer | 0.5 |
Intelligent sound box | 1 |
Smart television | 1 |
Smartwatch | 1 |
PC | 0 |
As can be seen from Table 5, the device state value s of the smartphone is 1, indicating the smartphone is currently available to answer a voice call. The device state value s of the tablet computer is 0.5, indicating the tablet computer is currently occupied (e.g., in a call). The device state value s of the smart speaker is 1, indicating the smart speaker is currently available to answer a voice call. The device state value s of the smart TV is 1, indicating the smart TV is currently available to answer a voice call. The device state value s of the smartwatch is 1, indicating the smartwatch is currently available to answer a voice call. The device state value s of the personal computer is 0, indicating the personal computer is not currently available to answer a voice call.
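The three-way state mapping above can be sketched as follows (the function and parameter names are illustrative, not defined by this application):

```python
def device_state_value(in_call, transfer_in_enabled):
    """Map a terminal's current state to the device state value s:
    unavailable (transfer-in disabled) -> 0, occupied (in a call) -> 0.5, idle -> 1."""
    if not transfer_in_enabled:
        return 0.0   # does not accept calls transferred from other terminals
    if in_call:
        return 0.5   # occupied, e.g. already in a voice call
    return 1.0       # idle and available to answer
```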
In one possible implementation, each of the other terminals stores its own speech capability parameters in its local memory. For example, the speech capability parameters of terminal 2 are stored in the local memory of terminal 2, those of terminal 3 in the local memory of terminal 3, ..., and those of terminal N in the local memory of terminal N. Terminal 1 may send a speech-capability-parameter acquisition instruction to the other terminals in the local area network (terminal 2, terminal 3, ..., terminal N); after receiving the acquisition instruction, the other terminals (terminal 2, terminal 3, ..., terminal N) may send their respective speech capability parameters to terminal 1.

In one possible implementation, the speech capability parameters of the other terminals are stored in a hub device in the local area network. Each terminal in the local area network may send its own speech capability parameters to the hub device 207. When terminal 1 determines that more than the specified time threshold has elapsed after receiving the incoming call without receiving the user's answer operation, or that terminal 1 is occupied (e.g., busy on another call), terminal 1 may send a speech-capability-parameter acquisition instruction to the hub device 207. After receiving the acquisition instruction, the hub device 207 may send the speech capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N) to terminal 1.

In one possible implementation, the speech capability parameters of the other terminals are stored on the server 208 of the smart home cloud. Each terminal in the local area network can connect to the server 208 and send its own speech capability parameters to the server 208. When terminal 1 determines that more than the specified time threshold has elapsed after receiving the incoming call without receiving the user's answer operation, or that terminal 1 is occupied (e.g., busy on another call), terminal 1 may send a speech-capability-parameter acquisition instruction to the server 208. After receiving the acquisition instruction, the server 208 may send the speech capability parameters of the other terminals (terminal 2, terminal 3, ..., terminal N) to terminal 1.
The following describes how terminal 1 determines the answering terminal to which the incoming call is transferred.
S307: Terminal 1 determines the answering terminal according to the speech capability parameters of the other terminals. The answering terminal is used to receive the incoming call transferred by terminal 1.

The speech capability parameters may include the speech capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s. For the descriptions of these parameters, refer to the foregoing embodiments; details are not repeated here.
In one possible implementation, terminal 1 may first filter out, from the other terminals, the devices currently able to take a call (i.e., the terminals whose device state value s is 1), e.g., terminal 2, terminal 3, ..., terminal N. Terminal 1 may then compare the voiceprint energy values x of these terminals. If, among the terminals currently able to take a call (terminal 2, terminal 3, ..., terminal N), there is only one terminal with the largest voiceprint energy value x (e.g., terminal 2), terminal 1 may determine that terminal as the answering terminal.

If, among the terminals currently able to take a call (terminal 2, terminal 3, ..., terminal N), there are multiple terminals with the largest voiceprint energy value x (e.g., terminals 2, 3, 4, and 5), terminal 1 may compare the user location values y of those terminals. If, among the terminals with the largest voiceprint energy value x (e.g., terminals 2, 3, 4, and 5), only one terminal has the largest user location value y (e.g., terminal 2), terminal 1 may determine that terminal, with the largest voiceprint energy value x and the largest user location value y, as the answering terminal.

If, among the terminals with the largest voiceprint energy value x (e.g., terminals 2, 3, 4, and 5), there are multiple terminals with the largest user location value y (e.g., terminals 2, 3, and 4), terminal 1 may compare the call frequencies n of those terminals. If, among them, only one terminal has the largest call frequency n (e.g., terminal 2), terminal 1 may determine that terminal, with the largest x, the largest y, and the largest n, as the answering terminal.

If, among the terminals with the largest voiceprint energy value x and the largest user location value y (e.g., terminals 2, 3, and 4), there are multiple terminals with the largest call frequency n (e.g., terminals 2 and 3), terminal 1 may compare the speech capability priorities m of those terminals. If, among them, only one terminal has the largest speech capability priority m (e.g., terminal 2), terminal 1 may determine that terminal, with the largest x, the largest y, the largest n, and the largest m, as the answering terminal.

If there are multiple terminals with the largest voiceprint energy value x, the largest user location value y, the largest call frequency n, and the largest speech capability priority m, terminal 1 may randomly select one of them as the answering terminal.
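The filtering-and-tie-breaking cascade above (keep only terminals with s = 1, then narrow by x, then y, then n, then m, with a random choice on a full tie) can be sketched as follows; the example parameter values are taken from this embodiment's Table 6, and all names are illustrative:

```python
import random

def pick_answering_terminal(terminals):
    """terminals: dict name -> dict with keys m, n, x, y, s.
    Keep terminals able to take the call (s == 1); then, for x, y, n, m in
    turn, keep only the terminals tied for the maximum; if several remain
    after all four comparisons, choose one at random."""
    candidates = [name for name, p in terminals.items() if p["s"] == 1]
    if not candidates:
        return None
    for key in ("x", "y", "n", "m"):
        best = max(terminals[name][key] for name in candidates)
        candidates = [name for name in candidates if terminals[name][key] == best]
        if len(candidates) == 1:
            return candidates[0]
    return random.choice(candidates)

# Three of the Table 6 terminals: the smart TV alone has the largest x.
lan = {
    "smartwatch":    {"m": 0.4, "n": 0.05, "x": 0.4,  "y": 1,   "s": 1},
    "smart speaker": {"m": 0.6, "n": 0.1,  "x": 0.25, "y": 0.5, "s": 1},
    "smart TV":      {"m": 0.5, "n": 0.3,  "x": 0.55, "y": 1,   "s": 1},
}
answer = pick_answering_terminal(lan)  # "smart TV"
```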
It is understood that terminal 1 can determine the user's position according to the user's voiceprint energy value x and/or user location value y. Terminal 1 may determine the answering terminal according to one or more of the user's position, the speech capability priority m, the call frequency n, and the device state value s. For example, terminal 1 may determine the answering terminal (e.g., terminal 2) according to the voiceprint energy value x alone, in which case the answering terminal is the terminal with the largest voiceprint energy value x among the terminals other than terminal 1. As another example, terminal 1 may determine the answering terminal (e.g., terminal 2) according to the voiceprint energy value x and the call frequency n, in which case the answering terminal is the terminal with the largest voiceprint energy value x and the largest call frequency n among the terminals other than terminal 1.
In one possible implementation, terminal 1 may calculate the transfer capability value V of each of the other terminals according to the following formula (1):

V = f(a*m, b*n, c*x, d*y) * s    Formula (1)

where m is the speech capability priority, n is the call frequency, x is the voiceprint energy value, y is the user location value, and s is the device state value; a is the weight of the speech capability priority, b is the weight of the call frequency, c is the weight of the voiceprint energy value, and d is the weight of the user location value. f(z1, z2, z3, z4) is an operation function; for example, f(z1, z2, z3, z4) may be a summation function, that is:

V = (a*m + b*n + c*x + d*y) * s    Formula (2)
After terminal 1 calculates the transfer capability value V of each of the other terminals (terminal 2, terminal 3, ..., terminal N), terminal 1 may take the terminal with the largest transfer capability value V (e.g., terminal 2) as the answering terminal.
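Formula (2), together with the subsequent selection of the terminal with the largest V, can be written directly as a short sketch; the default weights below are the illustrative values a = 0.1, b = 0.2, c = 0.5, d = 0.2 used later in this embodiment, and the function names are not from the application:

```python
def transfer_capability(m, n, x, y, s, a=0.1, b=0.2, c=0.5, d=0.2):
    """Formula (2): V = (a*m + b*n + c*x + d*y) * s."""
    return (a * m + b * n + c * x + d * y) * s

def answering_terminal(params):
    """params: dict name -> (m, n, x, y, s). Return the terminal with the largest V."""
    return max(params, key=lambda name: transfer_capability(*params[name]))
```

Note that multiplying by s means an unavailable terminal (s = 0) always scores V = 0 and is never chosen while any available terminal has a positive score.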
The larger the influence a speech capability parameter (the speech capability priority m, the call frequency n, the voiceprint energy value x, or the user location value y) has on the determination of the answering terminal, the larger its weight. For example, if, for determining the answering terminal, the voiceprint energy value x matters more than the user location value y, which matters more than the call frequency n, which matters more than the speech capability priority m, then c > d > b > a. In one possible implementation, the weights of the speech capability parameters are variable: any terminal in the local area network may receive an input operation from the user and reset the weights of the speech capability parameters.
The following illustrates how the answering terminal is determined from the speech capability parameters by means of formula (2) above.
Illustratively, as shown in Fig. 4, the home local area network may contain 6 call-capable terminals, e.g., a smartphone 201, a smartwatch 202, a smart speaker 203, a personal computer 204, a smart TV 205, and a tablet computer 206. The smartphone 201 is in the master bedroom of the home floor plan, the smartwatch 202 is in the living room, the smart speaker 203 is in secondary bedroom 1, the personal computer 204 is in the study, the smart TV 205 is in the living room, and the tablet computer 206 is in secondary bedroom 2. When the smartphone 201 receives an incoming call, the user is in the living room of the home floor plan, at the same position as the smartwatch 202 and the smart TV 205.
The speech capability parameters (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) of each terminal in the home local area network can be as shown in Table 6 below:

Table 6

Terminal | Speech capability priority m | Call frequency n | Voiceprint energy value x | User location value y | Device state value s |
Smartphone 201 | 1 | 0.4 | 0.23 | 0 | 0.5 |
Smartwatch 202 | 0.4 | 0.05 | 0.4 | 1 | 1 |
Smart speaker 203 | 0.6 | 0.1 | 0.25 | 0.5 | 1 |
Personal computer 204 | 0.3 | 0.05 | 0.2 | 0 | 0 |
Smart TV 205 | 0.5 | 0.3 | 0.55 | 1 | 1 |
Tablet computer 206 | 0.8 | 0.1 | 0.2 | 0 | 0.5 |
As can be seen from Table 6, the speech capability priority m1 of the smartphone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0.23, its user location value y1 is 0, and its device state value s1 is 0.5. For the smartwatch 202, m2 is 0.4, n2 is 0.05, x2 is 0.4, y2 is 1, and s2 is 1. For the smart speaker 203, m3 is 0.6, n3 is 0.1, x3 is 0.25, y3 is 0.5, and s3 is 1. For the personal computer 204, m4 is 0.3, n4 is 0.05, x4 is 0.2, y4 is 0, and s4 is 0. For the smart TV 205, m5 is 0.5, n5 is 0.3, x5 is 0.55, y5 is 1, and s5 is 1. For the tablet computer 206, m6 is 0.8, n6 is 0.1, x6 is 0.2, y6 is 0, and s6 is 0.5. Table 6 is only used to explain the present application and should not be construed as limiting.
Illustratively, the weight a corresponding to the speech capability priority m may be 0.1, the weight b corresponding to the call frequency n may be 0.2, the weight c corresponding to the voiceprint energy value x may be 0.5, and the weight d corresponding to the user location value y may be 0.2.
When smart phone 201 receives incoming call, after getting the speech capability parameter of other each terminals, smart phone 201
The switchover capability value V of other each terminals can be calculated by above-mentioned formula (2).For example, the switchover capability of smartwatch 202
Value V2It is 0.45, the switchover capability value V of intelligent sound box 2033It is 0.305, the switchover capability value V of PC 2044It is 0, intelligence
The switchover capability value V of TV 2055It is 0.585, the switchover capability value V of tablet computer 2066It is 0.1.Due to smart television 205
Switchover capability value V5It is 0.585, in other terminal (smartwatch 202, intelligent sound box 203, PCs 204, smart television
205 and tablet computer 206) in switchover capability value it is maximum, smart phone 201 can determine that smart television 205 is to answer terminal.
Above-mentioned example is used only for explaining the application, should not constitute restriction, any appliance during specific implementation, in family lan
Incoming call can be received, will not repeat them here.
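The numbers in this example can be checked by evaluating formula (2) over the Table 6 parameters with the weights a = 0.1, b = 0.2, c = 0.5, d = 0.2:

```python
# (m, n, x, y, s) per terminal, from Table 6.
table6 = {
    "smartwatch 202":    (0.4, 0.05, 0.4,  1,   1),
    "smart speaker 203": (0.6, 0.1,  0.25, 0.5, 1),
    "PC 204":            (0.3, 0.05, 0.2,  0,   0),
    "smart TV 205":      (0.5, 0.3,  0.55, 1,   1),
    "tablet 206":        (0.8, 0.1,  0.2,  0,   0.5),
}
a, b, c, d = 0.1, 0.2, 0.5, 0.2

# Formula (2): V = (a*m + b*n + c*x + d*y) * s for each terminal.
V = {name: (a*m + b*n + c*x + d*y) * s for name, (m, n, x, y, s) in table6.items()}
# V: smartwatch 0.45, smart speaker 0.305, PC 0, smart TV 0.585, tablet 0.1
answering = max(V, key=V.get)  # "smart TV 205"
```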
As another illustration, as shown in Fig. 5, the home local area network may contain 6 call-capable terminals, e.g., a smartphone 201, a smartwatch 202, a smart speaker 203, a personal computer 204, a smart TV 205, and a tablet computer 206. The smartphone 201 is in the master bedroom of the home floor plan, the smartwatch 202 is in the living room, the smart speaker 203 is in secondary bedroom 1, the personal computer 204 is in the study, the smart TV 205 is in the living room, and the tablet computer 206 is in secondary bedroom 2. When the smartphone 201 receives an incoming call, the user may be on the balcony of the home floor plan, at a position different from that of every terminal in the home local area network. In this case, none of the terminals in the home local area network can obtain a voiceprint energy value x or a user location value y.
The speech capability parameters (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) of each terminal in the home local area network can be as shown in Table 7 below:

Table 7

Terminal | Speech capability priority m | Call frequency n | Voiceprint energy value x | User location value y | Device state value s |
Smartphone 201 | 1 | 0.4 | 0 | 0 | 0.5 |
Smartwatch 202 | 0.4 | 0.05 | 0 | 0 | 1 |
Smart speaker 203 | 0.6 | 0.1 | 0 | 0 | 1 |
Personal computer 204 | 0.3 | 0.05 | 0 | 0 | 0 |
Smart TV 205 | 0.5 | 0.3 | 0 | 0 | 0.5 |
Tablet computer 206 | 0.8 | 0.1 | 0 | 0 | 0.5 |
As can be seen from Table 7, the speech capability priority m1 of the smartphone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0, its user location value y1 is 0, and its device state value s1 is 0.5. For the smartwatch 202, m2 is 0.4, n2 is 0.05, x2 is 0, y2 is 0, and s2 is 1. For the smart speaker 203, m3 is 0.6, n3 is 0.1, x3 is 0, y3 is 0, and s3 is 1. For the personal computer 204, m4 is 0.3, n4 is 0.05, x4 is 0, y4 is 0, and s4 is 0. For the smart TV 205, m5 is 0.5, n5 is 0.3, x5 is 0, y5 is 0, and s5 is 0.5. For the tablet computer 206, m6 is 0.8, n6 is 0.1, x6 is 0, y6 is 0, and s6 is 0.5. Table 7 is only used to explain the present application and should not be construed as limiting.
Illustratively, the weight a corresponding to the speech capability priority m may be 0.1, the weight b corresponding to the call frequency n may be 0.2, the weight c corresponding to the voiceprint energy value x may be 0.5, and the weight d corresponding to the user location value y may be 0.2.
When smart phone 201 receives incoming call, after getting the speech capability parameter of other each terminals, smart phone 201
The switchover capability value V of other each terminals can be calculated by above-mentioned formula (2).For example, the switchover capability of smartwatch 202
Value V2It is 0.05, the switchover capability value V of intelligent sound box 2033It is 0.08, the switchover capability value V of PC 2044It is 0, intelligence electricity
Depending on 205 switchover capability value V5It is 0.055, the switchover capability value V of tablet computer 2066It is 0.05.Due to turning for intelligent sound box 203
Meet ability value V3It is 0.08, in other terminals (smartwatch 202, intelligent sound box 203, PC 204,205 and of smart television
Tablet computer 206) in switchover capability value it is maximum, smart phone 201 can determine that intelligent sound box 203 is to answer terminal.It is above-mentioned
Example is used only for explaining the application, should not constitute restriction, during specific implementation, any appliance in family lan all may be used
To receive incoming call, will not repeat them here.
In one possible implementation, in step S307 above, before determining the answering terminal according to the speech capability parameters of the other terminals, terminal 1 may receive an input operation from the user (e.g., a voice input), and terminal 1 then transfers the received incoming call to the designated terminal in the local area network. Illustratively, as shown in Fig. 6, the home local area network may contain 6 call-capable terminals, e.g., a smartphone 201, a smartwatch 202, a smart speaker 203, a personal computer 204, a smart TV 205, and a tablet computer 206. The smartphone 201 is in the master bedroom of the home floor plan, the smartwatch 202 is in the living room, the smart speaker 203 is in secondary bedroom 1, the personal computer 204 is in the study, the smart TV 205 is in the living room, and the tablet computer 206 is in secondary bedroom 2. When the smartphone 201 receives an incoming call, the smartphone 201 may receive the user's voice input (e.g., "Xiaoyi, Xiaoyi, transfer to the living-room TV"); in response to the voice input, the smartphone 201 can transfer the received incoming call to the smart TV 205 in the living room. The above example is only used to explain the present application and should not be construed as limiting.
In one possible implementation, steps S306 and S307 above may be executed by a hub device or a server. That is, the hub device or server can obtain the speech capability parameters of each terminal in the LAN and determine the answering terminal according to the speech capability parameters of the terminals other than terminal 1. Then, the hub device or server can send the identifier of the answering terminal (for example, the IP address of terminal 2) to terminal 1.
The following describes the process by which terminal 1 transfers a received incoming call to the answering terminal (terminal 2).
S308, terminal 1 sends an incoming-call instruction to terminal 2.
Since terminal 1 and the other terminals are all connected through the LAN, terminal 1 can send the incoming-call instruction to terminal 2 through the LAN.
S309, terminal 2 outputs an incoming-call alert.
After receiving the incoming-call instruction sent by terminal 1, terminal 2 can output an incoming-call alert in response to the instruction. The incoming-call alert output by terminal 2 may include at least one of: a ringtone alert, a vibration alert, and an incoming-call display alert (for example, terminal 2 displaying the contact's information on its display screen).
In one possible implementation, in order to prevent the same contact's incoming call from triggering alerts on two terminals at the same time, after receiving the incoming-call instruction sent by terminal 1, terminal 2 can return a ringing confirmation message to terminal 1, and terminal 1 can stop outputting its own incoming-call alert after receiving the ringing confirmation message.
S310, terminal 2 receives the user's answering operation. In response to the answering operation, S311, terminal 2 returns an answering confirmation message to terminal 1.
After terminal 2 outputs the incoming-call alert, terminal 2 can receive the user's answering operation (for example, tapping the answer button displayed on the screen of terminal 2, or pressing a physical answer button on terminal 2). In response to the answering operation, terminal 2 can return an answering confirmation message to terminal 1. In response to the answering confirmation message, terminal 1 can transfer the voice call to terminal 2.
The following specifically describes the process by which, after receiving the answering confirmation message returned by terminal 2, terminal 1 transfers the voice call to the answering terminal (terminal 2).
S312, terminal 1 receives the contact's voice data.
Wherein:
1. CS-domain voice call: the contact's terminal can collect the contact's voice, establish a call connection with terminal 1 through the CS domain of the mobile communication network, and send the voice signal to terminal 1.
2. VoLTE voice call: the contact's terminal can collect the contact's voice, perform compression encoding on it through a voice compression algorithm to generate the contact's voice data, package the voice data into voice data packets, and send the contact's voice data packets to terminal 1 through the IMS in the mobile communication network.
3. VoIP voice call: the contact's terminal can collect the contact's voice, perform compression encoding on it through a voice compression algorithm to generate the contact's voice data, package the voice data into voice data packets through the IP protocol and related protocols, and send the contact's voice data packets to terminal 1 through the Internet.
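In the VoIP case, the relay that terminal 1 performs in the later steps amounts to receiving the contact's voice data packets and re-sending them unchanged over the LAN. A minimal loopback sketch of that forwarding using plain UDP sockets follows; real VoIP stacks would typically frame the payload with RTP, and the addresses and payload here are purely illustrative:

```python
import socket

def relay_once(recv_sock, dest_addr, bufsize=2048):
    """Receive one voice data packet and forward it unchanged to dest_addr."""
    data, _src = recv_sock.recvfrom(bufsize)
    recv_sock.sendto(data, dest_addr)
    return data

# Loopback demo: "terminal 1" relays a packet from the contact to "terminal 2".
t1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t1.bind(("127.0.0.1", 0))         # terminal 1's LAN-facing socket
t2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t2.bind(("127.0.0.1", 0))         # answering terminal's socket

contact = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
contact.sendto(b"compressed-voice-frame", t1.getsockname())
relay_once(t1, t2.getsockname())  # terminal 1 forwards over the "LAN"
received = t2.recvfrom(2048)[0]
print(received)                   # b'compressed-voice-frame'
```

The same forwarding shape covers step S316 in the reverse direction, with terminal 2 sending and terminal 1 relaying toward the contact.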
S313, terminal 1 sends the contact's voice data to terminal 2.
Wherein, when the voice call transferred by terminal 1 is a CS-domain voice call, after receiving the contact's voice signal, terminal 1 can perform compression encoding on the voice signal through a voice compression algorithm to generate the contact's voice data, and package the voice data into voice data packets through the IP protocol and related protocols. Then, terminal 1 sends the contact's voice data packets to terminal 2 through the LAN.
When the voice call transferred by terminal 1 is a VoLTE or VoIP voice call, after receiving the contact's voice data packets, terminal 1 can forward them to terminal 2 through the LAN.
S314, after receiving the contact's voice data, terminal 2 plays it.
After receiving the contact's voice data packets sent by terminal 1, terminal 2 can extract the contact's voice data from the packets and play it.
S315, terminal 2 collects sound through its microphone and generates the user's voice data.
S316, terminal 2 sends the user's voice data to terminal 1.
After terminal 2 returns the answering confirmation message to terminal 1 in step S311, terminal 2 can continuously collect the user's voice and the ambient sound through its microphone. Terminal 2 can perform compression encoding on the collected sound (including the user's voice and the ambient sound) through a voice compression algorithm to generate the user's voice data, and package the user's voice data into voice data packets. Then, terminal 2 sends the user's voice data packets to terminal 1 through the LAN.
S317, after receiving the user's voice data sent by terminal 2, terminal 1 sends the user's voice data to the contact's terminal.
Wherein:
When the voice call transferred by terminal 1 is a CS-domain voice call, after receiving the user's voice data sent by terminal 2, terminal 1 converts the user's voice data into the user's voice signal and sends it to the contact's terminal through the CS domain of the mobile communication network. After receiving the user's voice signal sent by terminal 1, the contact's terminal can parse the user's voice from the signal and play it.
When the voice call transferred by terminal 1 is a VoLTE voice call, after receiving the user's voice data sent by terminal 2, terminal 1 can forward the user's voice data to the contact's terminal through the IMS. After receiving the user's voice data, the contact's terminal can play it.
When the voice call transferred by terminal 1 is a VoIP voice call, after receiving the user's voice data sent by terminal 2, terminal 1 can forward the user's voice data to the contact's terminal through the Internet. After receiving the user's voice data, the contact's terminal can play it.
There is no fixed execution order between steps S312-S314 and steps S315-S317; the same applies in the following embodiments.
In some possible implementations, the data in steps S313 and S316 may be forwarded via a hub device or a server.
In some application scenarios, there are multiple terminals with call capability in the home LAN, such as a smart phone, a smartwatch, a smart speaker, a PC, a smart television, and a tablet. The user carries out a voice call with a contact through any one of these terminals (for example, the smart phone). Since many terminals in the home LAN have poor mobility and cannot move with the user, the voice call quality of a terminal deteriorates when the user walks around at home. Therefore, an embodiment of the present application provides a voice communication method: when the user is on a call with a contact through terminal 1 (for example, a smart phone), terminal 1 can determine the answering terminal of the voice call (for example, a smart television) according to the speech capability parameters of the other terminals (for example, speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s, etc.), and transfer the voice call to the answering terminal (for example, the smart television). In this way, the call quality between the user and the contact can be maintained while the user moves around indoors.
Please refer to Fig. 7, which shows a voice communication method provided in an embodiment of the present application. The LAN may include N terminals with call capability, where N is an integer greater than 2. In the embodiment shown in Fig. 7, any terminal that is on a call can be referred to as terminal 1; for example, when the user is on a call with a contact using a smart phone, the smart phone can be referred to as terminal 1, and when the user is on a call with a contact using a smart television, the smart television can be referred to as terminal 1. No limitation is imposed here. As shown in Fig. 7, the method includes:
S701, terminal 1 establishes connections with the other terminals in the LAN.
Terminal 1 can establish a connection with each of the other terminals in the LAN (terminal 2, terminal 3, ..., terminal N). The connection may be a TCP/IP-based connection, under which terminal 1 can transfer incoming and ongoing calls to the connected device based on VoIP technology. The connection may also be Wi-Fi Direct, or a connection established through a router. If the two devices support Bluetooth, the connection may also be a Bluetooth connection.
In one possible implementation, if terminal 1 obtains the speech capability parameters of the other terminals from a hub device or a server, terminal 1 may establish a connection with the answering device only after the answering device has been determined. In this way, terminal 1 only needs to establish a connection with that one device.
S702, terminal 1 carries out a voice call with the contact's terminal.
The voice call between terminal 1 and the contact's terminal may be the CS-domain voice call described above, the VoLTE voice call described above, or the VoIP voice call described above, which is not described again here.
S703, terminal 1 receives the user's call transfer operation.
The call transfer operation may be the user tapping a call-switching control on the display screen of terminal 1, or a voice command input by the user (for example, "Xiaoyi Xiaoyi, let the call follow me"), etc.
The following describes how terminal 1 determines the answering terminal. Steps S704 to S705 below may be executed periodically (for example, every 2 seconds).
S704, in response to the user's call transfer operation, terminal 1 obtains the speech capability parameters of each terminal in the LAN.
The speech capability parameters include: speech capability priority m, call frequency n, voiceprint energy value x, user location value y, and device state value s. For details, refer to step S306 in the embodiment shown in Fig. 3, which is not described again here.
S705, terminal 1 determines the answering terminal according to the speech capability parameters of each terminal. The answering terminal is used to answer the call carried out with the contact's terminal.
When terminal 1 determines that the answering terminal is terminal 1 itself, terminal 1 does not need to transfer the call. When the answering terminal determined by terminal 1 is another terminal in the LAN, terminal 1 can transfer the call carried out with the contact's terminal to the answering terminal; in this case, terminal 1 relays the voice data between the contact's terminal and the answering terminal. For the detailed process by which terminal 1 determines the answering terminal, refer to step S307 in the embodiment of Fig. 3, which is not described again here.
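Because steps S704-S705 may repeat every few seconds, the answering terminal can track the user as the parameters change. The polling-loop sketch below illustrates this; the callback names and the reconstruction of formula (2) as V = s·(0.1·m + 0.2·n + 0.5·x + 0.2·y) are assumptions made for illustration:

```python
import time

def pick_answering_terminal(params, weights=(0.1, 0.2, 0.5, 0.2)):
    """params: {terminal_id: (m, n, x, y, s)}. Returns the id with the
    largest switchover capability value V = s*(a*m + b*n + c*x + d*y)."""
    a, b, c, d = weights
    def v(p):
        m, n, x, y, s = p
        return s * (a*m + b*n + c*x + d*y)
    return max(params, key=lambda t: v(params[t]))

def transfer_loop(get_params, transfer, period_s=2, rounds=3):
    """Every period_s seconds, re-score the terminals and invoke the
    (hypothetical) transfer callback whenever the best terminal changes."""
    current = None
    for _ in range(rounds):
        best = pick_answering_terminal(get_params())
        if best != current:
            transfer(best)
            current = best
        time.sleep(period_s)

# Deterministic demo with two parameter snapshots (Table 8 then Table 9
# values for the phone and the watch): the call follows the user.
snapshots = iter([
    {"phone": (1, 0.4, 0.6, 1, 1), "watch": (0.4, 0.05, 0.55, 1, 1)},
    {"phone": (1, 0.4, 0.3, 0, 1), "watch": (0.4, 0.05, 0.6, 1, 1)},
])
moves = []
transfer_loop(lambda: next(snapshots), moves.append, period_s=0, rounds=2)
print(moves)  # ['phone', 'watch']
```

Transferring only when the best terminal changes avoids re-establishing the relay on every polling round.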
Illustratively, as shown in Fig. 8A, there may be 6 terminals with call capability in the home LAN, for example smart phone 201, smartwatch 202, smart speaker 203, PC 204, smart television 205, and tablet computer 206. Smart phone 201 is located at the master bedroom of the home floor plan, smart speaker 203 at secondary bedroom 1, PC 204 at the study, smart television 205 at the living room, and tablet computer 206 at secondary bedroom 2. When the user is on a call with a contact through smart phone 201, the user may be wearing smartwatch 202. Since the user carries smartwatch 202, smartwatch 202 can detect that the user's position coincides with its own, i.e., the user location value y2 of smartwatch 202 is always 1.
The speech capability parameters of each terminal in the home LAN (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) can be as shown in Table 8 below:
Table 8
As can be seen from Table 8 above, the speech capability priority m1 of smart phone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0.6, its user location value y1 is 1, and its device state value s1 is 1. The speech capability priority m2 of smartwatch 202 is 0.4, its call frequency n2 is 0.05, its voiceprint energy value x2 is 0.55, its user location value y2 is 1, and its device state value s2 is 1. The speech capability priority m3 of smart speaker 203 is 0.6, its call frequency n3 is 0.1, its voiceprint energy value x3 is 0.35, its user location value y3 is 0, and its device state value s3 is 1. The speech capability priority m4 of PC 204 is 0.3, its call frequency n4 is 0.05, its voiceprint energy value x4 is 0.35, its user location value y4 is 0, and its device state value s4 is 0. The speech capability priority m5 of smart television 205 is 0.5, its call frequency n5 is 0.3, its voiceprint energy value x5 is 0.2, its user location value y5 is 0, and its device state value s5 is 1. The speech capability priority m6 of tablet computer 206 is 0.8, its call frequency n6 is 0.1, its voiceprint energy value x6 is 0.2, its user location value y6 is 0, and its device state value s6 is 1. Table 8 above is only used to explain the present application and shall not constitute a limitation.
Illustratively, the weight a corresponding to the speech capability priority m may be 0.1, the weight b corresponding to the call frequency n may be 0.2, the weight c corresponding to the voiceprint energy value x may be 0.5, and the weight d corresponding to the user location value y may be 0.2.
Smart phone 201 can calculate the switchover capability value V of each terminal by formula (2) above. For example, the switchover capability value V1 of smart phone 201 is 0.68, V2 of smartwatch 202 is 0.525, V3 of smart speaker 203 is 0.255, V4 of PC 204 is 0, V5 of smart television 205 is 0.21, and V6 of tablet computer 206 is 0.2. Since the switchover capability value V1 of smart phone 201, 0.68, is the largest among all the terminals (smart phone 201, smartwatch 202, smart speaker 203, PC 204, smart television 205, and tablet computer 206), smart phone 201 can determine that smart phone 201 itself is the answering terminal. The above example is only used to explain the present application and shall not constitute a limitation.
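The Table 8 figures can be checked mechanically against the same gated-weighted-sum reading of formula (2); the dictionary keys and the reconstructed formula below are assumptions inferred from the worked numbers, not quoted from the application:

```python
# Table 8 parameters as (m, n, x, y, s); terminal names are illustrative.
table8 = {
    "smart phone 201":   (1.0, 0.4,  0.6,  1, 1),
    "smartwatch 202":    (0.4, 0.05, 0.55, 1, 1),
    "smart speaker 203": (0.6, 0.1,  0.35, 0, 1),
    "PC 204":            (0.3, 0.05, 0.35, 0, 0),
    "smart TV 205":      (0.5, 0.3,  0.2,  0, 1),
    "tablet 206":        (0.8, 0.1,  0.2,  0, 1),
}

def v(m, n, x, y, s):
    # Reconstruction of formula (2) with weights a=0.1, b=0.2, c=0.5, d=0.2.
    return s * (0.1*m + 0.2*n + 0.5*x + 0.2*y)

scores = {name: round(v(*p), 3) for name, p in table8.items()}
best = max(scores, key=scores.get)
print(scores["smart phone 201"], best)  # 0.68 smart phone 201
```

Every value matches the text (0.68, 0.525, 0.255, 0, 0.21, 0.2), and the argmax reproduces the conclusion that the phone itself remains the answering terminal.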
As shown in Fig. 8B, after smart phone 201 receives the user's call transfer operation, the user, carrying smartwatch 202, may walk to the corridor between the study and the secondary bedrooms. At this point, the speech capability parameters of each terminal in the home LAN (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) can be as shown in Table 9 below:
Table 9
As can be seen from Table 9 above, the speech capability priority m1 of smart phone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0.3, its user location value y1 is 0, and its device state value s1 is 1. The speech capability priority m2 of smartwatch 202 is 0.4, its call frequency n2 is 0.05, its voiceprint energy value x2 is 0.6, its user location value y2 is 1, and its device state value s2 is 1. The speech capability priority m3 of smart speaker 203 is 0.6, its call frequency n3 is 0.1, its voiceprint energy value x3 is 0.35, its user location value y3 is 0, and its device state value s3 is 1. The speech capability priority m4 of PC 204 is 0.3, its call frequency n4 is 0.05, its voiceprint energy value x4 is 0.35, its user location value y4 is 0, and its device state value s4 is 0. The speech capability priority m5 of smart television 205 is 0.5, its call frequency n5 is 0.3, its voiceprint energy value x5 is 0.2, its user location value y5 is 0, and its device state value s5 is 1. The speech capability priority m6 of tablet computer 206 is 0.8, its call frequency n6 is 0.1, its voiceprint energy value x6 is 0.2, its user location value y6 is 0, and its device state value s6 is 1. Table 9 above is only used to explain the present application and shall not constitute a limitation.
Illustratively, the weight a corresponding to the speech capability priority m may be 0.1, the weight b corresponding to the call frequency n may be 0.2, the weight c corresponding to the voiceprint energy value x may be 0.5, and the weight d corresponding to the user location value y may be 0.2.
After obtaining the speech capability parameters of the other terminals, smart phone 201 can calculate the switchover capability value V of each terminal by formula (2) above. For example, the switchover capability value V1 of smart phone 201 is 0.33, V2 of smartwatch 202 is 0.55, V3 of smart speaker 203 is 0.255, V4 of PC 204 is 0, V5 of smart television 205 is 0.21, and V6 of tablet computer 206 is 0.2. Since the switchover capability value V2 of smartwatch 202, 0.55, is the largest among all the terminals in the LAN (smart phone 201, smartwatch 202, smart speaker 203, PC 204, smart television 205, and tablet computer 206), smart phone 201 can determine that smartwatch 202 is the answering terminal. Then, smart phone 201 can transfer the call to smartwatch 202. The above example is only used to explain the present application and shall not constitute a limitation.
As shown in Fig. 8C, after smart phone 201 receives the user's call transfer operation, the user, carrying smartwatch 202, may walk to the living room. The user, smartwatch 202, and smart television 205 are then in the living room together, and the smart television 205 in the living room can obtain the user's position, i.e., the user location value y5 of smart television 205 is 1. At this point, the speech capability parameters of each terminal in the home LAN (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) can be as shown in Table 10 below:
Table 10
As can be seen from Table 10 above, the speech capability priority m1 of smart phone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0.3, its user location value y1 is 0, and its device state value s1 is 1. The speech capability priority m2 of smartwatch 202 is 0.4, its call frequency n2 is 0.05, its voiceprint energy value x2 is 0.6, its user location value y2 is 1, and its device state value s2 is 1. The speech capability priority m3 of smart speaker 203 is 0.6, its call frequency n3 is 0.1, its voiceprint energy value x3 is 0.35, its user location value y3 is 0, and its device state value s3 is 1. The speech capability priority m4 of PC 204 is 0.3, its call frequency n4 is 0.05, its voiceprint energy value x4 is 0.35, its user location value y4 is 0, and its device state value s4 is 0. The speech capability priority m5 of smart television 205 is 0.5, its call frequency n5 is 0.3, its voiceprint energy value x5 is 0.6, its user location value y5 is 1, and its device state value s5 is 1. The speech capability priority m6 of tablet computer 206 is 0.8, its call frequency n6 is 0.1, its voiceprint energy value x6 is 0.2, its user location value y6 is 0, and its device state value s6 is 1. Table 10 above is only used to explain the present application and shall not constitute a limitation.
Illustratively, the weight a corresponding to the speech capability priority m may be 0.1, the weight b corresponding to the call frequency n may be 0.2, the weight c corresponding to the voiceprint energy value x may be 0.5, and the weight d corresponding to the user location value y may be 0.2.
After obtaining the speech capability parameters of the other terminals, smart phone 201 can calculate the switchover capability value V of each terminal by formula (2) above. For example, the switchover capability value V1 of smart phone 201 is 0.33, V2 of smartwatch 202 is 0.55, V3 of smart speaker 203 is 0.255, V4 of PC 204 is 0, V5 of smart television 205 is 0.61, and V6 of tablet computer 206 is 0.2. Since the switchover capability value V5 of smart television 205, 0.61, is the largest among all the terminals in the LAN (smart phone 201, smartwatch 202, smart speaker 203, PC 204, smart television 205, and tablet computer 206), smart phone 201 can determine that smart television 205 is the answering terminal. Then, smart phone 201 can transfer the call to smart television 205. The above example is only used to explain the present application and shall not constitute a limitation.
Illustratively, as shown in Fig. 9A, there may be 6 terminals with call capability in the home LAN, for example smart phone 201, smartwatch 202, smart speaker 203, PC 204, smart television 205, and tablet computer 206. Smart phone 201 is located at the master bedroom of the home floor plan, smartwatch 202 at the living room, smart speaker 203 at secondary bedroom 1, PC 204 at the study, smart television 205 at the living room, and tablet computer 206 at secondary bedroom 2. When the user is on a call with a contact through smart phone 201, the user location value y1 of smart phone 201 is 1.
The speech capability parameters of each terminal in the home LAN (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) can be as shown in Table 11 below:
Table 11
As can be seen from Table 11 above, the speech capability priority m1 of smart phone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0.6, its user location value y1 is 1, and its device state value s1 is 1. The speech capability priority m2 of smartwatch 202 is 0.4, its call frequency n2 is 0.05, its voiceprint energy value x2 is 0.2, its user location value y2 is 0, and its device state value s2 is 1. The speech capability priority m3 of smart speaker 203 is 0.6, its call frequency n3 is 0.1, its voiceprint energy value x3 is 0.35, its user location value y3 is 0, and its device state value s3 is 1. The speech capability priority m4 of PC 204 is 0.3, its call frequency n4 is 0.05, its voiceprint energy value x4 is 0.35, its user location value y4 is 0, and its device state value s4 is 0. The speech capability priority m5 of smart television 205 is 0.5, its call frequency n5 is 0.3, its voiceprint energy value x5 is 0.2, its user location value y5 is 0, and its device state value s5 is 1. The speech capability priority m6 of tablet computer 206 is 0.8, its call frequency n6 is 0.1, its voiceprint energy value x6 is 0.2, its user location value y6 is 0, and its device state value s6 is 1. Table 11 above is only used to explain the present application and shall not constitute a limitation.
Illustratively, the weight a corresponding to the speech capability priority m may be 0.1, the weight b corresponding to the call frequency n may be 0.2, the weight c corresponding to the voiceprint energy value x may be 0.5, and the weight d corresponding to the user location value y may be 0.2.
Smart phone 201 can calculate the switchover capability value V of each terminal by formula (2) above. For example, the switchover capability value V1 of smart phone 201 is 0.68, V2 of smartwatch 202 is 0.15, V3 of smart speaker 203 is 0.255, V4 of PC 204 is 0, V5 of smart television 205 is 0.21, and V6 of tablet computer 206 is 0.2. Since the switchover capability value V1 of smart phone 201, 0.68, is the largest among all the terminals (smart phone 201, smartwatch 202, smart speaker 203, PC 204, smart television 205, and tablet computer 206), smart phone 201 can determine that smart phone 201 itself is the answering terminal. The above example is only used to explain the present application and shall not constitute a limitation.
As shown in Fig. 9B, after receiving the user's call transfer operation, the user carrying smart phone 201 may walk to the corridor between the study and the secondary bedrooms. At this point, none of the terminals in the home LAN can obtain the user's position, i.e., the user location value y of every terminal is 0. The speech capability parameters of each terminal in the home LAN (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, device state value s) can be as shown in Table 12 below:
Table 12
As can be seen from Table 12 above, the speech capability priority m1 of smart phone 201 is 1, its call frequency n1 is 0.4, its voiceprint energy value x1 is 0.2, its user location value y1 is 0, and its device state value s1 is 1. The speech capability priority m2 of smartwatch 202 is 0.4, its call frequency n2 is 0.05, its voiceprint energy value x2 is 0.2, its user location value y2 is 0, and its device state value s2 is 1. The speech capability priority m3 of smart speaker 203 is 0.6, its call frequency n3 is 0.1, its voiceprint energy value x3 is 0.45, its user location value y3 is 0, and its device state value s3 is 1. The speech capability priority m4 of PC 204 is 0.3, its call frequency n4 is 0.05, its voiceprint energy value x4 is 0.45, its user location value y4 is 0, and its device state value s4 is 0. The speech capability priority m5 of smart television 205 is 0.5, its call frequency n5 is 0.3, its voiceprint energy value x5 is 0.2, its user location value y5 is 0, and its device state value s5 is 1. The speech capability priority m6 of tablet computer 206 is 0.8, its call frequency n6 is 0.1, its voiceprint energy value x6 is 0.2, its user location value y6 is 0, and its device state value s6 is 1. Table 12 above is only used to explain the present application and shall not constitute a limitation.
Illustratively, the weight a corresponding to the speech capability priority m can be 0.1, the weight b corresponding to the call frequency n can be 0.2, the weight c corresponding to the voiceprint energy value x can be 0.5, and the weight d corresponding to the user location value y can be 0.2.
The smart phone 201 can calculate the switchover capability value V of each terminal by the above formula (2). For example, the switchover capability value V1 of the smart phone 201 is 0.28, the switchover capability value V2 of the smartwatch 202 is 0.15, the switchover capability value V3 of the intelligent sound box 203 is 0.305, the switchover capability value V4 of the PC 204 is 0, the switchover capability value V5 of the smart television 205 is 0.21, and the switchover capability value V6 of the tablet computer 206 is 0.2. Since the switchover capability value V3 of the intelligent sound box 203, 0.305, is the largest among the terminals (the smart phone 201, the smartwatch 202, the intelligent sound box 203, the PC 204, the smart television 205, and the tablet computer 206), the smart phone 201 can determine that the intelligent sound box 203 is the answering terminal. The smart phone 201 can then transfer the voice call to the intelligent sound box 203. The above example is merely used to explain the present application and should not constitute a limitation.
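Formula (2) itself is given earlier in the application; judging from the values reproduced above, it behaves as a weighted sum of the parameters gated by the device state s, i.e., V = s × (a·m + b·n + c·x + d·y). A minimal sketch under that assumption (function and variable names are hypothetical; the weights are the illustrative ones above):

```python
def switchover_value(m, n, x, y, s, a=0.1, b=0.2, c=0.5, d=0.2):
    """Weighted switchover capability value, gated by device state s
    (s = 0 means the device is unavailable, so V = 0)."""
    return s * (a * m + b * n + c * x + d * y)

# Parameter values from Table 12 (user in the corridor, y = 0 everywhere)
terminals = {
    "smart phone 201":           (1.0, 0.40, 0.20, 0, 1),
    "smartwatch 202":            (0.4, 0.05, 0.20, 0, 1),
    "intelligent sound box 203": (0.6, 0.10, 0.45, 0, 1),
    "PC 204":                    (0.3, 0.05, 0.45, 0, 0),
    "smart television 205":      (0.5, 0.30, 0.20, 0, 1),
    "tablet computer 206":       (0.8, 0.10, 0.20, 0, 1),
}
values = {name: switchover_value(*p) for name, p in terminals.items()}
answering = max(values, key=values.get)  # intelligent sound box 203, V = 0.305
```

Plugging in Table 12 reproduces the values quoted in the text (0.28, 0.15, 0.305, 0, 0.21, 0.2), with the intelligent sound box 203 selected as the answering terminal.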
As shown in Figure 9C, while the intelligent sound box 203 is answering the call, the user may walk into the living room. The user is then in the living room together with the smartwatch 202 and the smart television 205, and the smart television 205 in the living room can obtain the position of the user, i.e., the user location value y2 of the smartwatch 202 is 1 and the user location value y5 of the smart television 205 is 1. At this point, the speech capability parameters (speech capability priority m, call frequency n, voiceprint energy value x, user location value y, and device state value s) of each terminal in the home local area network can be as shown in Table 13 below:
Table 13

    Terminal                     m      n      x      y    s
    Smart phone 201              1      0.4    0.2    0    1
    Smartwatch 202               0.4    0.05   0.5    1    1
    Intelligent sound box 203    0.6    0.1    0.3    0    1
    PC 204                       0.3    0.05   0.3    0    0
    Smart television 205         0.5    0.3    0.6    1    1
    Tablet computer 206          0.8    0.1    0.35   0    1

As can be seen from Table 13 above, each terminal's speech capability priority m, call frequency n, voiceprint energy value x, user location value y, and device state value s take the values listed. Table 13 above is merely used to explain the present application and should not constitute a limitation.
Illustratively, the weight a corresponding to the speech capability priority m can be 0.1, the weight b corresponding to the call frequency n can be 0.2, the weight c corresponding to the voiceprint energy value x can be 0.5, and the weight d corresponding to the user location value y can be 0.2.
The smart phone 201 can calculate the switchover capability value V of each terminal by the above formula (2). Here, the switchover capability value V1 of the smart phone 201 is 0.28, the switchover capability value V2 of the smartwatch 202 is 0.5, the switchover capability value V3 of the intelligent sound box 203 is 0.23, the switchover capability value V4 of the PC 204 is 0, the switchover capability value V5 of the smart television 205 is 0.61, and the switchover capability value V6 of the tablet computer 206 is 0.275. Since the switchover capability value V5 of the smart television 205, 0.61, is the largest among the terminals (the smart phone 201, the smartwatch 202, the intelligent sound box 203, the PC 204, the smart television 205, and the tablet computer 206), the smart phone 201 can determine that the smart television 205 is the answering terminal. The smart phone 201 can then transfer the voice call to the smart television 205. The above example is merely used to explain the present application and should not constitute a limitation.
In one possible implementation, after determining the answering terminal, terminal 1 may judge whether the switchover capability value V of the answering terminal exceeds the switchover capability value V of the current call terminal by a specified threshold (e.g., 0.2); if so, terminal 1 transfers the voice call to the answering terminal. Illustratively, the specified threshold can be 0.2. If the switchover capability value of the answering terminal (e.g., terminal 2) is 0.61 and the switchover capability value V of the current call terminal (e.g., terminal 3) is 0.23, the difference between the two (0.38) is greater than the specified threshold (0.2). Therefore, terminal 1 can transfer the voice call to the answering terminal. In this way, frequent switching between speech devices can be avoided.
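This threshold check acts as hysteresis: the call only moves when the candidate is clearly better. A minimal sketch, using the illustrative 0.2 threshold (the function name is an assumption):

```python
SWITCH_THRESHOLD = 0.2  # illustrative value from the text

def should_transfer(candidate_value, current_value, threshold=SWITCH_THRESHOLD):
    """Transfer only when the candidate answering terminal beats the
    current call terminal's switchover value by more than `threshold`,
    so that small fluctuations do not bounce the call between devices."""
    return candidate_value - current_value > threshold

# Example from the text: 0.61 vs. 0.23 -> difference 0.38 > 0.2, transfer.
```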
In one possible implementation, after determining the answering terminal, terminal 1 can detect the voiceprint energy of the user, and transfer the call to the answering terminal only when the voiceprint energy stays below a specified voiceprint energy threshold (e.g., 10 dB) for a certain time (e.g., 0.5 seconds). Since the voiceprint energy of the user is lowest at the end of each sentence while the user is speaking, switching and transferring the call when the voiceprint energy of the user is below the threshold prevents the call terminal from collecting an incomplete utterance of the user during the transfer.
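This pause-detection rule (voiceprint energy below, e.g., 10 dB for, e.g., 0.5 seconds) can be sketched over a stream of per-frame energy readings; the 0.1 s frame length and the function name are assumptions, not part of the application:

```python
def find_transfer_point(energies_db, threshold_db=10.0,
                        hold_time_s=0.5, frame_s=0.1):
    """Return the index of the first frame at which the voiceprint
    energy has stayed below threshold_db for hold_time_s in a row,
    i.e. a safe moment to transfer the call without cutting off an
    utterance; return None if no such pause occurs."""
    frames_needed = int(round(hold_time_s / frame_s))
    quiet = 0
    for i, energy in enumerate(energies_db):
        quiet = quiet + 1 if energy < threshold_db else 0
        if quiet >= frames_needed:
            return i
    return None
```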
The process of voice communication transfer is described below.
S706, terminal 1 receives the voice data of the contact.
For details, refer to step S312 of the embodiment shown in Figure 3, which is not described herein again.
S707, terminal 1 sends the voice data of the contact to terminal 2.
For details, refer to step S313 of the embodiment shown in Figure 3, which is not described herein again.
S708, after receiving the voice data of the contact, terminal 2 plays the voice data of the contact.
For details, refer to step S314 of the embodiment shown in Figure 3, which is not described herein again.
S709, terminal 2 collects sound through a microphone and generates the voice data of the user.
For details, refer to step S315 of the embodiment shown in Figure 3, which is not described herein again.
S710, terminal 2 transmits the voice data of the user to terminal 1.
For details, refer to step S316 of the embodiment shown in Figure 3, which is not described herein again.
S711, terminal 1 sends the voice data of the user to the terminal of the contact.
For details, refer to step S317 of the embodiment shown in Figure 3, which is not described herein again.
In some possible implementations, the above steps S704 and S705 can be executed by a hub device or a server. That is, the hub device or the server can obtain the speech capability parameters of each terminal in the local area network, determine the answering terminal according to the speech capability parameters of the other terminals, and then send the identifier of the answering terminal (e.g., the IP address of terminal 2) to terminal 1.
In some possible implementations, the data of the above steps S707 and S710 can be forwarded via the hub device or the server.
In some application scenarios, terminal 1 may be in a call with the terminal of contact A when it receives an incoming call from the terminal of contact B. Since terminal 1 cannot be in calls with the terminals of two or more contacts at the same time, in one possible implementation, terminal 1 can determine an answering terminal (e.g., terminal 2) for the incoming call from the terminal of contact B through the steps of determining the answering terminal in the embodiment shown in Figure 3, forward the incoming call from the terminal of contact B to the answering terminal (e.g., terminal 2), and let the answering terminal conduct the call with the terminal of contact B. In another possible implementation, terminal 1 can transfer the voice call with the terminal of contact A to the answering terminal (e.g., terminal 2), and answer the incoming call from the terminal of contact B on terminal 1 itself. The process by which terminal 1 transfers the incoming call or the ongoing call can refer to the embodiments shown in Figure 3 or Figure 7, and is not described herein again. In this way, missed incoming calls can be avoided, improving the experience of the user.
In some application scenarios, there can be multiple terminals with call capability in the local area network, and the local area network can also include a hub device that has a routing function. When terminal 1 receives a VoIP voice call dialed by the terminal of a contact over the Internet, the voice data packets between terminal 1 and the terminal of the contact during the call all need to be relayed by the hub device. When terminal 1 is occupied, or the VoIP voice call dialed by the terminal of the contact is not answered before a timeout, the hub device can collect the speech capability parameters of each terminal in the local area network, determine the answering terminal (e.g., terminal 2) from the terminals of the local area network, and forward the voice data packets of the VoIP voice call to the answering terminal (e.g., terminal 2). In this way, by having the hub device calculate the answering terminal, the computational burden on the terminals can be reduced, and the delay in transferring the incoming call or the call can be reduced.
Referring to Figure 10, Figure 10 shows a voice call method provided by an embodiment of the present application. The method includes:
S1001, the hub device establishes a connection with each terminal.
The hub device can establish a connection with each terminal (terminal 1, terminal 2, ..., terminal N) in the local area network. The connection can be wireless (e.g., a Wi-Fi connection) or wired. If each terminal has already established a connection with the router, this step can be skipped.
S1002, the hub device receives a call indication message destined for terminal 1.
The hub device can receive a call indication message sent over the Internet by the terminal of the contact. The call indication message includes the address information of the call initiator (e.g., the IP address of the terminal of the contact) and the address information of the recipient (e.g., the IP address of terminal 1), and is used to instruct terminal 1 to output an incoming call reminder.
S1003, the hub device forwards the call indication message to terminal 1.
The hub device can send the call indication message to terminal 1 according to the recipient address information.
S1004, terminal 1 outputs an incoming call reminder.
After receiving the call indication message forwarded by the hub device, in response to the call indication message, terminal 1 can output an incoming call reminder. The incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and a caller identification reminder (e.g., terminal 1 displays the contact information of the contact on a display screen).
S1005, terminal 1 judges whether the call transfer function is enabled. If so, S1006, terminal 1 judges whether the incoming call has not been picked up before a timeout or whether terminal 1 is occupied; if so, S1007, terminal 1 sends a call transfer request to the hub device. If terminal 1 has not enabled the call transfer function, terminal 1 simply outputs the incoming call reminder.
Before receiving the incoming call of the contact, terminal 1 can receive a setting input of the user and, in response to the setting input, enable or disable the call transfer function. In this way, terminal 1 can transfer received incoming calls according to the demand of the user, improving the experience of the user.
After judging that the call transfer function is enabled, terminal 1 can first judge whether terminal 1 is occupied; if so, terminal 1 can send a call transfer request to the hub device. If terminal 1 is unoccupied, terminal 1 can judge whether a specified time threshold (e.g., 10 s) has elapsed since the incoming call was received without the user performing an answer operation; if the call is unanswered beyond the specified time threshold, terminal 1 can send a call transfer request to the hub device.
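The decision in steps S1005-S1007 (request a transfer only if the function is enabled and the terminal is either occupied or has rung unanswered past the threshold) can be sketched as follows; the function name and the 10 s value are illustrative:

```python
ANSWER_TIMEOUT_S = 10.0  # illustrative threshold from the text

def needs_call_transfer(transfer_enabled, occupied,
                        ring_started_at, answered, now):
    """Decide whether terminal 1 should send a call transfer request
    to the hub device: the transfer function must be enabled, and the
    terminal must be either busy in another call or ringing unanswered
    past the timeout."""
    if not transfer_enabled:
        return False
    if occupied:
        return True  # occupied terminals transfer immediately
    return (not answered) and (now - ring_started_at > ANSWER_TIMEOUT_S)
```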
In one possible implementation, a terminal in the local area network can report to the hub device after enabling the call transfer function. In that case, the above steps S1005 and S1006 can be executed by the hub device. That is, the hub device can judge whether terminal 1 has enabled the call transfer function and, if so, judge whether the incoming call has not been picked up before the timeout or whether terminal 1 is occupied; if so, the hub device executes step S1008 and obtains the speech capability parameters of each terminal.
S1008, the hub device obtains the speech capability parameters of each terminal.
After receiving the call transfer request sent by terminal 1, in response to the call transfer request, the hub device can obtain the speech capability parameters of each terminal (terminal 1, terminal 2, ..., terminal N) in the local area network. The speech capability parameters include: the speech capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s. For the explanation of the speech capability parameters, refer to step S306 in the embodiment shown in Figure 3, which is not described herein again.
S1009, the hub device determines the answering terminal (e.g., terminal 2) according to the speech capability parameters of the other terminals.
After receiving the speech capability parameters of each terminal, the hub device can determine the answering device (e.g., terminal 2) according to the speech capability parameters of the other terminals (terminal 2, ..., terminal N). The process by which the hub device determines the answering device can refer to the process by which terminal 1 determines the answering device in step S307 of the embodiment shown in Figure 3, and is not described herein again.
S1010, the hub device sends an incoming call indication to terminal 2.
S1011, the hub device sends an incoming call end indication to terminal 1.
S1012, in response to the received incoming call indication, terminal 2 outputs an incoming call reminder.
The incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and a caller identification reminder (e.g., terminal 2 displays the contact information of the contact on a display screen).
S1013, in response to the received incoming call end indication, terminal 1 stops outputting the incoming call reminder.
In this way, through the above steps S1010 to S1013, the incoming call of the same contact can be prevented from outputting incoming call reminders on two terminals at the same time.
S1014, terminal 2 receives an answer operation of the user. S1015, terminal 2 returns an answer confirmation to the hub device.
After terminal 2 outputs the incoming call reminder, terminal 2 can receive an answer operation of the user (e.g., clicking an answer button displayed on the screen of terminal 2, or pressing an answer physical button on terminal 2). In response to the answer operation, terminal 2 can return an answer confirmation to the hub device.
S1016, after receiving the answer confirmation returned by terminal 2, the hub device receives the voice data of the contact.
After receiving the answer confirmation returned by terminal 2, the hub device can request the voice data of the contact from the terminal of the contact over the Internet. Upon receiving the request, the terminal of the contact collects the sound of the contact, generates the voice data of the contact, and sends the voice data of the contact to the hub device in the form of data packets.
S1017, the hub device sends the voice data of the contact to terminal 2.
After receiving the voice data of the contact, the hub device can forward the voice data of the contact to terminal 2 in the form of data packets.
S1018, after receiving the voice data of the contact, terminal 2 plays the voice data of the contact.
After receiving the voice data packets of the contact sent by the hub device, terminal 2 can extract the voice data of the contact from the voice data packets and play the voice data of the contact.
S1019, terminal 2 collects sound through a microphone and generates the voice data of the user.
S1020, terminal 2 sends the voice data of the user to the hub device.
After terminal 2 returns the answer confirmation to the hub device in step S1015, terminal 2 can continuously collect the sound of the user and the sound of the surrounding environment through the microphone. Terminal 2 can perform compression encoding on the collected sound (including the sound of the user and the sound of the surrounding environment) through a voice compression algorithm, generate the voice data of the user, and encapsulate the voice data of the user into voice data packets. Terminal 2 then sends the voice data packets of the user to the hub device through the local area network.
S1021, the hub device forwards the voice data of the user to the terminal of the contact.
After receiving the voice data packets of the user sent by terminal 2, the hub device can forward the voice data packets of the user to the terminal of the contact through the Internet. After receiving the voice data packets of the user, the terminal of the contact can parse the voice data of the user from the voice data packets and play the voice data of the user.
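Steps S1019-S1021 describe capturing sound, compressing it with a voice compression algorithm, and encapsulating it into voice data packets for forwarding. The application does not fix a packet format; the sketch below uses a hypothetical header (sequence number plus payload length) and zlib as a stand-in for a real voice codec:

```python
import struct
import zlib

def packetize_voice(pcm_frames, seq_start=0):
    """Compress each captured PCM frame and wrap it in a small packet:
    a 6-byte header (4-byte sequence number, 2-byte payload length)
    followed by the compressed payload. The format is illustrative."""
    packets = []
    for seq, frame in enumerate(pcm_frames, start=seq_start):
        payload = zlib.compress(frame)  # stand-in for a voice codec
        header = struct.pack("!IH", seq, len(payload))
        packets.append(header + payload)
    return packets

def depacketize_voice(packets):
    """Inverse of packetize_voice: recover (seq, pcm_frame) pairs,
    as the receiving terminal would before playback."""
    frames = []
    for pkt in packets:
        seq, length = struct.unpack("!IH", pkt[:6])
        frames.append((seq, zlib.decompress(pkt[6:6 + length])))
    return frames
```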
There is no strict order between the above steps S1016-S1018 and steps S1019-S1021; the same applies in the following embodiments. When the above voice call is transferred, if terminal 1 and terminal 2 have established a connection, terminal 1 can transfer the call directly to terminal 2: after terminal 1 receives the voice data of the contact, it sends the voice data to terminal 2; after terminal 2 collects the voice data of the user, it sends the voice data to terminal 1, and terminal 1 sends the voice data to the terminal of the contact. The above voice call transfer can also be implemented by a server instead of the hub device.
The hub device above need not be the router in the local area network; it can also be a terminal other than terminal 1.
Another network architecture provided by an embodiment of the present application is described below.
Referring to Figure 11, Figure 11 shows a schematic diagram of another network architecture 1100 provided by an embodiment of the present application. The network architecture 1100 includes multiple terminals, which may include: a smart phone 201, a smartwatch 202, an intelligent sound box 203, a PC 204, a smart television 205, a tablet computer 206, and the like; the present application is not limited in any way in this respect. For the structure of the terminals in the network architecture 1100, refer to the terminal 100 shown in Figure 1, which is not described herein again.
The multiple terminals can all have call capability, and can receive incoming calls or conduct calls in the following ways: 1. The multiple terminals can receive incoming calls or conduct calls in the circuit switched (CS) domain of a mobile communication network. 2. The multiple terminals can receive incoming calls or conduct calls based on VoLTE technology in the IP multimedia subsystem (IMS) of a mobile communication network. 3. The multiple terminals can receive incoming calls or conduct calls based on VoIP technology over the Internet.
The multiple terminals are all connected through the Internet to a server 208 in a smart home cloud network. The server 208 can be a server in the smart home cloud network; its quantity is not limited to one and can be multiple, which is not limited herein. The server 208 may include a memory, a processor, and a transceiver. The memory can be used to store the respective speech capability parameters of the multiple terminals (e.g., the speech capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, the device state value s, and the like). The transceiver can be used to communicate with each terminal. The processor can be used to process data acquisition requests from the terminals and instruct the transceiver to send the respective speech capability parameters of the multiple terminals to the requesting terminal.
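The server's memory is thus described as a per-terminal store of speech capability parameters, served on request. A minimal in-memory sketch (class and field names are hypothetical, mirroring the m, n, x, y, s symbols in the text):

```python
from dataclasses import dataclass, asdict

@dataclass
class SpeechCapability:
    """Per-terminal speech capability parameters kept by the server."""
    priority_m: float
    call_frequency_n: float
    voiceprint_energy_x: float
    user_location_y: float
    device_state_s: int

class CapabilityStore:
    """Hypothetical in-memory stand-in for the server 208's memory."""
    def __init__(self):
        self._params = {}

    def report(self, terminal_id, cap):
        """A terminal reports (or refreshes) its parameters."""
        self._params[terminal_id] = cap

    def fetch_others(self, requester_id):
        """Serve a terminal's acquisition request with the parameters of
        every other terminal (the requester does not need its own)."""
        return {tid: asdict(c) for tid, c in self._params.items()
                if tid != requester_id}
```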
A voice call method provided by an embodiment of the present application is specifically introduced below with reference to the network architecture 1100 shown in Figure 11 and its application scenarios.
In some application scenarios, there can be multiple terminals with call capability in a home, such as a smart phone, a smartwatch, an intelligent sound box, a PC, a smart television, a tablet computer, and the like. These terminals with call capability can be connected through the Internet to a server of the smart home cloud. The user can receive a voice incoming call through any terminal (e.g., the smart phone). When the user does not answer for a long time, or the called device is occupied (e.g., busy in another call), the user may miss the incoming call, which causes inconvenience to the user. Therefore, an embodiment of the present application provides a voice call method: after terminal 1 (e.g., the smart phone) receives an incoming call, when terminal 1 does not receive an answer operation of the user before a timeout, or the terminal is occupied (e.g., busy in another call), the server can determine the answering terminal (e.g., the smart television) for the incoming call according to the speech capability parameters of the other terminals (e.g., the speech capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, the device state value s, and the like), and transfer the incoming call on terminal 1 to the answering terminal (e.g., the smart television). In this way, the user can be prevented from missing incoming calls on the terminal, improving the user experience.
Referring to Figure 12, Figure 12 shows a voice call method provided by an embodiment of the present application. The server is connected to N terminals with call capability, where N is an integer greater than 2, and the N terminals with call capability are bound to the same account on the server. In the embodiment shown in Figure 12, any terminal that receives an incoming call can be referred to as terminal 1. For example, when the smart phone receives an incoming call, the smart phone can be referred to as terminal 1; when the smart television receives an incoming call, the smart television can be referred to as terminal 1; this is not limited in any way herein. As shown in Figure 12, the method includes:
S1201, the server establishes a connection with each terminal.
Each terminal can connect to the server through the Internet.
S1202, terminal 1 receives an incoming call.
The incoming call can refer to a voice incoming call. The terminal of the contact can dial a voice call to terminal 1 through the CS domain of the mobile communication network, dial a VoLTE-based voice call to terminal 1 through the IMS network of the mobile communication network, or dial a VoIP-based voice call to terminal 1 through the Internet.
S1203, terminal 1 outputs an incoming call reminder.
After receiving the call dialed by the terminal of the contact, terminal 1 can output an incoming call reminder. The incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and a caller identification reminder (e.g., terminal 1 displays the contact information of the contact on the display screen).
S1204, terminal 1 judges whether the call transfer function is enabled. If so, S1205, terminal 1 judges whether the call has not been answered before a timeout or whether terminal 1 is occupied; if so, S1206, terminal 1 sends a call transfer request to the server. If terminal 1 has not enabled the call transfer function, terminal 1 outputs the incoming call reminder, receives the answer operation of the user, and answers the incoming call.
Before receiving the incoming call of the contact, terminal 1 can receive a setting input of the user and, in response to the setting input, enable or disable the call transfer function. In this way, terminal 1 can transfer received incoming calls according to the demand of the user, improving the experience of the user.
Terminal 1 can first judge whether terminal 1 is occupied; if so, terminal 1 can send a call transfer request to the server. If unoccupied, terminal 1 can judge whether a specified time threshold (e.g., 10 s) has elapsed since the incoming call was received; if so, terminal 1 can send a call transfer request to the server.
S1207, the server obtains the speech capability parameters of each terminal.
After receiving the call transfer request sent by terminal 1, in response to the call transfer request, the server can obtain the speech capability parameters of each terminal (terminal 1, terminal 2, ..., terminal N). The speech capability parameters include: the speech capability priority m, the call frequency n, the voiceprint energy value x, the user location value y, and the device state value s. For the explanation of the speech capability parameters, refer to step S306 in the embodiment shown in Figure 3, which is not described herein again.
S1208, the server determines the answering terminal (e.g., terminal 2) according to the speech capability parameters of the other terminals.
After receiving the speech capability parameters of each terminal, the server can determine the answering device according to the speech capability parameters of the other terminals (terminal 2, ..., terminal N). The process by which the server determines the answering device can refer to the process by which terminal 1 determines the answering device in step S307 of the embodiment shown in Figure 3, and is not described herein again.
The process by which terminal 1 transfers the received incoming call to the answering terminal (terminal 2) is described below.
S1209, the server sends an incoming call indication to terminal 2.
S1210, terminal 2 outputs an incoming call reminder.
The incoming call reminder may include at least one of the following: a ringtone reminder, a mechanical vibration reminder, and a caller identification reminder (e.g., terminal 2 displays the contact information of the contact on a display screen).
S1211, the server sends an incoming call end indication to terminal 1.
S1212, terminal 1 stops outputting the incoming call reminder.
In this way, through the above steps S1209 to S1212, the incoming call of the same contact can be prevented from outputting incoming call reminders on two terminals at the same time.
S1213, terminal 2 receives an answer operation of the user. S1214, terminal 2 returns an answer confirmation to the server.
After terminal 2 outputs the incoming call reminder, terminal 2 can receive an answer operation of the user (e.g., clicking an answer button displayed on the screen of terminal 2, or pressing an answer physical button on terminal 2). In response to the answer operation, terminal 2 can return an answer confirmation to the server.
S1215, the server forwards the answer confirmation to terminal 1.
After the server forwards the answer confirmation to terminal 1, in response to the answer confirmation, terminal 1 can transfer the voice call to terminal 2.
There is no strict order between the above steps S1209-S1210 and steps S1211-S1212; the same applies in other embodiments.
The process by which terminal 1 transfers the voice call to the answering terminal (terminal 2) is specifically introduced below.
S1216, terminal 1 receives the voice data of the contact.
Specifically:
1. CS-domain voice call: the terminal of the contact can collect the sound of the contact, establish a call connection with terminal 1 through the CS domain of the mobile communication network, and send the sound signal to terminal 1.
2. VoLTE voice call: the terminal of the contact can collect the sound of the contact, perform compression encoding on the sound of the contact through a voice compression algorithm to generate the voice data of the contact, encapsulate the voice data into voice data packets, and send the voice data packets of the contact to terminal 1 through the IMS of the mobile communication network.
3. VoIP voice call: the terminal of the contact can collect the sound of the contact, perform compression encoding on the sound of the contact through a voice compression algorithm to generate the voice data of the contact, encapsulate the voice data into voice data packets through related protocols such as the IP protocol, and send the voice data packets of the contact to terminal 1 through the Internet.
S1217: Terminal 1 sends the contact's voice data to the server.
When the voice call being transferred is a CS-domain voice call, terminal 1, after receiving the contact's voice signal, performs compression encoding on the voice signal with a voice compression algorithm to generate the contact's voice data, and packs the voice data into voice data packets using the IP protocol and related protocols. Terminal 1 then sends the contact's voice data packets to the server over the Internet.
When the voice call being transferred is a VoLTE or VoIP voice call, terminal 1, after receiving the contact's voice data packets, forwards them to the server over the Internet.
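As a rough illustration of the relay step in S1216-S1217, the branching at terminal 1 can be sketched as follows. This is a minimal sketch, not the patented implementation: the function names are hypothetical, and the compression step is a placeholder rather than a real voice codec.

```python
def compress_encode(pcm_samples):
    # Placeholder for the voice compression algorithm; a real terminal
    # would use an actual voice codec here.
    return bytes(s & 0xFF for s in pcm_samples)

def packetize(voice_data, size=4):
    # Placeholder for wrapping encoded voice data into voice data packets.
    return [voice_data[i:i + size] for i in range(0, len(voice_data), size)]

def relay_to_server(call_type, incoming):
    """Terminal 1 forwards the contact's voice toward the server (S1217).

    For a CS-domain call, terminal 1 receives a raw voice signal and must
    compress-encode and packetize it first; for VoLTE/VoIP calls it already
    receives voice data packets and simply forwards them unchanged.
    """
    if call_type == "CS":
        return packetize(compress_encode(incoming))
    if call_type in ("VoLTE", "VoIP"):
        return incoming
    raise ValueError(f"unknown call type: {call_type}")
```

For example, `relay_to_server("CS", [1, 2, 3, 4, 5])` encodes and splits the signal into two voice data packets, while a VoLTE or VoIP packet list is returned unchanged.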
S1218: The server sends the contact's voice data to terminal 2.
S1219: Terminal 2 plays the contact's voice data.
After receiving the contact's voice data packets, terminal 2 extracts the contact's voice data from the packets and plays it.
S1220: Terminal 2 collects sound through its microphone and generates the user's voice data. S1221: Terminal 2 sends the user's voice data to the server.
After step S1214, in which terminal 2 returns the answer confirmation to the server, terminal 2 continuously collects the user's sound and the surrounding ambient sound through its microphone. Terminal 2 performs compression encoding on the collected sound (including the user's sound and the ambient sound) with a voice compression algorithm to generate the user's voice data, and packs the user's voice data into voice data packets. Terminal 2 then sends the user's voice data packets to the server over the Internet.
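Steps S1220-S1221 on terminal 2 amount to a capture-encode-packetize-send loop. The sketch below illustrates only the data flow; the names are hypothetical, `encode` stands in for the voice compression algorithm, and `send` stands in for transmission of voice data packets to the server over the Internet.

```python
def capture_encode_send(mic_frames, encode, send):
    # For each captured microphone frame (the user's sound plus ambient
    # sound), generate the user's voice data, wrap it in a voice data
    # packet, and hand the packet to the transport.
    for seq, frame in enumerate(mic_frames):
        voice_data = encode(frame)                  # compression encoding
        send({"seq": seq, "payload": voice_data})   # voice data packet
    return len(mic_frames)
```

For instance, with `encode=lambda f: f.upper()` as a stand-in codec and `send=packets.append` as a stand-in transport, two captured frames yield two sequenced packets.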
S1222: The server forwards the user's voice data to terminal 1.
S1223: Terminal 1 sends the user's voice data to the contact's terminal.
Specifically:
When the voice call being transferred is a CS-domain voice call, terminal 1, after receiving the user's voice data packets sent by the server, parses the user's voice data from the packets, converts the voice data into the user's voice signal, and sends the voice signal to the contact's terminal through the CS domain of the mobile communications network. After receiving the user's voice signal sent by terminal 1, the contact's terminal recovers the user's sound from the voice signal and plays it.
When the voice call being transferred is a VoLTE voice call, terminal 1, after receiving the user's voice data packets sent by the server, forwards them to the contact's terminal through the IMS. After receiving the user's voice data packets, the contact's terminal parses the user's voice data from the packets and plays it.
When the voice call being transferred is a VoIP voice call, terminal 1, after receiving the user's voice data packets sent by the server, forwards them to the contact's terminal over the Internet. After receiving the user's voice data packets, the contact's terminal parses the user's voice data from the packets and plays it.
During the above voice call transfer, if terminal 1 and terminal 2 have established a connection, terminal 1 may transfer the call directly to terminal 2: after receiving the contact's voice data, terminal 1 sends it to terminal 2; after collecting the user's voice data, terminal 2 sends it to terminal 1, and terminal 1 sends it on to the contact's terminal. The above voice call transfer may also be implemented by a hub device instead of the server.
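The paragraph above allows three forwarding routes between terminal 1 and the answering terminal: via the server, via a hub device, or directly between the two terminals. One way to express the choice, assuming (as an illustrative assumption, not something stated in the embodiment) that a direct connection is preferred when available, then a hub device, then the server:

```python
def choose_route(direct_link_up, hub_available):
    # Direct terminal-to-terminal transfer when terminals 1 and 2 have
    # established a connection; otherwise relay via the hub device if one
    # is present; otherwise relay via the server.
    if direct_link_up:
        return "direct"
    if hub_available:
        return "hub"
    return "server"
```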
In some possible implementations, terminal 1 (the terminal receiving the voice incoming call) may transfer the contact's incoming call to terminal 2 for answering. After terminal 2 answers the contact's incoming call, terminal 1 may also periodically obtain the speech capability parameters and determine a new answering terminal (for example, terminal 3) according to the rules in the foregoing embodiments. After determining the new answering terminal (for example, terminal 3), terminal 1 may transfer the voice call to the new answering terminal rather than leave it on terminal 2.
In some possible implementations, terminal 1 may transfer the contact's incoming call to terminal 2 for answering. After terminal 2 answers the contact's incoming call, terminal 2 or terminal 1 may receive a switch operation from the user. After the switch operation is received, terminal 1 obtains the speech capability parameters and determines a new answering terminal (for example, terminal 3) according to the rules in the foregoing embodiments. After determining the new answering terminal (for example, terminal 3), terminal 1 may transfer the voice call to the new answering terminal rather than leave it on terminal 2.
In some possible implementations, terminal 1 transfers the contact's incoming call to terminal 2, and terminal 2 outputs the call reminder, but the call times out on terminal 2 without being answered. Terminal 1 may then choose, from the other terminals besides terminal 1 and terminal 2, a new answering terminal (for example, terminal 3) according to the speech capability parameters and the rules in the foregoing embodiments. After determining the new answering terminal (for example, terminal 3), terminal 1 may transfer the incoming call to the new answering terminal.
In the above, terminal 1 may be replaced by the hub device or the server.
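The timeout fallback described above can be sketched as a re-selection over the remaining candidates. This is a minimal sketch that assumes each candidate terminal is reduced to a single speech-capability score; the actual rules combine the parameters of the foregoing embodiments, and the names here are hypothetical.

```python
def reselect_answering_terminal(candidates, excluded, score):
    # Skip the terminals already tried (e.g. terminal 1 and the timed-out
    # terminal 2) and pick the best-scoring remaining terminal, if any.
    remaining = [t for t in candidates if t not in excluded]
    return max(remaining, key=score) if remaining else None
```

For example, with scores `{"terminal2": 5, "terminal3": 9, "terminal4": 7}` and terminals 1 and 2 excluded, terminal 3 is selected as the new answering terminal.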
In the above embodiments of this application, the speech capability parameter values of the terminals may be stored on each terminal, or on the hub device or the server (for example, each terminal periodically reports its speech capability parameter values to the hub device or the server). In the embodiments of this application, the terminal receiving the incoming call, the hub device, or the server may determine, according to the speech capability parameters of the terminals, the answering terminal for the incoming-call transfer or the cloud call transfer. When the call transfer is performed, the terminal receiving the incoming call may hand the call directly to the answering terminal, or hand it over via the hub device or the server. The schemes for determining the answering terminal and the schemes for transferring the incoming call may be combined in different ways, which is not limited here. For content not described in detail in one illustrated embodiment of this application, refer to the other illustrated embodiments.
In the embodiments of this application, the N terminals having delivery values may be N terminals in one local area network, N terminals bound to the same account, or N terminals associated with one another by other means.
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, persons skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements to some of the technical features thereof; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.
Claims (14)
1. A voice communication method, characterized by comprising:
receiving, by a first terminal, a voice incoming call;
when the first terminal determines that the voice incoming call has timed out without being answered, or that the first terminal is currently busy, obtaining, by the first terminal, user locations reported by a plurality of terminals, wherein each terminal in the plurality of terminals is different from the first terminal;
determining, by the first terminal, a second terminal from the plurality of terminals according to the user locations reported by the plurality of terminals, wherein the second terminal is, among the plurality of terminals, the terminal closest to the user; and
transferring, by the first terminal, the voice incoming call to the second terminal for answering.
2. The method according to claim 1, wherein the obtaining, by the first terminal, of the user locations reported by the plurality of terminals specifically comprises:
obtaining, by the first terminal, voiceprint energy values of the user reported by the plurality of terminals, wherein a higher voiceprint energy value indicates that the terminal reporting the voiceprint energy value is closer to the user; and
the determining, by the first terminal, of the second terminal from the plurality of terminals according to the user locations reported by the plurality of terminals specifically comprises:
determining, by the first terminal, the second terminal from the plurality of terminals according to the voiceprint energy values reported by the plurality of terminals, wherein the second terminal has the highest voiceprint energy value among the plurality of terminals.
3. The method according to claim 1, wherein the method further comprises:
obtaining, by the first terminal, voice frequencies reported by the plurality of terminals, wherein the voice frequency of a terminal is the ratio of the number of calls of that terminal to the total number of calls of the first terminal and the plurality of terminals; and
when there are a plurality of terminals closest to the user among the plurality of terminals, determining, by the first terminal, the second terminal from the plurality of terminals closest to the user according to the voice frequencies, wherein the second terminal has the largest voice frequency among the plurality of terminals closest to the user.
4. The method according to claim 3, wherein the method further comprises:
obtaining, by the first terminal, speech capability priorities reported by the plurality of terminals, wherein the speech capability priority of a terminal is determined by the device type of the terminal; and
when, among the plurality of terminals closest to the user, there are a plurality of terminals with the largest voice frequency, determining, by the first terminal, the second terminal, according to the speech capability priorities, from the plurality of terminals that are closest to the user and have the largest voice frequency, wherein the second terminal has the highest speech capability priority among the plurality of terminals that are closest to the user and have the largest voice frequency.
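Read together, claims 2 to 4 describe a lexicographic selection: the highest voiceprint energy value wins; ties are broken by the largest voice frequency, and remaining ties by the highest speech capability priority. A minimal sketch of that cascade (the field names are hypothetical):

```python
def select_second_terminal(terminals):
    # Each terminal is a dict with 'energy' (voiceprint energy value,
    # higher means closer to the user), 'freq' (voice frequency, the
    # terminal's share of total calls) and 'priority' (speech capability
    # priority, determined by device type). Comparing the tuples gives
    # the claim-2/3/4 tie-breaking order.
    return max(terminals, key=lambda t: (t["energy"], t["freq"], t["priority"]))
```

For instance, two terminals tied on voiceprint energy and voice frequency are separated by speech capability priority, exactly as claim 4 prescribes.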
5. The method according to claim 1, wherein the transferring, by the first terminal, of the voice incoming call to the second terminal for answering specifically comprises:
receiving, by the first terminal, the contact's voice data sent by the contact's terminal, and receiving the user's voice data sent by the second terminal; and
sending, by the first terminal, the contact's voice data to the second terminal, and sending the user's voice data to the contact's terminal.
6. The method according to claim 5, wherein before the first terminal receives the contact's voice data sent by the contact's terminal and receives the user's voice data sent by the second terminal, the method further comprises:
sending, by the first terminal, an incoming-call instruction to the second terminal, wherein the incoming-call instruction instructs the second terminal to output a call reminder; and
receiving, by the first terminal, an answer confirmation sent by the second terminal; and
the receiving, by the first terminal, of the contact's voice data sent by the contact's terminal and of the user's voice data sent by the second terminal specifically comprises:
in response to the answer confirmation, receiving, by the first terminal, the contact's voice data sent by the contact's terminal, and receiving the user's voice data sent by the second terminal.
7. The method according to claim 1, wherein before the first terminal transfers the voice incoming call to the second terminal for answering, the method further comprises:
establishing, by the first terminal, a connection with the second terminal.
8. A voice communication method, characterized by comprising:
receiving, by a first terminal, a voice incoming call;
when the first terminal determines that the voice incoming call has timed out without being answered, or that the first terminal is currently busy, obtaining, by the first terminal, user locations, voice frequencies, speech capability priorities, and device states reported by a plurality of terminals, wherein each terminal in the plurality of terminals is different from the first terminal;
determining, by the first terminal, a second terminal from the plurality of terminals according to the user locations, voice frequencies, speech capability priorities, and device state values reported by the plurality of terminals; and
transferring, by the first terminal, the voice incoming call to the second terminal for answering.
9. A voice communication method, characterized by comprising:
when a first terminal is in a voice call with a contact's terminal, receiving, by the first terminal, a call transfer operation from the user;
in response to the call transfer operation, obtaining, by the first terminal, user locations reported by a plurality of terminals, wherein each terminal in the plurality of terminals is different from the first terminal;
determining, by the first terminal, a second terminal from the plurality of terminals according to the user locations reported by the plurality of terminals, wherein the second terminal is, among the plurality of terminals, the terminal closest to the user; and
transferring, by the first terminal, the voice call to the second terminal.
10. A terminal, comprising a memory, a transceiver, and at least one processor, wherein the memory stores program code, the memory and the transceiver communicate with the at least one processor, and the processor runs the program code to instruct the terminal to perform the method according to any one of claims 1 to 9.
11. A computer program product, characterized in that, when the computer program product runs on a computer, the electronic device is caused to perform the method according to any one of claims 1 to 9.
12. A computer storage medium, comprising computer instructions, characterized in that, when the computer instructions run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 9.
13. A voice communication method, characterized by comprising:
receiving, by a hub device, a call transfer request sent by a first terminal;
in response to the call transfer request sent by the first terminal, obtaining, by the hub device, user locations reported by a plurality of terminals, wherein each terminal in the plurality of terminals is different from the first terminal;
determining, by the hub device, a second terminal from the plurality of terminals according to the user locations reported by the plurality of terminals, wherein the second terminal is, among the plurality of terminals, the terminal closest to the user; and
sending, by the hub device, an incoming-call notification to the second terminal, wherein the incoming-call notification is used by the second terminal to output a call reminder.
14. A voice communication method, characterized by comprising:
receiving, by a server, a call transfer request sent by a first terminal;
in response to the call transfer request, obtaining, by the server, user locations reported by a plurality of terminals, wherein each terminal in the plurality of terminals is different from the first terminal;
determining, by the server, a second terminal from the plurality of terminals according to the user locations reported by the plurality of terminals, wherein the second terminal is, among the plurality of terminals, the terminal closest to the user; and
sending, by the server, an incoming-call notification to the second terminal, wherein the incoming-call notification is used by the second terminal to output a call reminder.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910517494.3A CN110191241B (en) | 2019-06-14 | 2019-06-14 | Voice communication method and related device |
PCT/CN2020/095751 WO2020249062A1 (en) | 2019-06-14 | 2020-06-12 | Voice communication method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110191241A true CN110191241A (en) | 2019-08-30 |
CN110191241B CN110191241B (en) | 2021-06-29 |
Family
ID=67721888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910517494.3A Active CN110191241B (en) | 2019-06-14 | 2019-06-14 | Voice communication method and related device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110191241B (en) |
WO (1) | WO2020249062A1 (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102752479A (en) * | 2012-05-30 | 2012-10-24 | 中国农业大学 | Scene detection method of vegetable diseases |
CN104581665A (en) * | 2014-12-17 | 2015-04-29 | 广东欧珀移动通信有限公司 | Call transfer method and device |
CN105101131A (en) * | 2015-06-18 | 2015-11-25 | 小米科技有限责任公司 | Method and device for answering incoming call |
CN105228118A (en) * | 2015-09-28 | 2016-01-06 | 小米科技有限责任公司 | Call transferring method, device and terminal equipment |
US20160234213A1 (en) * | 2013-09-23 | 2016-08-11 | Samsung Electronics Co., Ltd. | Apparatus and method by which user device in home network system transmits home-device-related information |
CN105959191A (en) * | 2016-07-01 | 2016-09-21 | 上海卓易云汇智能技术有限公司 | Control method of smart home system for intelligently answering incoming calls and system thereof |
CN106713682A (en) * | 2016-11-25 | 2017-05-24 | 深圳市国华识别科技开发有限公司 | Call transfer method and system |
CN106817683A (en) * | 2017-04-12 | 2017-06-09 | 北京奇虎科技有限公司 | Display methods, the apparatus and system of transinformation of sending a telegram here |
CN108735216A (en) * | 2018-06-12 | 2018-11-02 | 广东小天才科技有限公司 | A kind of voice based on semantics recognition searches topic method and private tutor's equipment |
CN108900502A (en) * | 2018-06-27 | 2018-11-27 | 佛山市云米电器科技有限公司 | It is a kind of based on home furnishings intelligent interconnection communication means, system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8798603B2 (en) * | 2008-07-14 | 2014-08-05 | Centurylink Intellectual Property Llc | System and method for providing emergency call forwarding services |
CN104427137A (en) * | 2013-08-29 | 2015-03-18 | 鸿富锦精密工业(深圳)有限公司 | Telephone device, server and automatic call forwarding method |
CN104468962B (en) * | 2013-09-24 | 2018-06-01 | 联想(北京)有限公司 | The processing method and electronic equipment of a kind of call request |
CN106941660A (en) * | 2016-01-05 | 2017-07-11 | 中兴通讯股份有限公司 | A kind of call transferring method, apparatus and system |
CN105721728A (en) * | 2016-02-16 | 2016-06-29 | 上海斐讯数据通信技术有限公司 | Call forwarding method based on WiFi and intelligent terminal |
CN106535149B (en) * | 2016-11-25 | 2020-05-05 | 深圳市国华识别科技开发有限公司 | Terminal automatic call forwarding method and system |
CN110191241B (en) * | 2019-06-14 | 2021-06-29 | 华为技术有限公司 | Voice communication method and related device |
- 2019-06-14 CN CN201910517494.3A patent/CN110191241B/en active Active
- 2020-06-12 WO PCT/CN2020/095751 patent/WO2020249062A1/en active Application Filing
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020249062A1 (en) * | 2019-06-14 | 2020-12-17 | 华为技术有限公司 | Voice communication method and related device |
CN110737337A (en) * | 2019-10-18 | 2020-01-31 | 向勇 | human-computer interaction system |
CN112929481B (en) * | 2019-11-20 | 2022-04-12 | Oppo广东移动通信有限公司 | Incoming call processing method and device, electronic equipment and computer readable storage medium |
CN112929481A (en) * | 2019-11-20 | 2021-06-08 | Oppo广东移动通信有限公司 | Incoming call processing method and device, electronic equipment and computer readable storage medium |
WO2021103955A1 (en) * | 2019-11-30 | 2021-06-03 | 华为技术有限公司 | Calling method and apparatus |
US11949805B2 (en) | 2019-11-30 | 2024-04-02 | Huawei Technologies Co., Ltd. | Call method and apparatus |
WO2021139690A1 (en) * | 2020-01-09 | 2021-07-15 | 京东方科技集团股份有限公司 | Session establishment method, apparatus, and related device |
US11979516B2 (en) | 2020-01-22 | 2024-05-07 | Honor Device Co., Ltd. | Audio output method and terminal device |
CN114245328A (en) * | 2020-02-29 | 2022-03-25 | 华为技术有限公司 | Voice call transfer method and electronic equipment |
CN113411759B (en) * | 2020-02-29 | 2023-03-31 | 华为技术有限公司 | Voice call transfer method and electronic equipment |
CN113411759A (en) * | 2020-02-29 | 2021-09-17 | 华为技术有限公司 | Voice call transfer method and electronic equipment |
WO2021175254A1 (en) * | 2020-03-05 | 2021-09-10 | 华为技术有限公司 | Call method, system and device |
CN111445612A (en) * | 2020-04-02 | 2020-07-24 | 北京声智科技有限公司 | Unlocking method, control equipment, electronic equipment and access control system |
CN111786963A (en) * | 2020-06-12 | 2020-10-16 | 青岛海尔科技有限公司 | Method and device for realizing communication process, storage medium and electronic device |
CN111988426A (en) * | 2020-08-31 | 2020-11-24 | 深圳康佳电子科技有限公司 | Communication method and device based on voiceprint recognition, intelligent terminal and storage medium |
CN111988426B (en) * | 2020-08-31 | 2023-07-18 | 深圳康佳电子科技有限公司 | Communication method and device based on voiceprint recognition, intelligent terminal and storage medium |
WO2022105444A1 (en) * | 2020-11-19 | 2022-05-27 | Oppo广东移动通信有限公司 | Notification reminding method and apparatus, and terminal and storage medium |
CN113296729A (en) * | 2021-06-01 | 2021-08-24 | 青岛海尔科技有限公司 | Prompt message broadcasting method, device and system, storage medium and electronic device |
CN113572731A (en) * | 2021-06-18 | 2021-10-29 | 荣耀终端有限公司 | Voice communication method, personal computer and terminal |
CN113595866A (en) * | 2021-06-21 | 2021-11-02 | 青岛海尔科技有限公司 | Method and device for establishing audio and video call among multiple devices |
WO2023103462A1 (en) * | 2021-12-08 | 2023-06-15 | 荣耀终端有限公司 | Distributed call conflict processing method and system, and electronic device and storage medium |
CN113923305A (en) * | 2021-12-14 | 2022-01-11 | 荣耀终端有限公司 | Multi-screen cooperative communication method, system, terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110191241B (en) | 2021-06-29 |
WO2020249062A1 (en) | 2020-12-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||