CN118118593A - Conversation method and electronic equipment - Google Patents


Info

Publication number
CN118118593A
CN118118593A (application CN202211521752.3A)
Authority
CN
China
Prior art keywords
user
information
mobile phone
voice
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211521752.3A
Other languages
Chinese (zh)
Inventor
柯胜强
尹旭贤
耿杰
张宁
邓淇天
王宏观
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211521752.3A priority Critical patent/CN118118593A/en
Priority to PCT/CN2023/127971 priority patent/WO2024114233A1/en
Publication of CN118118593A publication Critical patent/CN118118593A/en
Pending legal-status Critical Current


Classifications

    • G10L 13/08 — Speech synthesis; text-to-speech systems: text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme-to-phoneme translation, prosody generation or stress or intonation determination
    • H04M 1/64 — Automatic arrangements for answering calls; automatic arrangements for recording messages for absent subscribers; arrangements for recording conversations
    • H04M 1/72433 — User interfaces for mobile telephones with interactive means for internal management of messages, for voice messaging, e.g. dictaphones
    • H04M 1/72436 — User interfaces for mobile telephones with interactive means for internal management of messages, for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04M 1/72469 — User interfaces for mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H04M 1/72484 — User interfaces for mobile telephones wherein functions are triggered by incoming communication events
    • H04M 1/725 — Cordless telephones
    • H04M 3/493 — Interactive information services, e.g. directory enquiries; arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

The application provides a call method and an electronic device, applied to the field of terminal technologies, which can improve the user's call experience and call efficiency. The method includes: displaying a call interface, where the call interface includes an identifier of a second user, the second user being a user of a second electronic device, and the call being between the second electronic device and the first electronic device; sending first voice information in a first timbre to the second electronic device, where the first voice information is determined by the first electronic device; receiving first information input by a first user through the call interface; and sending, according to the first information, second voice information in a second timbre to the second electronic device, where the first timbre is different from the second timbre.

Description

Conversation method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a call method and an electronic device.
Background
Audio communication has become a frequently used communication mode in people's daily life and work. When a user cannot answer a call or finds it inconvenient to do so (for example, while in a meeting, working, or driving) but does not want to miss the call, the call can be handled intelligently by means of artificial intelligence (AI) technology. However, current AI intelligent call technology provides a poor user experience.
Disclosure of Invention
The application provides a call method and an electronic device, which can improve the user's call experience and call efficiency.
In order to achieve the above object, the embodiment of the present application provides the following technical solutions:
In a first aspect, the present application provides a call method, applied to a first electronic device, including: displaying a call interface, wherein the call interface comprises an identifier of a second user, the second user is a user using the second electronic equipment, and the call is between the second electronic equipment and the first electronic equipment; transmitting first voice information of a first tone to the second electronic device, the first voice information being determined by the first electronic device; receiving first information input by a first user through a call interface; according to the first information, sending second voice information of a second tone to second electronic equipment; the first timbre is different from the second timbre.
In this way, the first electronic device can play the voice corresponding to automatically replied content and the voice corresponding to manually replied content to the call counterpart in different timbres, so that the counterpart can distinguish manually replied information from automatically replied information, improving the user's call experience and call efficiency.
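The timbre selection just described can be sketched as a small routine. This is an illustrative sketch only: the timbre identifiers, class, and function names below are invented for the example and are not part of the disclosure; the patent only requires that the two timbres differ.

```python
from dataclasses import dataclass

# Hypothetical timbre identifiers; the patent only requires that they differ.
AUTO_REPLY_TIMBRE = "timbre_1"
MANUAL_REPLY_TIMBRE = "timbre_2"

@dataclass
class OutgoingVoice:
    text: str
    timbre: str

def synthesize_reply(text: str, is_auto_reply: bool) -> OutgoingVoice:
    """Select the TTS timbre from the reply source, so the remote party
    can tell AI-generated replies from manually entered ones by ear."""
    timbre = AUTO_REPLY_TIMBRE if is_auto_reply else MANUAL_REPLY_TIMBRE
    return OutgoingVoice(text=text, timbre=timbre)
```

In a real implementation the timbre would be passed to a text-to-speech engine (the patent's "personalized" and "universal" text-to-speech modules); here it is simply recorded on the outgoing message.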
Illustratively, the identifier of the second user in the call interface may be contact A in the call interface 701 shown in FIG. 7A. The call is between the second electronic device (mobile phone 2) and the first electronic device (mobile phone 1). The mobile phone 1 sends, to the mobile phone 2, first voice information automatically generated by the smart AI in the first timbre (timbre 1), and receives first information input by the first user through the call interface. For example, the first information is the information in the dialog box 7064 shown in (c) of FIG. 7B. According to the first information, second voice information in a second timbre (timbre 2) is sent to the second electronic device, the first timbre being different from the second timbre.
In one possible implementation, before sending the first voice information to the second electronic device, the method further includes: and displaying first prompt information, wherein the first prompt information is used for prompting the first electronic equipment to be in an automatic reply mode.
Illustratively, as shown in (a) of FIG. 7B, the words "automatic reply mode" (the first prompt information) are displayed in the interface 701 to prompt the user that the mobile phone 1 is in the automatic reply mode. In this way, the user can intuitively see that the current call mode of the mobile phone 1 is the automatic reply mode, which improves the user's call experience.
In one possible implementation, before sending the second voice information to the second electronic device, the method further includes: and displaying second prompt information, wherein the second prompt information is used for prompting the first electronic equipment to be in a manual reply mode.
Illustratively, as shown in (b) of FIG. 7B, the words "manual reply mode" (the second prompt information) are displayed in the interface 701 to prompt the user that the mobile phone 1 is in the manual reply mode.
In one possible implementation manner, displaying the second prompt information specifically includes: responding to the first operation, displaying second prompt information, wherein the second prompt information is used for prompting that the first electronic equipment is in a manual reply mode; the first operation includes an input operation by the first user for a call interface.
In one possible implementation, the first operation includes an operation by the first user on a text input box of the call interface. For example, the first operation may be an operation by which the user starts a manual reply, such as moving the cursor to the text input box 707, clicking the text input box 707, calling up the keyboard of the mobile phone, or typing the first character, which is not limited in the embodiment of the present application.
In this way, the smart AI automatic reply mode and the user's manual reply mode in the first electronic device can be switched adaptively, without the user clicking a mode-switching control, which improves call efficiency.
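The adaptive switch described above can be modeled as a tiny state machine: any input operation on the text box flips the session from auto-reply to manual-reply, with no dedicated mode-switch button. The event names below are invented for illustration.

```python
class CallModeController:
    """Sketch of the adaptive mode switch: touching the text input box
    (or any equivalent 'first operation') moves the call session from the
    smart-AI auto-reply mode into the manual reply mode."""

    # Hypothetical events that count as an input operation on the call interface.
    SWITCH_EVENTS = {"tap_text_box", "keyboard_shown", "first_char_typed"}

    def __init__(self) -> None:
        self.mode = "auto"  # the smart-AI auto-reply mode is the initial state

    def on_event(self, event: str) -> str:
        if self.mode == "auto" and event in self.SWITCH_EVENTS:
            self.mode = "manual"  # adaptive switch, no explicit control needed
        return self.mode
```

Events outside `SWITCH_EVENTS` (scrolling, volume keys, and so on) leave the mode unchanged, matching the idea that only operations aimed at composing a reply trigger the switch.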
In one possible implementation manner, after receiving the first information input by the first user through the call interface, the method further includes: displaying first information on a call interface by using a first user interface UI effect; after sending the first voice information of the first tone color to the second electronic device, the method further includes: displaying second information on the call interface by using a second user interface UI effect, wherein the second information is text information corresponding to the first voice information; wherein the first user interface UI effect is different from the second user interface UI effect.
In this way, the mobile phone 1 can display the automatically replied text content and the manually input text content with different user interface (UI) effects, so that the two are distinguished more clearly, the user can conveniently view and understand the call content, and the user's call experience and call efficiency are improved.
As shown in (c) of FIG. 7B, the content automatically replied by the smart AI of the mobile phone 1 is displayed in the text box 7061 with a black background in one typeface, and the content manually replied by the user is displayed in the text box 7064 with a black background in a different typeface.
In one possible implementation, sending, to the second electronic device, second voice information of a second tone according to the first information, includes: if the operation of the first user for the sending control of the call interface in the first duration is detected, sending second voice information of a second tone to the second electronic equipment according to the first information; the first time length is less than a first threshold.
Illustratively, if an operation on the send control (a second operation) is received within the first duration, the user has completed the input within the first duration and the input is sent to the mobile phone 2. For example, as shown in (c) of FIG. 7B, the interface display module 312 displays the text information input by the user in the text box 7064 in the mobile phone interface 701, and the mobile phone 1 converts the text information into voice information in timbre 2 through the personalized text-to-speech module 313 and sends it to the mobile phone 2.
In one possible implementation, the method further includes: if no operation by the first user on the send control of the call interface is detected within the first duration, sending first voice prompt information to the second electronic device, where the first voice prompt information is used for indicating that the first user is inputting the first information. In this way, the first electronic device can remind the call counterpart that the user is manually typing a reply, preventing the counterpart from hanging up because no reply arrives for a long time.
For example, if no operation on the send control (the second operation) is received within the first duration, the mobile phone 1 sends the first voice prompt information. The first voice prompt information may be displayed in an interface as shown in FIG. 8; for example, the interface display module 312 may display it in the text box 7063 as in (a) of FIG. 8. The first voice prompt information is converted into voice information in timbre 1 through the universal text-to-speech module 314 and sent to the mobile phone 2.
In one possible implementation, the method further includes: if no operation by the first user on the send control of the call interface is detected within a second duration, sending fourth voice information in the first timbre to the second electronic device, where the second duration is longer than the first duration and the fourth voice information is determined by the first electronic device. In this way, the manual reply mode can be switched back to the automatic reply mode automatically, without the user clicking a mode-switching control, which improves call efficiency.
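The two-stage timeout behaviour of these implementations can be sketched as a single decision function. The threshold values below are purely illustrative; the patent only requires that the second duration be longer than the first.

```python
def decide_outgoing_message(elapsed_s: float, send_clicked: bool,
                            first_duration_s: float = 10.0,
                            second_duration_s: float = 30.0) -> str:
    """Return what mobile phone 1 sends, based on how long the user has
    gone without pressing the send control (illustrative thresholds)."""
    if send_clicked and elapsed_s < first_duration_s:
        # User finished typing within the first duration:
        # send the manual reply as voice in timbre 2.
        return "manual_reply_timbre_2"
    if elapsed_s < second_duration_s:
        # First duration expired with no send: play a prompt in timbre 1
        # telling the caller a manual reply is coming, so they do not hang up.
        return "wait_prompt_timbre_1"
    # Second duration expired: fall back to the smart-AI auto reply in timbre 1.
    return "auto_reply_timbre_1"
```

A production implementation would drive this from timers tied to the input session rather than polling an elapsed time, but the three outcomes mirror the three cases in the text above.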
For example, as shown in fig. 9 (b), in the second duration, if the user does not click on the send control of the call interface of the mobile phone 1, the mobile phone 1 is switched from the manual reply mode to the automatic reply mode, and then the fourth voice information of the first tone is sent to the mobile phone 2, where the content of the fourth voice information may be as shown in a text box 7066 in fig. 9 (b).
In one possible implementation, the method further includes at least one of: displaying text information corresponding to the second voice information on the call interface; displaying text information corresponding to the first voice prompt information on the call interface; and displaying text information corresponding to the fourth voice information on the call interface. In this way, the call record can be displayed in text form in the mobile phone's call interface, making it convenient for the user to review.
For example, as shown in fig. 8 (b), the text information of the text box 7061 is text information corresponding to the first voice prompt. As shown in fig. 9 (b), the text information in the text box 7064 is text information corresponding to the second voice information. The text information in the text box 7066 is text information corresponding to the fourth voice information.
In one possible implementation, the method further includes: receiving third voice information from the second electronic device, where the call interface further includes a first control associated with the third voice information; and upon detecting an operation by the first user on the first control, starting a voice playing function. In this way, the user can play the counterpart's message as voice, which improves the user's call experience.
As shown in fig. 10 (b), after the mobile phone 1 receives the voice information 3 of the contact a (an example of the third voice information) from the mobile phone 2, the voice information 3 of the contact a may be played according to the play requirement of the user of the mobile phone 1. The user may click on text box 7062 in reply to handset 2 of contact a in dialog box 706. In response to the user clicking on the text box 7062 (an example of the first control), the mobile phone 1 may play the voice information 3 replied to by the mobile phone 2 corresponding to the content in the text box 7062.
In one possible implementation, in a case where the first information is voice information, the method further includes: in response to an operation by the first user on a second control associated with the voice information in the call interface, converting the voice information into corresponding text information; and displaying the text information on the call interface. In this way, the user can make the call not only through text input but also through voice input, which makes the call more engaging.
For example, as shown in (a) of FIG. 11, the mobile phone 1 may receive the first information by voice input. As in (b) of FIG. 11, the user can make a voice input by long-pressing the voice control 707'. After the user long-presses the voice control 707' shown in (b) of FIG. 11, the voice control 707' may change to the form shown in (c) of FIG. 11, indicating that the user is inputting voice. As in (c) of FIG. 11, the user inputs a voice message, and the content of the voice input may be displayed in the dialog box 706 in the form of a voice box 7067, where the duration of the voice content is 5 s. Optionally, in response to the user clicking the voice box 7067, the mobile phone 1 may play the voice information. As in (d) of FIG. 11, when the user long-presses the voice box 7067, the voice information can be converted into text information and displayed in the text box 7068.
In one possible implementation, the method further includes: displaying a first interface, wherein the first interface comprises tone setting options; and setting the first tone and/or the second tone in response to the first user operating the tone setting option.
For example, as shown in fig. 12B, the mobile phone 1 may preset the first tone to be "known female voice" and the second tone to be "my voice". The first tone color is different from the second tone color, so that the communication counterpart can conveniently distinguish the automatically replied content from the manually replied content. Alternatively, the first tone color and the second tone color may be the same, which is not particularly limited in the embodiment of the present application.
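The timbre-setting option above can be sketched as a small settings object. The default voice names are placeholders standing in for the "female voice" and "my voice" presets in FIG. 12B; per the text, the two timbres may be set to be the same or different.

```python
class TimbreSettings:
    """Sketch of the first-interface timbre options: the first (auto-reply)
    timbre and the second (manual-reply) timbre are configured independently
    (voice names here are illustrative placeholders)."""

    def __init__(self) -> None:
        self.first_timbre = "preset_female_voice"  # used for smart-AI auto replies
        self.second_timbre = "my_voice"            # used for manual replies

    def set_first(self, timbre: str) -> None:
        self.first_timbre = timbre

    def set_second(self, timbre: str) -> None:
        self.second_timbre = timbre
```

With the defaults the two timbres differ, which lets the call counterpart distinguish automatic from manual replies; nothing in the sketch forbids setting both to the same voice, matching the optional case in the text.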
In a second aspect, the present application provides a call method, applied to a second electronic device, including: displaying a call interface, where the call interface includes an identifier of a first user, the first user being a user of the first electronic device, and the call being between the second electronic device and the first electronic device; receiving first voice information in a first timbre from the first electronic device; and receiving second voice information in a second timbre from the first electronic device, where the first timbre is different from the second timbre. Because the second electronic device receives voice information in different timbres, its user can distinguish manually replied information from automatically replied information, which improves the user's call experience and call efficiency.
In one possible implementation, the method further includes: and receiving first voice prompt information sent by the first electronic equipment, wherein the first voice prompt information is used for prompting the first user to input first information, and the first information is text information or voice information corresponding to the second voice information. Thus, the user of the second electronic device can know that the user of the first electronic device is manually inputting the reply information, and the user of the second electronic device is prevented from hanging up the telephone because the user cannot get the reply for a long time.
In one possible implementation, the method further includes: and sending third voice information to the first electronic equipment, wherein the third voice information is voice information input by a second user, and the second user is a user using the second electronic equipment.
In a third aspect, the present application provides an electronic device, comprising: a processor and a memory coupled to the processor, the memory for storing computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to perform the method of any one of the first aspect and the method of any one of the second aspect.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method of any one of the first aspect and the method of any one of the second aspect.
In a fifth aspect, the present application provides a chip system comprising at least one processor and at least one interface circuit, the at least one interface circuit being adapted to perform a transceiving function and to send instructions to the at least one processor, the at least one processor performing the method according to any of the first aspect and the method according to any of the second aspect when the at least one processor executes the instructions.
In a sixth aspect, the application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspect and the method of any of the second aspect.
In a seventh aspect, embodiments of the present application provide a circuitry comprising processing circuitry configured to perform the method of the first aspect or any implementation of the first aspect and the method of any of the second aspects.
For the technical effects corresponding to the second to seventh aspects and any implementation thereof, reference may be made to the first aspect and its implementations; details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a call interface according to an embodiment of the present application;
fig. 2A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2B is a schematic structural diagram of another electronic device according to an embodiment of the present application;
Fig. 2C is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present application;
fig. 4A is a schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 4B is a schematic diagram of another electronic device interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another electronic device interface according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another electronic device interface according to an embodiment of the present application;
FIG. 7A is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 7B is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 12A is a schematic illustration of another interface provided by an embodiment of the present application;
FIG. 12B is a schematic illustration of another interface provided by an embodiment of the present application;
FIG. 13 is a schematic view of another interface provided by an embodiment of the present application;
FIG. 14 is a schematic view of another interface provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 16 is a schematic illustration of another interface provided by an embodiment of the present application;
FIG. 17 is a flow chart of a call method according to an embodiment of the present application;
FIG. 18 is a schematic diagram of another call interface according to an embodiment of the present application;
FIG. 19 is a schematic diagram of another call interface provided by an embodiment of the present application;
FIG. 20 is a schematic diagram of another module according to an embodiment of the present application;
Fig. 21 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The following describes a call method provided by the embodiment of the present application in detail with reference to the accompanying drawings.
The terms "comprising" and "having" and any variations thereof, as used in the description of embodiments of the application, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone.
The AI intelligent call technology can realize functions such as: recording and screen recording during a call; automatically transcribing the counterpart's voice to text and displaying it on the screen of the electronic device; automatic replies by the electronic device, with the reply text displayed on the screen; manual text replies input by the user, with the reply text displayed on the screen; and converting both automatically and manually replied text into voice and playing it to the call counterpart. In addition, the electronic device can switch at any time between automatic reply and manual text reply.
In the related art, when a user answers a call automatically using smart AI technology, the call counterpart cannot tell whether the played reply content was automatically generated by the electronic device or manually typed by the user. Moreover, switching between the automatic reply mode and the manual text reply mode must be done manually, which is inconvenient and degrades the user experience.
As shown in (a) of FIG. 1, the mobile phone smart AI call interface 301 includes an automatic answer button 302, a phone caption button, a manual answer button 303, and a dialog box 304. When the user's mobile phone automatically answers the call through the smart AI, the call mode is the automatic answer mode, and the words "automatic answer mode" are displayed in the dialog box 304; the smart AI can reply to the counterpart automatically. If the user wants to switch to the manual answer mode, as in (b) of FIG. 1, the user needs to tap the manual answer button 303 in the mobile phone interface; in response, the call mode is switched from the automatic answer mode to the manual answer mode, the words "manual answer mode" are displayed in the dialog box 304, and a text input box 305 and a text send button 306 are displayed below the dialog box. As in (c) of FIG. 1, the user inputs the text "What is something?" and clicks the send button 306; in response, the mobile phone converts the text into voice and plays it to the counterpart. As in (d) of FIG. 1, the manually replied content and the content automatically replied by the smart AI are displayed in the dialog box in text boxes with the same UI display effect, and the counterpart cannot tell from the answering voice that the call mode has been switched from automatic answering to manual answering.
To solve the above problems, an embodiment of the present application provides a call method and an electronic device. In the method, when the electronic device makes a call using the smart AI, the voice corresponding to automatically replied content and the voice corresponding to manually replied content can be played to the other party in the call using voices with different timbres. The electronic device may also display the automatically replied text content and the text content manually entered by the user with different user interface (UI) effects. In addition, the smart AI automatic reply mode and the manual reply mode in the electronic device can be switched adaptively, improving the user's call experience and call efficiency.
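The adaptive switching and timbre separation described above can be illustrated with a minimal sketch (hypothetical logic for illustration only, not the patent's implementation): the call starts in the automatic answer mode, touching the text input implies a manual reply, and each reply is tagged with a timbre so that the two kinds of replies sound different to the other party.

```python
# Illustrative sketch only: class and method names are assumptions,
# not taken from the patent's implementation.

class CallModeController:
    AUTO, MANUAL = "auto", "manual"

    def __init__(self):
        self.mode = self.AUTO  # smart AI starts in automatic answer mode

    def on_user_typing(self):
        # The user touching the text input box implies a manual reply.
        self.mode = self.MANUAL

    def on_reply_sent(self):
        # After the manual reply is sent, fall back to automatic answering.
        self.mode = self.AUTO

    def timbre_for_reply(self):
        # Automatic replies use a generic AI timbre; manual replies use the
        # user's personalized timbre, so the far end can tell them apart.
        return "generic_ai" if self.mode == self.AUTO else "personalized"
```

Under this toy policy, no explicit mode button is needed: the act of typing itself triggers the switch, which is one way to realize the "adaptive switching" the method describes.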
In one possible implementation, the embodiment of the present application may be applied to a system composed of a plurality of electronic devices, where the system may be as shown in fig. 2A. The system comprises a first electronic device 100 and a second electronic device 200. The first electronic device 100 and the second electronic device 200 may implement the above-described call method. The first electronic device 100 and the second electronic device 200 may be a personal computer (PC), a mobile phone, a tablet (Pad), a notebook computer, a desktop computer, a computer with a transceiving function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, a wearable device, a vehicle-mounted device, or the like. The embodiment of the application does not limit the specific form of the electronic device.
In some embodiments, the first electronic device 100 may interact with the second electronic device 200 through an operator network to implement a call, for example through a 4th generation (4G) mobile communication system such as a long term evolution (LTE) system, a 5th generation (5G) mobile communication system such as a new radio (NR) system, or a future communication system such as a 6th generation (6G) mobile communication system.
In other embodiments, the first electronic device 100 may interact with the second electronic device 200 over a non-operator network to implement a call. Optionally, the non-operator network may include, but is not limited to, a wireless fidelity (Wi-Fi) network. Embodiments of the present application are not limited to a particular type and standard of the operator network and the non-operator network.
Alternatively, the second electronic device 200 may initiate a call to the first electronic device 100, and the first electronic device 100 may also initiate a call to the second electronic device 200.
Fig. 2B is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application. The electronic device may be the first electronic device and/or the second electronic device described above. The electronic device comprises at least one processor 201, communication lines 202, a memory 203 and at least one communication interface 204. Wherein the memory 203 may also be included in the processor 201.
The processor 201 may be a central processing unit (CPU), but may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Communication line 202 may include a pathway to transfer information between the aforementioned components.
A communication interface 204 for communicating with other devices. In the embodiment of the present application, the communication interface may be a module, a circuit, a bus, an interface, a transceiver, or other devices capable of implementing a communication function, for communicating with other devices. Alternatively, when the communication interface is a transceiver, the transceiver may be a separately provided transmitter that is operable to transmit information to other devices, or a separately provided receiver that is operable to receive information from other devices. The transceiver may also be a component that integrates the functions of transmitting and receiving information, and embodiments of the present application are not limited to the specific implementation of the transceiver.
The memory 203 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory, among others. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM), or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may be standalone and coupled to the processor 201 via the communication line 202. The memory 203 may also be integrated with the processor 201.
The memory 203 is used for storing computer-executable instructions for implementing the scheme of the present application, and their execution is controlled by the processor 201. The processor 201 is configured to execute the computer-executable instructions stored in the memory 203, thereby implementing the call method provided in the following embodiments of the present application.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application code, instructions, computer programs, or other names, and the embodiments of the present application are not limited in detail.
In a particular implementation, as one embodiment, the processor 201 may include one or more CPUs, such as CPU0 and CPU1 in fig. 2B.
In a particular implementation, as one embodiment, an electronic device may include multiple processors, such as processor 201 and processor 205 in fig. 2B. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The electronic device may be a general-purpose device or a special-purpose device, and the embodiment of the application is not limited to the type of electronic device.
It should be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the first electronic device. In other embodiments of the application, the first electronic device may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In the embodiment of the application, the first electronic device and the second electronic device are described as mobile phones, but the form, the function and the like of the first electronic device are not limited. Referring to fig. 2C, a schematic structural diagram of a mobile phone according to an embodiment of the present application is provided. The method in the following embodiments may be implemented in a mobile phone having the above-described hardware structure.
As shown in fig. 2C, the cellular phone may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a camera 193, a display 194, and the like. Optionally, the mobile phone may further include a mobile communication module 150, etc.
It should be understood that the structure illustrated in this embodiment is not limited to a specific configuration of the mobile phone. In other embodiments, the handset may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can be a neural center and a command center of the mobile phone. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a USB interface, and the like.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 can also supply power to the mobile phone through the power management module 141 while charging the battery 142. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 may also receive input from the battery 142 to power the handset.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
When the mobile phone includes the mobile communication module 150, the mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the mobile phone. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplifying on the received electromagnetic waves, and transmit the processed signals to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110. In some embodiments of the present application, the mobile phone 1 and the mobile phone 2 may communicate through the mobile communication module 150.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (WLAN) (e.g., a Wi-Fi network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), NFC, infrared (IR), etc. applied to the mobile phone. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2. In some embodiments of the present application, the first electronic device and the second electronic device may communicate via the wireless communication module 160.
In some embodiments, the antenna 1 and the mobile communication module 150 of the mobile phone are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the mobile phone can communicate with a network and other devices through wireless communication technology. The wireless communication techniques can include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The cell phone implements display functions through the GPU, the display 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini LED, a micro LED, a micro-OLED, quantum dot light emitting diodes (QLED), or the like. In some embodiments, the mobile phone may include 1 or N display screens 194, N being a positive integer greater than 1. In some embodiments of the present application, the display 194 displays a call interface as shown in fig. 7A, 7B, etc. during a call.
The cell phone may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like. In some embodiments, the handset may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect to an external memory card to extend the memory capabilities of the handset. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the mobile phone and performs data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store the operating system and an application program required for at least one function (such as a sound playing function, an image playing function, etc.). The storage data area may store data created during use of the mobile phone (e.g., audio data, phonebook, etc.). In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The handset may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110. In some embodiments of the present application, during a call, the audio module 170 of the first electronic device may convert the sound collected by the microphone 170C into corresponding voice information. The voice information from the second electronic device may also be converted to audio information.
The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The terminal 100 can listen to music or make hands-free calls through the speaker 170A. In some embodiments of the present application, after the electronic device receives voice information from the peer device during a call, the voice information may be played through the speaker 170A or the earpiece.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the terminal 100 receives a telephone call or a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
Microphone 170C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak close to the microphone 170C, inputting a sound signal into the microphone 170C. The terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal 100 may be further provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify the source of sound, implement directional recording functions, etc. In some embodiments of the present application, during a call, the first electronic device may collect the user's voice through the microphone 170C, and the sound is processed by the audio module 170, the processor, etc. to obtain corresponding voice information. The electronic device may send the voice information to the peer device.
Fig. 3 is a schematic diagram of another structure of an electronic device according to an embodiment of the present application. The electronic device may be the first electronic device and/or the second electronic device described above. The electronic device includes a call module 310, a semantic understanding module 311, an interface display module 312, a personalized text-to-speech module 313, a general text-to-speech module 314, a speech-to-text module 315, a text input module 316, a personalized timbre selection module 317, a personalized timbre generation module 318, and the like.
Optionally, the call module 310 may be configured to monitor a call state such as an incoming call, and call with the second electronic device.
The semantic understanding module 311 is configured to understand the call information of the second electronic device from the call module 310, and generate reply information according to a preset reply template and/or the understood semantics.
The interface display module 312 is configured to display interface information of the first electronic device. For example, the condition of the call interface of the first electronic device, the call mode information of the first electronic device, the information automatically replied by the first electronic device according to the understanding of the semantic understanding module 311, the information manually replied by the user received by the first electronic device, the reply information of the second electronic device, and the like. Optionally, the call mode includes an automatic answer mode or a manual answer mode.
The personalized text-to-speech module 313 is configured to convert text information received by the first electronic device into speech information with personalized tone. For example, the text information manually replied by the user on the interface display module is converted into the voice information with personalized tone.
The general text-to-speech module 314 is configured to convert text information received by the first electronic device into speech information with a general timbre. For example, the text information automatically replied by the smart AI displayed on the interface display module is converted into voice information with the general AI timbre.
The voice-to-text module 315 is configured to convert voice information from the second electronic device into text information, and display the text information in the interface display module of the first electronic device.
The first electronic device receives text information input by the user through the text input module 316 and displays the text information in the interface display module of the first electronic device. The text input module 316 is further configured to trigger the first electronic device to enter the user manual reply mode. For example, when the first electronic device detects that the user clicks the text input module 316, the first electronic device is triggered to enter the user manual reply mode. Optionally, the text input module 316 is further configured to detect whether the user has completed text input.
A personalized timbre selection module 317 for the first electronic device to select a personalized timbre.
The personalized timbre generating module 318 is configured to generate personalized timbres in the first electronic device, for selection by the personalized timbre selection module 317.
Fig. 3 illustrates only exemplary functional modules and connection relationships between modules in an electronic device, and it should be understood that an electronic device may include more or fewer modules than illustrated, or may combine certain modules, or may split certain modules, or may have different arrangements of modules, connection relationships. The illustrated modules may be implemented in hardware, software, or a combination of software and hardware.
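As a rough illustration of how the modules of fig. 3 might cooperate (all function names and the reply strings below are assumptions for illustration, not the patent's code), incoming voice is transcribed, understood, answered, and synthesized with a timbre that depends on the reply mode:

```python
# Hypothetical stand-ins for the modules in fig. 3 (names are illustrative).

def transcribe(voice):            # stands in for speech-to-text module 315
    return voice["text"]

def understand_and_reply(text):   # stands in for semantic understanding module 311
    # A template lookup plus a fallback, mimicking "preset reply template
    # and/or understood semantics".
    templates = {"who is this?": "Hello, I am the owner's AI assistant."}
    return templates.get(text.lower(), "He is unavailable; please tell me your message.")

def synthesize(text, timbre):     # stands in for TTS modules 313/314
    return {"timbre": timbre, "text": text}

def handle_incoming(voice, mode="auto", manual_text=None):
    caller_text = transcribe(voice)
    if mode == "auto":
        # general AI timbre for automatic replies (general TTS module 314)
        return synthesize(understand_and_reply(caller_text), "generic_ai")
    # personalized timbre for the user's typed reply (personalized TTS module 313)
    return synthesize(manual_text, "personalized")
```

The key design point the sketch captures is that the timbre is chosen per reply rather than per call, which is what lets the other party hear the hand-off between the AI and the user.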
The following description will be given by taking a call scenario between the mobile phone 1 of the user and the mobile phone 2 of the contact a as an example.
As shown in fig. 4A, the handset 1 may display a main interface 401. The main interface 401 may be, but is not limited to, an interface displayed after the mobile phone 1 is turned on, and may also be other interfaces, such as a background interface, for example, a negative screen. The main interface 401 may include various applications (application, APP) installed on the mobile phone 1. For example, icons of APPs of the mobile phone 1, such as a clock, a calendar, a gallery, a memo, an application mall, settings, a music player, a calculator, sports health, weather, a camera, a phone, information, an address book 402, and the like, and, for example, third-party APPs, such as a WeChat, a payment device, a network game, and the like, may be included.
In some examples, handset 1 may initiate a call. If the user wants to initiate a call with another user, he can click on the address book 402 in the main interface 401 of the mobile phone 1. In response to this operation, the handset 1 may jump from the main interface 401 to the contact selection interface 403, as shown in fig. 4B. Icons of the contacts may be included in the contact selection interface 403, for example, including "contact a"404, "contact B"405, "contact C", and so on, so that the user may select a contact that needs to be called in the contact selection interface 403. Alternatively, if the user wishes to initiate a call with contact A, the contact may be selected in contact selection interface 403 and the initiate call 406 clicked. In response to this operation, handset 1 may initiate a call to handset 2 of the "contact a" 404.
In some examples, handset 1 may receive a call. If the user receives an incoming call from another user (e.g., contact a), the cell phone 1 interface may display an interface 501 as shown in fig. 5. Alternatively, interface 501 may include the name of incoming user contact A, an answer button 502, a hang-up button 503, a reply button 504, and a smart AI answer button 505.
In one scenario, if the user can answer the call, then the answer button 502 shown in FIG. 5 is clicked directly. As shown in fig. 6, in response to the user clicking the answer button 502, the mobile phone 1 may display a call interface 601, and the user may make a call with the contact a. Optionally, the call interface 601 may include an icon of the selected contact a, and may further include an icon related to a call, for example, including: call duration, record, wait, video call, mute, contacts, speaker, hang-up 602, more buttons 603, etc. Thereafter, if the user wishes to end the call, hang-up button 602 may be clicked. In response to this operation, the handset 1 can end the call with the handset 2 of contact a.
In another scenario, if it is not convenient for the user to answer the call, the smart AI answer button 505 in the mobile phone interface 501 shown in fig. 5 may be clicked. In response to the user clicking the smart AI answer button 505, the mobile phone 1 may perform a smart AI call. For example, the mobile phone 1 may automatically communicate with the mobile phone 2 through the smart AI technology, or the mobile phone 1 may receive reply information input by the user in the mobile phone 1 and convert the information into voice to send to the mobile phone 2.
Fig. 7A shows an example in which the mobile phone 1 performs a smart AI call. In response to the user clicking the smart AI answer button 505 of the interface 501 shown in fig. 5, the mobile phone 1 answers the call from the mobile phone 2. During the call, as shown in fig. 7A (a), the mobile phone 1 may display the smart AI call interface 701. Optionally, the interface 701 includes an icon of contact A, and may also include icons related to the call, such as: call duration, a hang-up phone button 702, a microphone button 703, a speaker button 704, a back call button 705, a dialog box 706, a text input box 707, a text send button 708, a voice input button 709, a card 710, and the like.
Optionally, the icon of contact a, the call duration, the hang-up phone button 702, the microphone button 703, the speaker button 704, and the back call button 705 are located in the card 710, and the card 710 may be expanded to hover over the interface 701 or may be contracted to hover over the interface 701. As shown in fig. 7A (a), the card 710 expands to hover over the interface 701 and the user can click on the blank of the expanded card 710. In response to a user clicking on the blank of the expanded card 710, the card 710 may collapse and hover over the interface 701, as shown in fig. 7A (b). Alternatively, the user may click on the blank of the collapsed card 710, and the card 710 may be expanded to hover over the interface 701 in response to the user clicking on the blank of the collapsed card 710.
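The expand/collapse behavior of the card 710 amounts to a simple toggle on each tap of the card's blank area; a minimal illustrative sketch (not the handset's UI code) could be:

```python
# Illustrative toggle only; class and method names are assumptions.

class FloatingCard:
    def __init__(self):
        self.expanded = True  # card starts expanded, hovering over interface 701

    def on_blank_tap(self):
        # each tap on the card's blank area flips the hover state
        self.expanded = not self.expanded
        return self.expanded
```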
In the embodiment of the present application, the text automatically replied by the smart AI of the mobile phone 1, the text manually replied by the user, and the text converted from the voice of the user (contact A) of the mobile phone 2 may all be displayed in the dialog box 706. Optionally, the user may manually enter the text to be replied to contact A in the text input box 707.
For example, in some scenarios, after the mobile phone 1 answers the call through the smart AI, the words "auto-answer mode" may be displayed in the dialog box 706 of the interface 701, indicating that the mobile phone 1 is now in the smart AI auto-answer mode. The mobile phone 1 may itself determine what text should be replied to contact A and display the automatically replied text in the dialog box 706. For example, as shown in fig. 7B (a), the mobile phone 1 may display the content "Hello, I am the owner's intelligent AI assistant. He is not available to answer the phone right now; you can tell me anything you need." in the text box 7061 in the dialog box 706. The mobile phone 1 converts the automatically replied text into voice information of tone 1 and sends it to the mobile phone 2 of contact A. The mobile phone 2 receives the voice information and plays it using tone 1, so contact A hears, in tone 1: "Hello, I am the owner's intelligent AI assistant. He is not available to answer the phone right now; you can tell me anything you need."
Optionally, the content of the automatic reply may be a reply template preset by the user, or reply content automatically generated by the intelligent AI according to the semantics of the calling party's message. For example, the user may set the intelligent AI so that, after answering the incoming call, it first replies to contact A with the preset reply template, e.g. "Hello, I am the owner's intelligent AI assistant. He is not available to answer the phone right now; you can tell me anything you need." Thereafter, reply content is automatically generated according to the semantics of contact A's messages. For example, if contact A asks "What time is it?", the intelligent AI may reply "It is now 8 o'clock Beijing time" according to the time on the mobile phone 1.
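The template-then-semantics policy above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function and template names are assumptions, and a crude keyword match stands in for the semantic analysis the intelligent AI would actually perform.

```python
from datetime import datetime
from typing import Optional

# Hypothetical preset reply template (would be configurable by the user).
PRESET_TEMPLATE = ("Hello, I am the owner's intelligent AI assistant. "
                   "He is not available to answer the phone right now; "
                   "you can tell me anything you need.")

def auto_reply(caller_message: Optional[str], first_reply: bool) -> str:
    """Return the text the intelligent AI sends to the caller."""
    if first_reply:
        # The first reply always uses the preset template.
        return PRESET_TEMPLATE
    # A crude keyword match stands in for real semantic analysis of the
    # caller's message; a production assistant would use an NLU model here.
    if caller_message and "what time" in caller_message.lower():
        return "It is now {} Beijing time.".format(datetime.now().strftime("%H:%M"))
    return "I have noted that down and will pass it on to the owner."
```

The replied text would then be fed to the TTS step and synthesized in tone 1 before being sent to the mobile phone 2.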
Optionally, after receiving contact A's call voice from the mobile phone 2, the mobile phone 1 may recognize the call voice, convert the voice content into text, and display the text in the dialog box 706 of the interface 701. For example, the mobile phone 1 recognizes that contact A's voice content is "Hello? Why aren't you answering the phone?" and converts it into the text "Hello? Why aren't you answering the phone?". As shown in fig. 7B (a), the mobile phone 1 may display the converted text information in the text box 7062 of the dialog box 706. In this way, the user of the mobile phone 1 can view contact A's call content through the dialog box 706.
Optionally, the text box 7061 and the text box 7062 are displayed with different UI effects in the dialog box 706 to distinguish the content replied by the user of the mobile phone 1 from the content replied by contact A of the mobile phone 2. Illustratively, the content replied by the user of the mobile phone 1 is displayed in the Song typeface in the text box 7061 with a black background, and the content replied by contact A of the mobile phone 2 is displayed in the Song typeface in the text box 7062 with a white background. This makes it convenient for the user to view and understand the conversation content.
In other scenarios, if the user of the mobile phone 1 wants to reply to contact A manually, the user may directly type text in the text input box of the call interface. As shown in fig. 7B (a), after viewing contact A's message "Hello? Why aren't you answering the phone?", the user of the mobile phone 1 wishes to reply manually and may input the information to be replied to contact A (e.g. "Sorry, I am in a meeting. Is there anything you need?") in the text input box 707. Optionally, in response to the user's manual-reply operation (one example of the first operation), the mobile phone 1 switches from the automatic reply mode to the manual reply mode. For example, as shown in fig. 7B (a), when the mobile phone 1 detects that the user clicks the text input box 707, calls up the keyboard, and prepares to type, the mobile phone 1 may automatically switch to the manual answer mode and may display the words "manual answer mode" in the dialog box 706, indicating that the mobile phone 1 is in the manual answer mode. The first operation may also be an operation of placing the cursor in the text input box 707, typing the first character, or the like, which is not limited in the embodiment of the present application.
In some examples, as shown in fig. 7B (b), if the duration of the user's manual input is less than a first threshold (e.g. 10 s), the user may send the reply text after finishing it (one example of a second operation). For example, the user clicks the send button 708 to trigger the mobile phone 1 to send the entered information into the dialog box 706. As shown in fig. 7B (c), the mobile phone 1 may display the manually replied content "Sorry, I am in a meeting. Is there anything you need?". The mobile phone 1 converts the manually replied text into voice information of tone 2 and sends it to the mobile phone 2 of contact A. The mobile phone 2 receives the voice information and plays it using tone 2, so contact A hears, in tone 2: "Sorry, I am in a meeting. Is there anything you need?". The duration of the user's manual input may be the time difference from when the user clicks the text input box 707 to when the user clicks the send button 708 after finishing the input.
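The manual-input duration defined above (from focusing the text input box to tapping the send button) could be measured with a simple monotonic timer. This is an illustrative sketch; the class and method names are assumptions, not from the patent.

```python
import time

class ManualInputTimer:
    """Measures the time from focusing the text box to tapping send."""

    def __init__(self) -> None:
        self._t0 = None

    def on_focus_input_box(self) -> None:
        # User taps the text input box (e.g. control 707): start timing.
        self._t0 = time.monotonic()

    def on_send(self) -> float:
        # User taps the send button (e.g. control 708): return elapsed seconds.
        return time.monotonic() - self._t0
```

A monotonic clock is used rather than wall-clock time so that the measured duration is unaffected by system clock adjustments during the call.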
Optionally, the text box 7064 and the text box 7061 are displayed in the dialog box 706 with different UI effects (such as different fonts, different text box shapes, and different colors) to distinguish the content automatically replied by the smart AI of the mobile phone 1 from the content manually replied by the user. For example, both text boxes may use the regular-script typeface, with a white background for the text box automatically replied by the intelligent AI and a green background for the text box manually replied by the user. As another example, the text box automatically replied by the intelligent AI may be rectangular while the text box manually replied by the user is bubble-shaped, and so on. Thus, when the user reviews the call content later, it is easy to understand, which improves the user experience. Illustratively, the content automatically replied by the smart AI of the mobile phone 1 is displayed in the Song typeface in the text box 7061 with a black background, and the content manually replied by the user is displayed in the regular-script typeface in the text box 7064 with a black background.
It should be understood that the above UI effects of the text box automatically replied by the intelligent AI and the text box manually replied by the user are only examples; these text boxes may also use other UI effects, which are not limited by the present application.
In other examples, if the duration of the user's manual input is greater than the first threshold (e.g. 10 s), as shown in fig. 8 (a), the mobile phone 1 may automatically reply with the prompt information "The owner is typing a reply manually; it is rather long, please wait a moment." and display the prompt in the text box 7063 in the dialog box 706. Optionally, the prompt information is displayed in the Song typeface in the text box 7063 with a black background. The mobile phone 1 converts the prompt information into voice information of tone 1 and sends it to the mobile phone 2 of contact A. The mobile phone 2 receives the voice information and plays it using tone 1, so contact A hears the voice prompt in tone 1: "The owner is typing a reply manually; it is rather long, please wait a moment." This reminds contact A and prevents contact A from hanging up because no reply arrives for a long time while the user is manually entering the reply information.
Optionally, if the duration of the user's manual input exceeds a second threshold (e.g. 20 s), the mobile phone 1 may automatically send the prompt "The owner is typing a reply manually; it is rather long, please wait a moment." to the mobile phone 2 again. The number of automatic reminders is not specifically limited in the embodiment of the present application.
Optionally, if the user finishes the manually entered information within a third threshold (e.g. 60 s) and clicks the send button 708, the manually replied information is sent into the dialog box 706. As shown in fig. 8 (b), the mobile phone 1 may display the manually replied content "Sorry, I am in a meeting. Is there anything you need?". The mobile phone 1 converts the manually replied text into voice information of tone 2 and sends it to the mobile phone 2 of contact A. The mobile phone 2 receives the voice information and plays it using tone 2, so contact A hears, in tone 2: "Sorry, I am in a meeting. Is there anything you need?".
According to the above scheme, the mobile phone 2 can play the information automatically replied by the intelligent AI of the mobile phone 1 and the information manually replied by the user of the mobile phone 1 in different tones, so that the user of the mobile phone 2 (e.g. contact A) can distinguish whether the content replied by the mobile phone 1 was replied automatically by the intelligent AI or manually by the user. When contact A judges from the tone that the content replied by the mobile phone 1 was manually replied by the user, contact A knows that the user, rather than the intelligent AI, is currently talking, which improves the call efficiency and the call experience of both parties.
Optionally, the mobile phone 1 continues to recognize the call voice of contact A of the mobile phone 2, converts the voice content into text, and displays it in the dialog box 706 of the interface 701. For example, the mobile phone 1 recognizes that contact A's voice content is "Shall we go out to play sometime?" and converts it into the corresponding text. As shown in fig. 9 (a), the mobile phone 1 may display the converted text information in the text box 7065 of the dialog box 706. In this way, the user of the mobile phone 1 can continue to view contact A's call content through the dialog box 706.
In another embodiment, the user clicks the text input box 707 and calls up the keyboard within the first threshold (e.g. 10 s), but does not complete the text input for a long time, for example, does not finish the manual input and click the send button 708 within the third threshold (e.g. 60 s) so that the manually replied information is sent into the dialog box 706; or the user does not click the text input box 707 and call up the keyboard within the first threshold at all. In these cases, to avoid making the other party wait too long, the mobile phone 1 may automatically switch from the manual reply mode back to the intelligent AI automatic reply mode. The third threshold is greater than the first threshold.
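The timer policy described in the last few paragraphs can be summarized in one decision function. This is a minimal sketch assuming the 10 s / 20 s / 60 s example thresholds from the text; the constant, function, and action names are illustrative.

```python
# Hypothetical constants mirroring the first/second/third thresholds above.
FIRST_THRESHOLD_S = 10    # send the "owner is typing" reminder
SECOND_THRESHOLD_S = 20   # repeat the reminder
THIRD_THRESHOLD_S = 60    # fall back to intelligent AI auto-reply mode

def manual_reply_action(elapsed_s: float, send_clicked: bool) -> str:
    """Decide what the phone does while the user composes a manual reply."""
    if send_clicked and elapsed_s <= THIRD_THRESHOLD_S:
        return "send_manual_reply"        # converted to tone 2 voice for the caller
    if elapsed_s > THIRD_THRESHOLD_S:
        return "switch_to_auto_reply"     # avoid keeping the caller waiting
    if elapsed_s > SECOND_THRESHOLD_S:
        return "send_reminder_again"
    if elapsed_s > FIRST_THRESHOLD_S:
        return "send_reminder"
    return "keep_waiting"
```

The checks run from the largest threshold down so that each elapsed time maps to exactly one action.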
As shown in fig. 9 (b), after receiving the message "Shall we go out to play sometime?" from the mobile phone 2, the user of the mobile phone 1 does not reply manually for a long time. To avoid making contact A of the mobile phone 2 wait, the mobile phone 1 may automatically switch to the auto-reply mode in order to respond to the mobile phone 2 quickly. In some examples, the mobile phone 1 may display the words "auto-answer mode" again in the dialog box 706 of the interface 701, indicating that the mobile phone 1 is in the intelligent AI auto-answer mode. The mobile phone 1 may determine the text to be replied to contact A according to the content received from the mobile phone 2 and display the automatically replied text in the dialog box 706. For example, as shown in fig. 9 (b), according to the content of the text box 7065, the mobile phone 1 may display the automatically replied content "Please send the time and place, and I will note it down for you." in the text box 7066 in the dialog box 706. The mobile phone 1 converts the automatically replied text into voice information of tone 1 and sends it to the mobile phone 2 of contact A. The mobile phone 2 receives the voice information and plays it using tone 1, so contact A hears, in tone 1: "Please send the time and place, and I will note it down for you." The intelligent AI assistant of the mobile phone 1 may then continue to communicate with contact A until the call ends, or the user may intervene in the call again and enter the manual answer mode.
Optionally, in some scenarios, if the user wants to listen to contact A's voice information during the intelligent AI call, the mobile phone 1 may play contact A's voice information separately. Illustratively, as shown in fig. 10 (a), the user clicks the speaker button 704 in the interface 701, and in response, the mobile phone 1 may turn on the function of playing the other party's voice. As shown in fig. 10 (b), after this function is turned on, the information of contact A received by the mobile phone 1 can be played in real time.
Alternatively, in other scenarios, if the speaker button 704 in the interface 701 is turned off, as shown in fig. 10 (c), the user may click the text box 7062 of contact A's reply in the dialog box 706. In response to the user clicking the text box 7062, the mobile phone 1 may play the voice information of contact A corresponding to the content in the text box 7062.
It should be understood that contact A's voice information may be played in other ways. For example, the user may long-press the text box 7062, and in response, the mobile phone 1 may play the voice information of contact A corresponding to the content in the text box 7062. The embodiments of the present application are not limited in this regard.
In one embodiment, the user may input the content to be replied by voice. Illustratively, as shown in fig. 11 (a), the user clicks the microphone button 703 in the interface 701, and in response, the microphone of the mobile phone 1 is turned on so that voice input can be performed. As shown in fig. 11 (b), the user may click the voice input button 709. In response, a voice control 707' is presented at the position of the text input box 707, through which the user can make a voice input, for example by long-pressing the voice control 707'. Illustratively, after the user long-presses the voice control 707' shown in fig. 11 (b), the voice control 707' may change to the form shown in fig. 11 (c), indicating that the user is inputting voice. As shown in fig. 11 (c), the user inputs "Sorry, I am in a meeting. Is there anything you need?". The content of the user's voice input may be displayed in the dialog box 706 in the form of a voice box 7067, in which the duration of the voice content, 5 s, is shown. Optionally, in response to the user clicking the voice box 7067, the mobile phone 1 may play the voice information.
Optionally, the user may long-press the voice box to convert the voice into text and display it in a text box. As shown in fig. 11 (d), the user long-presses the voice box 7067 to convert the voice information into text information; a text box 7068 corresponding to the voice box 7067 is displayed in the dialog box 706, containing the text "Sorry, I am in a meeting. Is there anything you need?". Optionally, the mobile phone 1 may send the voice information input by the user to the mobile phone 2 of contact A, and the mobile phone 2 receives the voice information and plays it using tone 2, so contact A hears, in tone 2: "Sorry, I am in a meeting. Is there anything you need?".
Optionally, the mobile phone 1 may also play all the information exchanged between contact A of the mobile phone 2 and the user of the mobile phone 1 during the call. The embodiments of the present application are not limited in this regard.
In one embodiment, if the user wants to hang up on contact A, the user may click the hang-up button 702 in the interface 701 of the mobile phone 1, and in response, the mobile phone 1 may end the call with the mobile phone 2.
If the user wants to talk with contact A by voice, the user may click the return-to-call button 705 in the interface 701 of the mobile phone 1, and in response, the mobile phone 1 may conduct a normal voice call with the mobile phone 2. Illustratively, the mobile phone 1 may display a call interface 601 such as that shown in fig. 6, and the interface 601 may not include a dialog box 706 such as that shown in fig. 7B.
In one embodiment, the mobile phone 1 uses different timbres to send the voice information automatically replied by the intelligent AI and the voice information manually replied by the user. Optionally, the mobile phone 1 may convert the automatically replied text information or the text information manually replied by the user into voice information of the corresponding timbre and send it to the mobile phone 2, and the mobile phone 2 receives the voice information and plays it to the user of the mobile phone 2 (e.g. contact A). For example, the mobile phone 1 converts the automatically replied text "Hello, I am the owner's intelligent AI assistant. He is not available to answer the phone right now; you can tell me anything you need." into voice information of the corresponding tone 1 and sends it to the mobile phone 2. The mobile phone 2 receives and plays the voice information of tone 1, and contact A hears it in tone 1. Likewise, the mobile phone 1 converts the manually replied text "Sorry, I am in a meeting. Is there anything you need?" into voice information of the corresponding tone 2 and sends it to the mobile phone 2. The mobile phone 2 receives and plays the voice information of tone 2, and contact A hears it in tone 2.
In this way, the mobile phone 1 uses tone 1 to send the voice information automatically replied by the intelligent AI and tone 2 to send the voice information manually replied by the user, so that contact A can judge, from the different timbres, whether the received voice information was replied automatically by the intelligent AI or manually by the user, which helps improve call efficiency.
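The timbre routing described above reduces to a small selection step before synthesis. The sketch below is illustrative only: the constant names, the payload shape, and the function name are assumptions, and the dictionary merely stands in for the actual TTS call and transmission to the mobile phone 2.

```python
TONE_AI = "tone_1"     # timbre for intelligent AI auto-replies
TONE_USER = "tone_2"   # timbre for the user's manual replies

def build_voice_payload(text: str, is_auto_reply: bool) -> dict:
    """Stand-in for the TTS step: pick the timbre for the reply text, then
    package the utterance that would be synthesized and sent to the caller."""
    return {"tone": TONE_AI if is_auto_reply else TONE_USER, "text": text}
```

Because the timbre travels with every utterance, the caller's phone needs no extra signaling to distinguish AI replies from the owner's manual replies.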
Optionally, the user may define, on the mobile phone 1, tone 1 corresponding to the voice information automatically replied by the intelligent AI and tone 2 corresponding to the voice information manually replied by the user. Illustratively, tone 1 and tone 2 may be generated by a text-to-speech (TTS) engine of the mobile phone 1. The mobile phone 1 may also record the user's own voice as tone 2 corresponding to the voice information manually replied by the user.
As shown in fig. 12A (a), the user may click the icon of the settings application on the mobile phone 1 to trigger the mobile phone 1 to jump to the interface 801 shown in fig. 12A (b). The interface 801 displays a WLAN control, a Bluetooth control, a mobile network control, an intelligent AI control 802, and the like.
The user may click the intelligent AI control 802 of the interface 801, and in response, the mobile phone 1 may jump to the interface 803 shown in fig. 12A (c). The interface 803 displays an intelligent AI call timbre selection control 804 and a plurality of controls related to the intelligent AI, such as an intelligent AI scene control 805, an intelligent AI search control, and an intelligent AI suggestion control.
The user may click the intelligent AI call timbre selection control 804 in the interface 803, and in response, the mobile phone 1 may jump to the interface 806. As shown in fig. 12A (d), the interface 806 is an intelligent AI call timbre selection interface and may include an intelligent AI timbre selection button 807 and a user timbre selection button 808. The user may click the intelligent AI timbre selection button 807 of the interface 806. In response, the mobile phone 1 may jump to the timbre interface 809 shown in fig. 12A (e). The timbre interface 809 may include an add-sound button 812, official sound controls, and custom sound controls.
Optionally, there are one or more official sound controls, where each official sound control corresponds to one sound type. By way of example, the official sound controls may include one or more of a knowledgeable female voice control, a natural child voice control, and a clear male voice control.
Optionally, the custom sound controls may include a my-sound control 811 as shown in fig. 12A (e) and/or sound controls of other users. The my-sound control 811 may correspond to a voice recorded in advance by the user. The other users may include friends, relatives, and so on. For example, if the mobile phone 1 has recorded a friend's voice, the custom sound controls may include that friend's custom sound control as shown in fig. 12A (e).
Optionally, the timbre used for the intelligent AI's automatic replies may be set. The user may select the knowledgeable female voice control 810 from the official sound controls, and in response to the user clicking the knowledgeable female voice control 810, the mobile phone 1 may set the knowledgeable female voice as tone 1 used for the intelligent AI's automatic replies. As shown in fig. 12A (f), the selected sound, the knowledgeable female voice, may be displayed in the intelligent AI timbre selection button 807 in the interface 806. In this way, the user can intuitively see which timbre the intelligent AI uses, which improves the user experience.
It should be understood that the sound types listed above for the official and custom sound controls are only examples; the official and custom sound controls may also include other sound types, neither of which limits the application.
Similarly, the user's timbre may be set. As shown in fig. 12B (a), the user may click the user timbre selection button 808 of the interface 806. In response, the mobile phone 1 may jump to the timbre interface 809 shown in fig. 12B (b). The user may select the my-sound control 811 among the custom sound controls, and in response, the mobile phone 1 may set the my-sound voice as the user's tone 2. As shown in fig. 12B (c), the selected sound, my sound, may be displayed in the user timbre selection button 808 in the interface 806. In this way, the user can intuitively see his or her own timbre, which improves the user experience.
Optionally, the mobile phone 1 may record custom sounds. As shown in fig. 13 (a), the user may click the add-sound button 812. In response, the mobile phone 1 jumps to the interface 813 shown in fig. 13 (b). The interface 813 includes a record-myself control 814 and an invite-others-to-record control 815. Illustratively, the user may click the record-myself control 814, and in response, the mobile phone 1 may jump to the self-recording interface 816 shown in fig. 13 (c). The interface 816 includes a "recording in progress" prompt and a recording-complete control 817. The user may input his or her own voice by talking, reading aloud, singing, and the like. Thereafter, the user may click the recording-complete control 817. In response, the mobile phone 1 may jump to the interface 809 shown in fig. 13 (d), in which a "my sound" button 811 is displayed. In response to the user clicking the "my sound" button, the mobile phone may play the sound recorded by the user. Similarly, the user may invite others to record their voices, which is not described in detail in the embodiments of the present application.
In an embodiment, the mobile phone 1 may be used to set a user call scene, so that when the user is busy (e.g. in a meeting) and the mobile phone 1 receives an incoming call, the mobile phone 1 may directly use the intelligent AI to answer the call, thereby avoiding missed calls and improving the call experience of the calling user.
For example, the procedure of setting the user scene may include the following. First, the user may click the intelligent AI scene button 803 in the interface 801 as shown in fig. 14 (a). In response, the mobile phone 1 may jump to the intelligent AI scene interface 811 shown in fig. 14 (b). The intelligent AI scene interface 811 includes an in-meeting scene button 812, an in-call scene button, and the like.
Thereafter, the user may click the in-meeting scene button 812 in the intelligent AI scene interface 811. In response, the mobile phone 1 can be set to the in-meeting scene for the user. With this scene set, when the mobile phone 1 receives incoming calls from other users, it can answer them directly based on the intelligent AI. Therefore, even if the user cannot answer an incoming call in time because of a meeting, the call is not missed, the timeliness of answering is ensured, the calling user learns the user's current situation, and the call experience of both parties is improved.
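The scene-based routing above amounts to a single dispatch decision when a call arrives. This is a hedged sketch; the scene identifiers and function name are assumptions chosen to mirror the "in a meeting" and "in a call" scenes mentioned in the text.

```python
def route_incoming_call(active_scene):
    """Route an incoming call according to the user's configured scene:
    a busy scene sends the call straight to the intelligent AI answerer,
    otherwise the phone rings normally."""
    busy_scenes = {"in_meeting", "in_call"}
    return "ai_auto_answer" if active_scene in busy_scenes else "ring_user"
```

With no scene configured (`None`), the phone falls back to ringing the user as usual.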
In one embodiment, the intelligent AI auto-answer mode, the user manual answer mode, and the voice call mode can be freely switched.
Illustratively, as shown in fig. 15 (a), when the mobile phone 1 of the user receives an incoming call from the mobile phone 2 of contact A, the user may click the smart AI answer button 505 of the interface 501. In response, the mobile phone 1 may pop up a call mode option box 506 on the interface 501. The call mode option box 506 includes an intelligent AI reply option 507 and a manual reply option 508. In some examples, if the user clicks the intelligent AI reply option 507, the mobile phone 1 enters the intelligent AI auto-reply mode, displays the interface 701 shown in fig. 15 (b), and displays the words "auto-answer mode" in the dialog box 706 in the interface 701. In other examples, if the user clicks the manual reply option 508, the mobile phone 1 enters the user manual reply mode, displays the interface 701 shown in fig. 15 (c), and displays the words "manual answer mode" in the dialog box 706 in the interface 701.
As another example, as shown in fig. 16, the user may click the more button 603 in the voice call interface 601, and in response, the mobile phone 1 may pop up a call mode option box 604 on the interface 601. The call mode option box 604 includes an intelligent AI reply option 605 and a manual reply option 606. Optionally, if the user clicks the intelligent AI reply option 605, the mobile phone 1 enters the intelligent AI automatic reply mode, displays the interface 701 shown in fig. 15 (b), and displays the words "auto-answer mode" in the dialog box 706 in the interface 701. If the user clicks the manual reply option 606, the mobile phone 1 enters the user manual reply mode, may display the interface 701 shown in fig. 15 (c), and displays the words "manual answer mode" in the dialog box 706 in the interface 701. Thus, the mobile phone 1 can switch freely among the intelligent AI automatic reply mode, the user manual reply mode, and the voice call mode. In some scenes, when it is inconvenient for the user to answer an incoming call, the normal progress of the call is ensured; when it is convenient, the user can intervene in the call in time, which ensures call efficiency and meets the user's personalized call requirements.
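The free switching among the three call modes can be pictured as a tiny state table. This sketch is illustrative only; the mode identifiers are assumptions standing in for the voice call, intelligent AI auto-reply, and manual reply modes described above.

```python
# Hypothetical identifiers for the three call modes.
MODES = {"voice_call", "ai_auto_reply", "manual_reply"}

def switch_call_mode(current: str, requested: str) -> str:
    """Any of the three modes may switch to any other at the user's request;
    an unrecognized request leaves the current mode unchanged."""
    return requested if requested in MODES else current
```

Because every mode may reach every other, no transition table is needed; validating the requested mode is the only check.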
The following describes a schematic flow of a call method provided in the embodiment of the present application. As shown in fig. 17, the method includes steps S101 to S110:
S101, the mobile phone 1 receives a call from the mobile phone 2.
Illustratively, the user receives a call from another user (e.g. contact A). The mobile phone 1 is the user's mobile phone, and the mobile phone 2 is contact A's mobile phone.
S102, the mobile phone 1 displays an incoming call interface.
Illustratively, after the mobile phone 1 receives the call from the mobile phone 2, the interface 501 shown in fig. 5 may be displayed.
Optionally, as shown in fig. 5, the user may click the answer button 502 directly to answer the call. In response, the mobile phone 1 may display a call interface 601 as shown in fig. 6, and the user may talk with contact A.
Alternatively, if it is inconvenient for the user to answer the call, the user may click the smart AI answer button 505 in the interface 501 shown in fig. 5.
Optionally, as shown in fig. 15 (a), the user may click the intelligent AI reply option 507, and the mobile phone 1 enters the intelligent AI automatic reply mode, displays the interface 701 shown in fig. 15 (b), and displays the words "auto-answer mode" in the dialog box 706 in the interface 701.
Or the user may click the manual reply option 508, and the mobile phone 1 enters the user manual reply mode, displays the interface 701 shown in fig. 15 (c), and displays the words "manual answer mode" in the dialog box 706 in the interface 701.
S103, the mobile phone 1 displays an AI call interface according to the answering instruction of the user.
If it is inconvenient for the user to answer the call, the user may click the smart AI answer button 505 in the interface 501 shown in fig. 5. In response, the mobile phone 1 answers the call from the mobile phone 2 through the smart AI, and may display an AI call interface such as that shown in fig. 7A (a) or fig. 7A (b).
S104, the mobile phone 1 receives reply information input by the user on the AI call interface.
Optionally, the mobile phone 1 may receive reply information that the user replies manually. Illustratively, as shown in fig. 7B (b), the user of the mobile phone 1 may input, in an input box 805, the reply information intended for contact A, such as "What's going on? I'm in a meeting, what do you need?".
As another example, as shown in fig. 11, in some scenarios the user may input the reply information by voice. The reply information input by voice is displayed in the dialog 706 in the form of a voice box 7067.
Alternatively, the smart AI of the mobile phone 1 may reply automatically. For example, as shown in fig. 7B (a), the content of the smart AI auto-reply of the mobile phone 1 may be "Hello, I am the owner's smart AI assistant. He cannot take the call right now; you can tell me anything you need." The mobile phone 1 may display this content in a text box 7061 in the dialog 706.
S105, when the manual reply condition is met, the mobile phone 1 sends voice information 2 of tone 2 to the mobile phone 2 according to the reply information input by the user.
In some embodiments, as shown in fig. 7B (b), when the user clicks the text input box 707 and brings up the keyboard to type, this indicates that the user intends to input reply information manually. The mobile phone may then display a "manual reply mode" label in the dialog 706, indicating that the mobile phone 1 is in the user manual reply mode. In the manual reply mode, the user may input voice or text to the mobile phone 1 as the reply information.
Taking text reply information as an example, as shown in fig. 7B (c), the mobile phone 1 may display the manually entered reply "What's going on? I'm in a meeting, what do you need?" in a text box 7064 in the dialog 706, convert it into voice information 2 of tone 2, and send the voice information 2 of tone 2 to the mobile phone 2 of contact A. As shown in fig. 7B (c), the content manually replied by the user is displayed in the text box 7064 on a black background in the LiShu (clerical script) font, so that the user can easily distinguish it when reviewing the call content later, which improves the user experience. After that, the mobile phone 2 receives and plays the voice information 2 of tone 2, and contact A hears, in tone 2, "What's going on? I'm in a meeting, what do you need?".
Taking voice reply information as an example, as shown in fig. 11 (a), the mobile phone 1 may accept voice input. As shown in fig. 11 (b), the text input box 707 of the mobile phone 1 may be replaced by a voice control 707', through which the user can input voice, for example by long-pressing the voice control 707'. Illustratively, after the user long-presses the voice control 707' shown in fig. 11 (b), the voice control 707' may change to the form shown in fig. 11 (c), indicating that the user is inputting voice. As in fig. 11 (c), the user inputs "What's going on? I'm in a meeting, what do you need?" by voice. The mobile phone 1 may use the user's voice directly as the voice information 2 of tone 2, or may convert the user's voice into voice information 2 of tone 2, and then sends the voice information 2 of tone 2 to the mobile phone 2 of contact A. After that, the mobile phone 2 receives and plays the voice information 2 of tone 2, and contact A hears the reply in tone 2.
Because tone 2 differs markedly from tone 1 used during automatic replies, contact A at the mobile phone 2 can clearly distinguish manually replied content from automatically replied content.
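The two-timbre distinction can be sketched as a simple dispatch: automatic replies are synthesized with the generic tone 1 and manual replies with the personalized tone 2. The identifiers and the `synthesize` helper below are illustrative assumptions, not APIs from the embodiment:

```python
# Illustrative sketch only: the timbre identifiers and synthesize()
# are assumptions, not part of the described embodiment.

TONE_AUTO = "tone-1-generic"         # used for smart AI automatic replies
TONE_MANUAL = "tone-2-personalized"  # used for the user's manual replies

def select_tone(reply_source: str) -> str:
    """Return the timbre to use for a reply ("auto" or "manual")."""
    if reply_source == "auto":
        return TONE_AUTO
    if reply_source == "manual":
        return TONE_MANUAL
    raise ValueError(f"unknown reply source: {reply_source}")

def synthesize(text: str, tone: str) -> dict:
    # Stand-in for the text-to-speech modules; a real implementation
    # would return encoded audio rather than a dict.
    return {"text": text, "tone": tone}

voice_info_2 = synthesize("I'm in a meeting, what do you need?",
                          select_tone("manual"))
```

Keeping the selection in one place makes it easy to guarantee that the caller can always tell the assistant's voice from the owner's voice.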
In other embodiments, the mobile phone 1 detects how long the user has been manually inputting information. If the user finishes the input within a certain period, the manual reply condition is regarded as met, and the mobile phone 1 sends the voice information 2 of tone 2 to the mobile phone 2 according to the manually input reply information. If the user does not finish the input within that period, the mobile phone 1 may send a prompt to the mobile phone 2, such as "The owner is typing a longer reply, please wait a moment", to prevent contact A from hanging up because no reply arrives for a long time. The mobile phone 1 may continue to receive the manually input reply information; if the user still has not finished after a long time, the mobile phone 1 may send the prompt to the mobile phone 2 again, until it detects that the input is complete, at which point it sends the voice information 2 of tone 2 corresponding to the reply (e.g., "What's going on? I'm in a meeting, what do you need?") to the mobile phone 2.
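The "keep the caller waiting politely" behaviour above can be sketched as a loop that polls the input state and re-sends a holding prompt each time the waiting period elapses. The callable names, the prompt wording, and the injected clock are assumptions made for illustration:

```python
import time

HOLD_PROMPT = "The owner is typing a longer reply, please wait a moment."

def await_manual_reply(poll_input_done, send_prompt, timeout=5.0,
                       now=time.monotonic):
    """Wait for the user to finish typing, nudging the caller each `timeout`.

    poll_input_done: callable returning the finished text, or None while
                     the user is still typing.
    send_prompt:     callable that delivers a holding prompt to handset 2.
    Illustrative sketch of the behaviour described above, not an API
    from the embodiment.
    """
    deadline = now() + timeout
    while True:
        text = poll_input_done()
        if text is not None:
            return text              # input complete: manual reply condition met
        if now() >= deadline:
            send_prompt(HOLD_PROMPT)  # caller has waited too long: reassure them
            deadline = now() + timeout
```

A real implementation would run this off the UI thread and would also bound the total wait, after which the phone falls back to the automatic reply mode.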
S106, the mobile phone 1 receives the voice information 3 from the mobile phone 2.
S107, the mobile phone 1 presents the voice information 3.
In some scenarios, after the mobile phone 1 receives the voice information 3 of contact A from the mobile phone 2, it may convert the voice information 3 into corresponding text and present that text. Illustratively, as shown in fig. 7B (a), the mobile phone 1 recognizes that the voice information 3 of contact A is "Hello? Why aren't you picking up?", converts it into the corresponding text, and displays the text in a text box 7062 of the dialog 706. In this manner, the user of the mobile phone 1 can view contact A's reply through the dialog 706.
Optionally, as shown in fig. 7B (a), the voice information 3 replied by contact A's mobile phone 2 is displayed in the text box 7062 on a white background in the SongTi (Song) font, which makes the call content easy to read and understand.
In other scenarios, after the mobile phone 1 receives the voice information 3 of contact A from the mobile phone 2, it may play the voice information 3 according to the playback needs of the user of the mobile phone 1. Illustratively, as shown in fig. 10 (b), after the function of playing the other party's voice is enabled, the user may click the text box 7062 in the dialog 706 that holds contact A's reply. In response, the mobile phone 1 plays the voice information 3 corresponding to the content in the text box 7062.
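S107 can be sketched as transcribing the incoming voice for display while retaining the original audio so that tapping the text box replays it. The `speech_to_text` callable stands in for the voice-to-text module 315; the entry layout is an illustrative assumption:

```python
def present_incoming_voice(voice_info, speech_to_text, transcript):
    """Convert the remote party's voice information into text for display,
    keeping the original audio so the user can replay it by tapping the
    text box (as in fig. 10). Illustrative sketch only."""
    entry = {
        "speaker": "contact A",
        "text": speech_to_text(voice_info),
        "voice": voice_info,  # retained for on-demand playback
    }
    transcript.append(entry)
    return entry

dialog = []
present_incoming_voice(b"<pcm frames>",
                       lambda v: "Hello? Why aren't you picking up?",
                       dialog)
```

Storing both forms in one transcript entry is what lets the same text box serve as a reading view and as a playback control.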
S108, when the automatic reply condition is met, the smart AI of the mobile phone 1 automatically determines voice information 1 of tone 1.
It should be understood that embodiments of the present application do not limit the order of execution between S104 and S108. After answering the call from the mobile phone 2, the mobile phone 1 may first enter the automatic reply mode and later, according to the user's manual reply intention, enter the manual reply mode; or it may first enter the manual reply mode according to the user's manual reply intention and later enter the automatic reply mode when the automatic reply condition is met. In the embodiment of the application, the mobile phone 1 can adaptively switch between the automatic reply mode and the manual reply mode.
Taking as an example the case where the mobile phone 1 enters the auto-reply mode after answering the call from the mobile phone 2: as shown in fig. 7B (a), in response to the user clicking the smart AI answer button 505 of the interface 501 shown in fig. 5, the mobile phone 1 answers the call from the mobile phone 2 through the smart AI and may display the smart AI call interface 701. Optionally, in the smart AI call scenario, the mobile phone 1 may automatically enter the smart AI auto-reply mode first and display the "auto-reply mode" label in the dialog 706 of the interface 701, indicating that the mobile phone 1 is in the smart AI auto-reply mode. Then, according to the user's manual reply intention, the mobile phone 1 can switch to the manual reply mode.
For example, as shown in fig. 7B (a), after the mobile phone 1 answers the call from the mobile phone 2 and enters the auto-reply mode, the reply information automatically replied by the smart AI of the mobile phone 1 may be "Hello, I am the owner's smart AI assistant. He cannot take the call right now; you can tell me anything you need." The mobile phone 1 may display this content in the text box 7061 in the dialog 706 and convert the reply information into voice information 1 of tone 1. As shown in fig. 7B (c), the content of the smart AI auto-reply is displayed in the text box 7061 on a black background in the SongTi (Song) font.
Taking as an example the case where the mobile phone 1 enters the manual reply mode according to the user's manual reply intention after answering the call from the mobile phone 2: suppose the user clicks the text input box 707 and brings up the keyboard within the first threshold time, but does not finish the manual input and click the send button 708 to send the manually replied information to the dialog 706 within a third threshold time (e.g., 60 s); or suppose the user does not click the text input box 707 and bring up the keyboard within the first threshold time at all. To avoid the mobile phone 1 staying in the manual reply mode for a long time without receiving any manually replied information from its user, the mobile phone 1 can switch from the manual reply mode to the smart AI auto-reply mode. In some examples, the mobile phone 1 may again display the "auto-reply mode" label in the dialog 706 of the interface 701, indicating that the mobile phone 1 is now in the smart AI auto-reply mode.
Illustratively, as shown in fig. 9 (b), after receiving the message "Do you want to come out and play?" from the mobile phone 2, the user of the mobile phone 1 does not reply manually for a long time. To avoid keeping contact A waiting, the mobile phone 1 may automatically switch to the auto-reply mode so as to respond to the mobile phone 2 quickly. In some examples, based on the content of text box 7065, the mobile phone 1 may display the auto-reply content "Please tell me the time and place, and I will note it down for him" in text box 7066 in the dialog 706, and convert the auto-reply text into voice information of tone 1.
S109, the mobile phone 1 transmits the voice information 1 of tone 1 to the mobile phone 2.
Illustratively, as shown in fig. 7B (a), the mobile phone 1 sends to the mobile phone 2 of contact A the voice information 1 of tone 1: "Hello, I am the owner's smart AI assistant. He cannot take the call right now; you can tell me anything you need."

After that, the mobile phone 2 receives the voice information and plays it in tone 1, so contact A hears the message in tone 1.

As another example, as shown in fig. 9 (b), the mobile phone 1 sends to the mobile phone 2 of contact A the voice information 1 of tone 1: "Please tell me the time and place, and I will note it down for him."

After that, the mobile phone 2 receives the voice information and plays it in tone 1, and contact A hears the message in tone 1.
In some embodiments, if the mobile phone 1 is in the automatic reply mode, the mobile phone 1 may acquire the user portrait it has been authorized to obtain, and automatically reply to the mobile phone 2 according to the user portrait. In this way, contact A at the mobile phone 2 can learn the user's current state from the reply information of the mobile phone 1.
For example, as in fig. 18 (a), after the mobile phone 1 answers the call through the smart AI, the "auto-reply mode" label may be displayed in the dialog 706 of the interface 701. The mobile phone 1 can determine from the user portrait that the user is in a meeting, and may display the auto-reply content in text box 7160 in the dialog 706: "I am the owner's smart AI assistant. He is in a meeting and cannot take the call; you can tell me anything you need." The mobile phone 1 can convert this text into voice information of tone 1 and send it to the mobile phone 2 of contact A; the mobile phone 2 receives and plays the voice information in tone 1. Contact A thus hears the message and learns the current state of the user of the mobile phone 1, which can improve call efficiency. For example, as shown in fig. 18 (a), contact A may reply "In a meeting again?" according to the current state of the user of the mobile phone 1, and the mobile phone 1 may display this reply in text box 7161 in the dialog 706.
Thereafter, the user of the mobile phone 1 may click the text input box 707 to trigger the mobile phone 1 to switch to the manual reply mode. As in fig. 18 (b), in the manual reply mode, the mobile phone 1 may display the "manual reply mode" label in the dialog 706 of the interface 701. As shown in fig. 18 (c), the user of the mobile phone 1 can manually input "Work has been busy lately, so there are many meetings" in the text input box 707, and the mobile phone 1 can send the user's input to text box 7162 of the dialog 706 for display.
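The portrait-driven reply selection described above can be sketched as a lookup from the (authorized) user status to a reply template. The statuses and wording below are illustrative assumptions:

```python
# Hypothetical status-to-template mapping; a real user portrait is only
# consulted with the user's authorization.
PORTRAIT_REPLIES = {
    "in_meeting": ("I am the owner's smart AI assistant. He is in a meeting "
                   "and cannot take the call; you can tell me anything you need."),
    "driving": ("I am the owner's smart AI assistant. He is driving right "
                "now; you can leave a message with me."),
}
DEFAULT_REPLY = ("I am the owner's smart AI assistant. He cannot take the "
                 "call right now; you can tell me anything you need.")

def auto_reply_from_portrait(portrait: dict) -> str:
    """Choose the auto-reply text from the authorized user portrait."""
    return PORTRAIT_REPLIES.get(portrait.get("status"), DEFAULT_REPLY)
```

Falling back to a generic template when the portrait carries no usable status keeps the caller informed even without authorization.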
In other embodiments, the information that the mobile phone 1 automatically replies to the mobile phone 2, and the information that the user of the mobile phone 1 replies, are not limited to voice and text; they may also be, for example, pictures, videos, emoticons, or business cards.
Illustratively, as in fig. 19, the interface 701 may also include a control 711. The user may click the control 711 to add pictures, business cards, audio, video, or other information to be sent, and the mobile phone 1 sends the information selected by the user to the mobile phone 2.
Fig. 3 above describes the module functions included in the electronic device; fig. 20 shows how a plurality of modules cooperate to implement a call method.
As shown in fig. 20, the mobile phone 1 receives a call request from the mobile phone 2 through the call module 310 and displays it in the answer interface 501 of the mobile phone 1 shown in fig. 5 through the interface display module 312. If the user clicks the answer button 502 of the answer interface 501, the mobile phone 1 enters the voice call mode. If the user clicks the smart AI answer button 505 of the answer interface 501, the mobile phone 1 enters the auto-reply mode by default.
Alternatively, upon receiving a first operation from the user, the mobile phone 1 may switch from the automatic reply mode to the manual reply mode. After the mobile phone 1 enters the manual reply mode, if no first operation from the user is detected within a first duration (e.g., 10 s), or no second operation from the user is detected within a second duration (e.g., 30 s), the mobile phone 1 may switch from the manual reply mode back to the automatic reply mode. The second duration is longer than the first duration.
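The mode switching can be sketched as a small state machine. The reading below, in which user activity resets the timers and either timer expiring falls back to auto-reply, is one plausible interpretation of the rules above; the class and method names are assumptions:

```python
AUTO, MANUAL = "auto_reply", "manual_reply"

class ReplyModeMachine:
    """Illustrative sketch of the timeout rules above (durations in seconds)."""

    def __init__(self, first_duration=10.0, second_duration=30.0):
        assert second_duration > first_duration
        self.mode = AUTO
        self.first_duration = first_duration
        self.second_duration = second_duration
        self._last_first_op = None   # e.g. tapping the text input box
        self._last_second_op = None  # e.g. tapping the send button

    def on_first_operation(self, t):
        # A first operation always moves (or keeps) the phone in manual mode.
        self.mode = MANUAL
        self._last_first_op = t

    def on_second_operation(self, t):
        self._last_second_op = t

    def tick(self, t):
        """Periodic check; fall back to auto-reply when the user goes quiet."""
        if self.mode != MANUAL:
            return
        idle_first = t - self._last_first_op >= self.first_duration
        last_send = (self._last_second_op
                     if self._last_second_op is not None
                     else self._last_first_op)
        idle_second = t - last_send >= self.second_duration
        if idle_first or idle_second:
            self.mode = AUTO
```

Driving the machine from a periodic `tick` keeps the fallback decision in one place regardless of which timer expired.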
Alternatively, if the mobile phone 1 enters the auto-reply mode by default, the interface display module 312 displays the "auto-reply mode" label as shown in fig. 7B (a), prompting the user that the mobile phone 1 is currently in the auto-reply mode and can talk with the user of the mobile phone 2 (contact A) automatically. Thereafter, the mobile phone 1 may convert the voice information from contact A into text information through the voice-to-text module 315 and display the text information in the interface 701 of the mobile phone 1, as shown in fig. 7B (a), through the interface display module 312; for example, the text information is displayed in text box 7062. The smart AI of the mobile phone 1 understands the information from contact A through the semantic understanding module 311 and generates an auto-reply text. The mobile phone 1 converts the auto-reply text into voice information of tone 1 through the universal text-to-voice module 314 and sends it to the mobile phone 2; the auto-reply text may be displayed in an interface as shown in fig. 9 (b), for example in text box 7066.
Optionally, when the mobile phone 1 is in the automatic reply mode and receives the first operation from the user, the interface display module 312 displays the "manual reply mode" label, prompting the user that the mobile phone 1 is now in the manual reply mode and that the user can talk with the user of the mobile phone 2 (contact A) manually through the mobile phone 1. The mobile phone 1 may convert the voice information from contact A into text information through the voice-to-text module 315 and display it on the interface of the mobile phone 1, as shown in fig. 9 (a), through the interface display module 312; for example, the text information is displayed in text box 7065. The user inputs the reply text information on the mobile phone 1 through the text input module 316, and the mobile phone 1 detects whether the second operation is received within the second duration.
If the second operation is received within the first duration, the user has completed the input within the first duration, and the input is sent to the mobile phone 2. For example, as shown in fig. 7B (c), the interface display module 312 displays the text information input by the user in text box 7064 in the interface 701, and the mobile phone 1 converts the text information into voice information of tone 2 through the personalized text-to-voice module 313 and sends it to the mobile phone 2.
If the second operation is not received within the first duration, the mobile phone 1 sends first voice prompt information. The first voice prompt information may be displayed in an interface as shown in fig. 8; for example, the interface display module 312 may display it in text box 7063 as in fig. 8 (a). The first voice prompt information is converted into voice information of tone 1 through the universal text-to-voice module 314 and sent to the mobile phone 2.
If the second operation is not received within the second duration, the mobile phone 1 switches from the manual reply mode to the automatic reply mode, where the second duration is longer than the first duration.
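The fig. 20 cooperation in the auto-reply path can be sketched end to end: speech is transcribed (module 315), understood and answered (module 311), and the answer is synthesized in tone 1 (module 314). The module numbers follow the description; the callable signatures and data shapes are assumptions:

```python
def handle_incoming_voice(voice_info, modules, state):
    """Illustrative auto-reply pipeline wiring the modules of fig. 20.

    modules: dict of callables standing in for the voice-to-text module
             315, the semantic understanding module 311, and the
             universal text-to-voice module 314.
    state:   holds the on-screen dialog transcript.
    """
    text = modules["voice_to_text"](voice_info)       # module 315
    state["transcript"].append(("contact A", text))
    reply = modules["semantic_understanding"](text)   # module 311
    state["transcript"].append(("smart AI", reply))
    return modules["universal_tts"](reply)            # module 314, tone 1
```

A manual-reply path would differ only in the last stage, routing the user's text through the personalized text-to-voice module 313 instead.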
One or more of the interfaces described above are exemplary, and in other embodiments, other interface designs are possible.
It should be understood that some operations in the above method flows are optionally combined, and the order of some operations is optionally changed. The execution order of the steps in each flow is merely exemplary and does not constitute a limitation; other execution orders are possible, and those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, details of the processes involved in one embodiment apply in a similar manner to other embodiments, and different embodiments may be used in combination.
Moreover, some steps in method embodiments may be equivalently replaced with other possible steps. Or some steps in method embodiments may be optional and may be deleted in some usage scenarios. Or other possible steps may be added to the method embodiments.
Moreover, the method embodiments described above may be implemented alone or in combination.
Further embodiments of the application provide an apparatus, which may be the first electronic device or the second electronic device described above, or a component (such as a chip system) in such a device.
The apparatus may include: a display screen, a memory, and one or more processors. The display, memory, and processor are coupled. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the electronic device shown in fig. 2C.
The core structure of the electronic device may be represented as the structure shown in fig. 21, and the electronic device includes: a processing module 151, an input module 152, a storage module 153, a display module 154, and a communication module 155.
The processing module 151 may include at least one of a Central Processing Unit (CPU), an Application Processor (AP), or a Communication Processor (CP). The processing module 151 may perform operations or data processing related to control and/or communication of at least one other element of the electronic device. Optionally, the processing module 151 is configured to support the first electronic device 100 in executing S101-S109 in fig. 17.
The input module 152 is configured to obtain an instruction or data input by the user and transmit it to other modules of the electronic device. Specifically, the input mode of the input module 152 may include touch, gesture, proximity, voice input, and the like. For example, the input module may be the screen of the electronic device, which acquires an input operation of the user, generates an input signal according to the acquired input operation, and transmits the input signal to the processing module 151. Optionally, the input module 152 is configured to obtain text information or voice information input by the user; reference may be made to the input interface diagrams shown in fig. 7B, fig. 8 and fig. 11.
The storage module 153 may include volatile memory and/or nonvolatile memory. The storage module is used for storing instructions or data related to at least one other module of the user terminal device. Optionally, the storage module 153 is configured to store preset template reply information, personalized timbre information, and call screen-recording and audio-recording information in the first electronic device 100.
The display module 154 may include, for example, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, an Organic Light Emitting Diode (OLED) display, a microelectromechanical system (MEMS) display, or an electronic paper display, and is configured to display content viewable by the user (e.g., text, images, videos, icons, symbols, etc.). Optionally, the display module 154 is configured to display the content shown in fig. 7A on the first electronic device 100.
A communication module 155 is configured to support the personal terminal in communicating with other personal terminals (via a communication network). For example, the communication module may be connected to a network via wireless or wired communication to communicate with other personal terminals or a network server. The wireless communication may employ at least one of cellular communication protocols, such as Long Term Evolution (LTE), LTE-Advanced (LTE-A), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Universal Mobile Telecommunications System (UMTS), Wireless Broadband (WiBro), or Global System for Mobile communications (GSM). The wireless communication may also include short-range communication, for example at least one of Wireless Fidelity (Wi-Fi), Bluetooth, Near Field Communication (NFC), Magnetic Stripe Transmission (MST), or GNSS. Optionally, the communication module 155 is configured to support communication between the first electronic device and the second electronic device; for example, reference may be made to the system schematic shown in fig. 2B.
The apparatus shown in fig. 21 may also include more or fewer components, combine or split some components, or have a different arrangement of components; embodiments of the application are not limited in this respect.
Embodiments of the present application also provide a chip system. As shown in fig. 22, the chip system includes at least one processor 161 and at least one interface circuit 162. The processor 161 and the interface circuit 162 may be interconnected by wires. For example, the interface circuit 162 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus), or to send signals to other devices (e.g., the processor 161). The interface circuit 162 may, for example, read instructions stored in the memory and send the instructions to the processor 161. The instructions, when executed by the processor 161, may cause the electronic device to perform the steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which is not particularly limited in the embodiments of the present application.
The embodiment of the application also provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the functions or steps performed by the mobile phone in the above method embodiments.
The embodiment of the application also provides a computer program product which, when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above method embodiments.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional modules is illustrated. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above. For the specific working processes of the systems, devices and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and the division of modules or units, for example, is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A call method, applied to a first electronic device, the method comprising:
displaying a call interface, wherein the call interface comprises an identifier of a second user, the second user is a user of a second electronic device, and the call is between the first electronic device and the second electronic device;
sending first voice information of a first tone to the second electronic device, wherein the first voice information is determined by the first electronic device;
receiving first information input by a first user through the call interface; and
sending, according to the first information, second voice information of a second tone to the second electronic device, wherein the first tone is different from the second tone.
2. The method of claim 1, wherein before the sending of the first voice information to the second electronic device, the method further comprises:
displaying first prompt information, wherein the first prompt information is used to prompt that the first electronic device is in an automatic reply mode.
3. The method according to claim 1 or 2, wherein before the sending of the second voice information to the second electronic device, the method further comprises:
displaying second prompt information, wherein the second prompt information is used to prompt that the first electronic device is in a manual reply mode.
4. The method according to claim 3, wherein the displaying of the second prompt information specifically comprises:
displaying the second prompt information in response to a first operation, wherein the second prompt information is used to prompt that the first electronic device is in the manual reply mode, and the first operation comprises an input operation performed by the first user on the call interface.
5. The method according to claim 4, wherein the first operation comprises an operation performed by the first user on a text input box of the call interface.
6. The method according to any one of claims 1-5, wherein after the receiving of the first information input by the first user through the call interface, the method further comprises:
displaying the first information on the call interface with a first user interface (UI) effect; and
after the sending of the first voice information in the first timbre to the second electronic device, the method further comprises:
displaying second information on the call interface with a second user interface (UI) effect, wherein the second information is text information corresponding to the first voice information,
wherein the first UI effect is different from the second UI effect.
7. The method according to any one of claims 1-6, wherein the sending of the second voice information in the second timbre to the second electronic device according to the first information comprises:
if an operation performed by the first user on a send control of the call interface within a first duration is detected, sending the second voice information in the second timbre to the second electronic device according to the first information, wherein the first duration is less than a first threshold.
8. The method according to any one of claims 1-6, further comprising:
if no operation performed by the first user on the send control of the call interface within the first duration is detected, sending first voice prompt information to the second electronic device, wherein the first voice prompt information is used to prompt that the first user is inputting the first information.
9. The method according to any one of claims 1-8, further comprising:
if no operation performed by the first user on the send control of the call interface within a second duration is detected, sending fourth voice information in the first timbre to the second electronic device, wherein the second duration is greater than the first duration, and the fourth voice information is determined by the first electronic device.
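Claims 7-9 together describe a timeout escalation: a send within the first duration yields a manual reply in the second timbre; past the first duration the device sends a "user is typing" voice prompt; past the longer second duration it falls back to an automatic reply in the first timbre. A minimal sketch of that decision, with illustrative durations (the claims only require the second duration to exceed the first) and hypothetical return labels:

```python
def on_wait(elapsed: float, user_sent: bool,
            first_duration: float = 5.0,
            second_duration: float = 15.0) -> str:
    """Decide what the first device sends while waiting for the user's input.

    The duration values and the string labels are illustrative only.
    """
    if user_sent:
        # Claim 7: the user pressed the send control in time.
        return "manual_reply_second_timbre"
    if elapsed < first_duration:
        return "wait"
    if elapsed < second_duration:
        # Claim 8: tell the remote party the user is still typing.
        return "typing_prompt"
    # Claim 9: give up waiting and reply automatically.
    return "auto_reply_first_timbre"
```

A caller thus hears a short hold, then a typing notice, and only after the longer timeout an automatic answer — matching the ordering the claims impose.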
10. The method according to any one of claims 7-9, further comprising at least one of:
displaying text information corresponding to the second voice information on the call interface;
displaying text information corresponding to the first voice prompt information on the call interface; and
displaying text information corresponding to the fourth voice information on the call interface.
11. The method according to any one of claims 1-10, further comprising:
receiving third voice information from the second electronic device, wherein the call interface further comprises a first control associated with the third voice information; and
in response to detecting an operation performed by the first user on the first control, starting a voice playing function.
12. The method according to any one of claims 1-11, wherein, in a case that the first information is voice information, the method further comprises:
converting the voice information into corresponding text information in response to an operation performed by the first user on a second control associated with the voice information in the call interface; and
displaying the text information on the call interface.
13. The method according to any one of claims 1-12, further comprising:
displaying a first interface, wherein the first interface comprises a timbre setting option; and
setting the first timbre and/or the second timbre in response to an operation performed by the first user on the timbre setting option.
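Claim 13 leaves the form of the timbre setting option open. One possible model, with an entirely hypothetical catalogue of voices and slot names, is a small settings object that validates the user's choice before assigning it to either timbre:

```python
class TimbreSettings:
    """Illustrative model of claim 13's timbre setting option."""

    # Hypothetical voice catalogue; a real device would list its TTS voices.
    AVAILABLE = ("female_1", "male_1", "child_1")

    def __init__(self):
        self.first_timbre = "female_1"   # voice for automatic replies
        self.second_timbre = "male_1"    # voice for user-typed replies

    def set_timbre(self, which: str, choice: str) -> None:
        """Assign a catalogued voice to the first or second timbre slot."""
        if choice not in self.AVAILABLE:
            raise ValueError(f"unknown timbre: {choice}")
        if which == "first":
            self.first_timbre = choice
        elif which == "second":
            self.second_timbre = choice
        else:
            raise ValueError(f"unknown slot: {which}")
```

Keeping the two slots independent lets the user retune either voice without disturbing the other, while the catalogue check prevents selecting a voice the device cannot synthesize.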
14. A call method, applied to a second electronic device, the method comprising:
displaying a call interface, wherein the call interface comprises an identifier of a first user, the first user is a user using a first electronic device, and the call is between the second electronic device and the first electronic device;
receiving first voice information in a first timbre from the first electronic device; and
receiving second voice information in a second timbre from the first electronic device,
wherein the first timbre is different from the second timbre.
15. The method according to claim 14, further comprising:
receiving first voice prompt information sent by the first electronic device, wherein the first voice prompt information is used to prompt that the first user is inputting first information, and the first information is text information or voice information corresponding to the second voice information.
16. The method according to claim 14 or 15, further comprising:
sending third voice information to the first electronic device, wherein the third voice information is voice information input by a second user, and the second user is a user using the second electronic device.
17. An electronic device, comprising a processor and a memory coupled to the processor, wherein the memory is configured to store computer program code comprising computer instructions that, when read from the memory and executed by the processor, cause the electronic device to perform the method according to any one of claims 1-13 or claims 14-16.
18. A computer-readable storage medium storing instructions that, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1-13 or claims 14-16.
CN202211521752.3A 2022-11-30 2022-11-30 Conversation method and electronic equipment Pending CN118118593A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211521752.3A CN118118593A (en) 2022-11-30 2022-11-30 Conversation method and electronic equipment
PCT/CN2023/127971 WO2024114233A1 (en) 2022-11-30 2023-10-30 Call method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211521752.3A CN118118593A (en) 2022-11-30 2022-11-30 Conversation method and electronic equipment

Publications (1)

Publication Number Publication Date
CN118118593A true CN118118593A (en) 2024-05-31

Family

ID=91207536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211521752.3A Pending CN118118593A (en) 2022-11-30 2022-11-30 Conversation method and electronic equipment

Country Status (2)

Country Link
CN (1) CN118118593A (en)
WO (1) WO2024114233A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131592B (en) * 2018-10-31 2021-09-28 奇酷互联网络科技(深圳)有限公司 Automatic reply method, communication equipment and device with storage function
CN110401777A (en) * 2019-08-02 2019-11-01 上海尊源通讯技术有限公司 A kind of AI phone secretary system based on communication terminal
CN111683175B (en) * 2020-04-22 2021-03-09 北京捷通华声科技股份有限公司 Method, device, equipment and storage medium for automatically answering incoming call
CN113726956A (en) * 2021-08-04 2021-11-30 北京小米移动软件有限公司 Incoming call answering control method and device, terminal equipment and storage medium

Also Published As

Publication number Publication date
WO2024114233A1 (en) 2024-06-06

Similar Documents

Publication Publication Date Title
CN110351422B (en) Notification message preview method, electronic equipment and related products
US20220304094A1 (en) Bluetooth Reconnection Method and Related Apparatus
CN112154640B (en) Message playing method and terminal
US11893359B2 (en) Speech translation method and terminal when translated speech of two users are obtained at the same time
CN110225176B (en) Contact person recommendation method and electronic device
CN112789934B (en) Bluetooth service query method and electronic equipment
CN114115770B (en) Display control method and related device
KR102669342B1 (en) Device Occupancy Methods and Electronic Devices
CN109327613B (en) Negotiation method based on voice call translation capability and electronic equipment
WO2023024852A1 (en) Short message notification method and electronic terminal device
CN113473013A (en) Display method and device for beautifying effect of image and terminal equipment
CN110955452B (en) Non-invasive interaction method and electronic equipment
CN113301544B (en) Method and equipment for voice intercommunication between audio equipment
CN114449333B (en) Video note generation method and electronic equipment
CN114640747A (en) Call method, related device and system
CN118118593A (en) Conversation method and electronic equipment
CN116055633A (en) Incoming call processing method, incoming call processing system, electronic equipment and storage medium
KR20050081600A (en) Method for capturing screen in mobile phone
CN113676902A (en) System and method for providing wireless internet access and electronic equipment
CN115942253B (en) Prompting method and related device
CN114449492B (en) Data transmission method and terminal equipment
CN113672187A (en) Data double-sided display method and device, electronic equipment and storage medium
WO2022183941A1 (en) Message reply method and device
CN117014843A (en) Mobile communication method and electronic equipment
CN118075395A (en) Call display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination