CN115174749A - Call forwarding method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN115174749A
Authority
CN
China
Prior art keywords
information, call, party, calling, conversation
Prior art date
Legal status
Pending
Application number
CN202210753561.3A
Other languages
Chinese (zh)
Inventor
刘允锋
刘云
汤水生
Current Assignee
Beijing Feitian Jingwei Technology Co ltd
Original Assignee
Beijing Feitian Jingwei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Feitian Jingwei Technology Co ltd
Priority to CN202210753561.3A
Publication of CN115174749A
Status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 3/00 - Automatic or semi-automatic exchanges
    • H04M 3/42 - Systems providing special services or facilities to subscribers
    • H04M 3/58 - Arrangements for transferring received calls from one subscriber to another; Arrangements affording interim conversations between either the calling or the called party and a third party
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 3/00 - Automatic or semi-automatic exchanges
    • H04M 3/42 - Systems providing special services or facilities to subscribers
    • H04M 3/42025 - Calling or Called party identification service
    • H04M 3/42034 - Calling party identification service
    • H04M 3/42042 - Notifying the called party of information on the calling party

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application relates to a call forwarding method and device, an electronic device and a computer-readable storage medium, and relates to the technical field of communication. The method comprises the following steps: when a call request of a calling user is detected and a call forwarding instruction of the called user is detected, acquiring and storing the identity information of both parties to the call and the dialogue information, wherein the dialogue information comprises the calling user's call information and the corresponding reply information of the forwarding party; and when the end of the call between the calling user and the forwarding party is detected, or a dialogue acquisition request of the called user is detected, pushing the identity information of the calling user, the identity information of the forwarding party and the dialogue information in text format to the called user, wherein the dialogue acquisition request is used for requesting the dialogue information between the calling user and the forwarding party. The call forwarding method and device, the electronic device and the computer-readable storage medium can save the called user the time needed to learn the forwarded call content and improve the efficiency with which the called user learns the call information between the calling user and the forwarding party.

Description

Call forwarding method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for call forwarding, an electronic device, and a computer-readable storage medium.
Background
Currently, with the rapid development of science and technology, call forwarding plays an increasingly important role in communication. Call forwarding (ECT) means that when a call cannot be answered or the called party does not want to answer it, the call can be forwarded to another telephone number, which answers the call instead; the holder of that number is the forwarding party. The calling party conveys information through that number, and the forwarding party then relays the call content to the called party orally.
In the course of research, the inventors found that having the forwarding party relay the call content to the called party orally is inefficient and inaccurate.
Disclosure of Invention
The present application is directed to a method, an apparatus, an electronic device and a computer-readable storage medium for call forwarding, which are used to solve at least one of the above technical problems.
The above object of the present invention is achieved by the following technical solutions:
in a first aspect, a method for call forwarding is provided, where the method includes:
when a call request of a calling user is detected and a call forwarding instruction of the called user is detected, acquiring and storing the identity information of both parties to the call and the dialogue information, wherein the identity information of both parties comprises the identity information of the calling user and the identity information of the forwarding party, and the dialogue information comprises the calling user's call information and the corresponding reply information of the forwarding party;
when a preset condition is met, pushing the identity information of both parties to the call and the dialogue information in text format to the called user;
wherein the preset condition comprises at least one of the following:
detecting that the call between the calling user and the forwarding party has ended;
and detecting a dialogue acquisition request of the called user, the dialogue acquisition request being used for requesting the dialogue information between the calling user and the forwarding party.
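For illustration only, the first-aspect method can be pictured as a store-then-push flow. The following Python sketch is not part of the application; the data types and the push callback it uses (DialogueTurn, CallSession, maybe_push_to_called_user) are names assumed here to show the structural logic of acquiring, storing and conditionally pushing the dialogue.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class DialogueTurn:
        speaker: str              # "caller" or "forwarder"
        text: str                 # dialogue information in text format

    @dataclass
    class CallSession:
        caller_id: str            # calling user's identity information
        forwarder_id: str         # forwarding party's identity information
        turns: List[DialogueTurn] = field(default_factory=list)
        call_ended: bool = False

    def maybe_push_to_called_user(session: CallSession,
                                  dialogue_request_received: bool,
                                  push: Callable[[dict], None]) -> bool:
        # Preset conditions (either suffices): the forwarded call has ended,
        # or the called user has issued a dialogue acquisition request.
        if not (session.call_ended or dialogue_request_received):
            return False
        payload = {
            "caller": session.caller_id,
            "forwarder": session.forwarder_id,
            "dialogue": [f"{t.speaker}: {t.text}" for t in session.turns],
        }
        push(payload)             # delivery channel (SMS/IM/app) is left abstract
        return True
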
In one possible implementation, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the method further includes:
acquiring the contextual model currently set by the called user;
generating the opening remarks of the current dialogue based on the currently set contextual model;
and pushing the opening remarks of the current dialogue to the calling user.
In another possible implementation, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the method further includes:
obtaining the dialogue opening remarks set by the called user as the opening remarks of the current dialogue;
and pushing the opening remarks of the current dialogue to the calling user.
In another possible implementation, if the opening remarks of the current dialogue are in text format, pushing the opening remarks of the current dialogue to the calling user includes:
converting the opening remarks of the current dialogue from text format to voice format;
and pushing the opening remarks of the current dialogue in voice format to the calling user.
In another possible implementation, the manner of detecting the call forwarding instruction of the called user includes any one of the following:
detecting that the contextual model currently set by the called user is a preset contextual model;
detecting that the contextual model currently set by the called user is a preset contextual model, and detecting a rejection instruction of the called user, the rejection instruction being triggered by the called user's rejection operation for the current call;
and detecting the rejection instruction of the called user.
In another possible implementation, obtaining the dialogue information includes:
acquiring the call information of the calling user;
determining corresponding voice reply information based on the call information of the calling user;
pushing the voice reply information to the calling user;
and cyclically executing the step of acquiring the call information of the calling user, the step of determining the corresponding voice reply information based on the call information of the calling user, and the step of pushing the voice reply information to the calling user, until the end of the call between the calling user and the forwarding party is detected, so as to obtain the dialogue information.
In another possible implementation, determining the corresponding voice reply information based on the call information of the calling user includes:
determining the current call scene based on the call information of the calling user;
and determining the corresponding voice reply information based on the current call scene and the call information of the calling user.
In another possible implementation, determining the current call scene based on the call information of the calling user includes:
converting the call information of the calling user into call information in text format;
extracting keywords from the call information in text format, and determining the current call scene based on the extracted keywords;
wherein determining the corresponding voice reply information based on the current call scene and the call information of the calling user includes:
determining corresponding reply information in text format based on the current call scene and the call information of the calling user;
and converting the reply information in text format into the voice reply information.
In another possible implementation, the dialogue information includes the dialogue information in text format and the dialogue information in voice format;
before pushing the identity information of both parties to the call and the dialogue information in text format to the called user, the method further includes:
associating the dialogue information in text format with the dialogue information in voice format one by one;
labeling the associated text-format dialogue information and/or the associated voice-format dialogue information with identities based on the identity information of both parties to the call;
wherein pushing the identity information of both parties to the call and the dialogue information in text format to the called user includes:
pushing the labeled text-format dialogue information to the called user; or,
pushing the labeled text-format dialogue information and the labeled voice-format dialogue information to the called user.
In a second aspect, an apparatus for call forwarding is provided, the apparatus comprising:
a first obtaining module, configured to obtain and store the identity information of both parties to the call and the dialogue information when a call request of a calling user is detected and a call forwarding instruction of the called user is detected, where the identity information of both parties comprises the identity information of the calling user and the identity information of the forwarding party, and the dialogue information comprises the calling user's call information and the corresponding reply information of the forwarding party;
a first pushing module, configured to push the identity information of both parties to the call and the dialogue information in text format to the called user when a preset condition is met;
wherein the preset condition comprises at least one of the following:
detecting that the call between the calling user and the forwarding party has ended;
and detecting a dialogue acquisition request of the called user, the dialogue acquisition request being used for requesting the dialogue information between the calling user and the forwarding party.
In one possible implementation, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the apparatus further includes: a second obtaining module, a generating module and a second pushing module, wherein
the second obtaining module is configured to obtain the contextual model currently set by the called user;
the generating module is configured to generate the opening remarks of the current dialogue based on the currently set contextual model;
and the second pushing module is configured to push the opening remarks of the current dialogue to the calling user.
In another possible implementation, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the apparatus further includes: a third obtaining module and a third pushing module, wherein,
the third obtaining module is configured to obtain the dialogue opening remarks set by the called user as the opening remarks of the current dialogue;
and the third pushing module is configured to push the opening remarks of the current dialogue to the calling user.
In another possible implementation, when the opening remarks of the current dialogue are in text format and the second pushing module pushes the opening remarks of the current dialogue to the calling user, the second pushing module is specifically configured to:
converting the opening remarks of the current dialogue from text format to voice format;
and pushing the opening remarks of the current dialogue in voice format to the calling user.
In another possible implementation, when the opening remarks of the current dialogue are in text format and the third pushing module pushes the opening remarks of the current dialogue to the calling user, the third pushing module is specifically configured to:
converting the opening remarks of the current dialogue from text format to voice format;
and pushing the opening remarks of the current dialogue in voice format to the calling user.
In another possible implementation, the apparatus further includes: a detection module, wherein,
when detecting the call forwarding instruction of the called user, the detection module is specifically configured to:
detecting that the contextual model currently set by the called user is a preset contextual model; or,
detecting that the contextual model currently set by the called user is a preset contextual model, and detecting a rejection instruction of the called user, the rejection instruction being triggered by the called user's rejection operation for the current call; or,
detecting the rejection instruction of the called user.
In another possible implementation, when acquiring the dialogue information, the first obtaining module is specifically configured to:
acquiring the call information of the calling user;
determining corresponding voice reply information based on the call information of the calling user;
pushing the voice reply information to the calling user;
and cyclically executing the step of acquiring the call information of the calling user, the step of determining the corresponding voice reply information based on the call information of the calling user, and the step of pushing the voice reply information to the calling user, until the end of the call between the calling user and the forwarding party is detected, so as to obtain the dialogue information.
In another possible implementation, when determining the corresponding voice reply information based on the call information of the calling user, the first obtaining module is specifically configured to:
determining the current call scene based on the call information of the calling user;
and determining the corresponding voice reply information based on the current call scene and the call information of the calling user.
In another possible implementation, when determining the current call scene based on the call information of the calling user, the first obtaining module is specifically configured to:
converting the call information of the calling user into call information in text format;
and extracting keywords from the call information in text format, and determining the current call scene based on the extracted keywords;
when determining the corresponding voice reply information based on the current call scene and the call information of the calling user, the first obtaining module is specifically configured to:
determining corresponding reply information in text format based on the current call scene and the call information of the calling user;
and converting the reply information in text format into the voice reply information.
In another possible implementation, the dialogue information includes the dialogue information in text format and the dialogue information in voice format;
the apparatus further includes: an association module and a labeling module, wherein,
the association module is configured to associate the dialogue information in text format with the dialogue information in voice format one by one;
the labeling module is configured to label the associated text-format dialogue information and/or the associated voice-format dialogue information with identities based on the identity information of both parties to the call;
when pushing the identity information of both parties to the call and the dialogue information in text format to the called user, the first pushing module is specifically configured to:
pushing the labeled text-format dialogue information to the called user; or,
pushing the labeled text-format dialogue information and the labeled voice-format dialogue information to the called user.
In a third aspect, an electronic device is provided, including:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the operations corresponding to the call forwarding method according to any one of the possible implementations of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the call forwarding method shown in any one of the possible implementations of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
compared with the related art, in the method, when a call request from the calling user is detected and a call forwarding instruction of the called user is detected, the identity information of the calling user, the identity information of the forwarding party and the dialogue information between them are obtained and stored; when the end of the call between the calling user and the forwarding party is detected, or after a dialogue acquisition request of the called user is detected, the identity information of the calling user, the identity information of the forwarding party and the dialogue information in text format between the calling user and the forwarding party are sent directly to the called user, so that the called user can directly review the dialogue between the calling user and the forwarding party. This improves both the efficiency and the accuracy with which the called user learns the dialogue information between the calling user and the forwarding party.
Drawings
Fig. 1 is a flowchart illustrating a call forwarding method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a call forwarding process according to an embodiment of the present application.
Fig. 3 is a schematic diagram of call content delivery according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a call forwarding apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the attached drawings.
The embodiments are provided only to explain the present application and do not limit it. After reading this specification, those skilled in the art may modify the embodiments as needed without making an inventive contribution, but all such modifications are protected by patent law within the scope of the claims of the present application.
In the modern information society, people inevitably receive endless advertising and sales calls, which is very annoying. By observing users' pain points in calls, the present invention provides anti-harassment and anti-fraud functions that can intelligently identify harassment and fraud calls updated in real time amid large volumes of call information, and safeguard call security around the clock. The pain points of ECT mainly lie in the loss of call content: first, if the forwarding party has a problem, the call information may be lost outright; second, information is lost in retelling, since person-to-person oral relay inevitably drops details. The Voice Mailbox (VMB) is an upgraded form of ECT that records the caller's audio to faithfully preserve the call content. VMB has two main pain points: first, locating the key information of a call is costly, because the whole message must be listened to attentively and the key parts may need to be replayed and transcribed repeatedly; second, without a dialogue counterpart, the caller tends to omit or abbreviate the information being conveyed.
Unlike existing ECT and VMB communication enhancement products, the embodiments of the present application use artificial intelligence semantic analysis (AISA) technology to realize automatic response, missed-call push and dialogue playback. The embodiments thereby reduce manual intervention, improve the accuracy of information transfer, improve the efficiency of key-information extraction and improve the user's communication experience, making every call more valuable.
The embodiments of the present application build a communication assistant system covering internet communication service enhancement products before, during and after a call, achieving comprehensive coverage of government units, small, medium and micro enterprises, and the mass market. Intelligent response combines artificial intelligence with speech recognition capability to quickly match dialogue scenes and accurately convey incoming-call information. Based on AISA, communication signaling, WeChat services and IM (Instant Messenger), the system realizes harassment interception, missed-call retrieval, barrier-free calling (serving the deaf and mute) and dialogue push, providing communication assistant services for operators and the communications industry.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In addition, the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects, unless otherwise specified.
The embodiments of the present application will be described in further detail with reference to the drawings.
An embodiment of the present application provides a method for call forwarding, which is performed by an electronic device; as shown in fig. 1, the method may include:
Step S1, when a call request from a calling user is detected and a call forwarding instruction of the called user is detected, the identity information of both parties to the call and the dialogue information are acquired and stored.
The identity information of both parties to the call comprises the identity information of the calling user and the identity information of the forwarding party, and the dialogue information comprises the calling user's call information and the corresponding reply information of the forwarding party. In this embodiment, the electronic device may acquire and store the dialogue information in real time, or store it after the dialogue information of the conversation has been acquired.
In this embodiment, when the called user cannot or does not want to answer, the forwarding party answers in place of the called user. The forwarding party may be a user with another telephone number or an artificial-intelligence robot; that is, either a user with another telephone number or an AI robot may answer the calling user's call request and converse with the calling user on the called user's behalf. Further, the identity information of both parties is acquired and stored after the call starts, and the dialogue information of both parties may be acquired and stored in real time or after a round of dialogue is completed.
For example, when a call request from user A to user B is detected and a call forwarding instruction of user B is detected, the call is forwarded to user C to answer, and the identity information of user A and user C and the dialogue information of the two parties are acquired.
Step S2, when a preset condition is met, the identity information of both parties to the call and the dialogue information in text format are pushed to the called user.
The preset condition may specifically include: detecting that the call between the calling user and the forwarding party has ended; or detecting a dialogue acquisition request of the called user, the dialogue acquisition request being used for requesting the dialogue information between the calling user and the forwarding party.
In this embodiment, when the call ends or a dialogue acquisition request from the called user is detected, the forwarded call content is sent to the called user in text format, together with the identity information of the calling user and the forwarding party, so that the called user can review the call information of the call between the calling user and the forwarding party.
Compared with the prior art, in this embodiment, when a call request from the calling user is detected and a call forwarding instruction of the called user is detected, the identity information of the calling user, the identity information of the forwarding party and the dialogue information between them are acquired and stored; when the end of the call between the calling user and the forwarding party is detected, or after a dialogue acquisition request from the called user is detected, the identity information of the calling user, the identity information of the forwarding party and the dialogue information in text format between the calling user and the forwarding party are sent directly to the called user, so that the called user can directly review the dialogue between the calling user and the forwarding party. This improves both the efficiency and the accuracy with which the called user learns the dialogue information between the calling user and the forwarding party.
Another possible implementation of the embodiment of the present application concerns the manner of detecting the call forwarding instruction of the called user, which may specifically include at least one of a first, a second and a third mode, wherein,
in the first mode, it is detected that the contextual model currently set by the called user is a preset contextual model. That is, in this embodiment, when the contextual model currently set by the called user is detected to be a preset contextual model, the call forwarding instruction of the called user is considered detected.
In this case, call forwarding can be triggered without any rejection operation by the called user, that is, the called user does not need to perform extra operations, which saves the called user's time while still achieving call forwarding.
In the second mode, it is detected that the contextual model currently set by the called user is a preset contextual model and a rejection instruction of the called user is detected. That is, in this embodiment, when the contextual model currently set by the called user is detected to be a preset contextual model and the rejection instruction of the called user is detected, the call forwarding instruction of the called user is considered detected.
For example, when a call request from the calling user to the called user is detected, the contextual model currently set by the called user is a preset contextual model such as a conference model, and a rejection instruction of the called user is detected shortly afterwards, call forwarding is triggered.
The rejection instruction of the called user is triggered by the called user's rejection operation for the current call.
In the third mode, the rejection instruction of the called user is detected. That is, as soon as the rejection instruction of the called user is detected, the call forwarding instruction of the called user is considered detected.
In this case, whether or not the contextual model currently set by the called user is a preset contextual model has no influence on detecting the call forwarding instruction; call forwarding is triggered directly by the called user's rejection operation. This saves the time needed to detect the call forwarding instruction and avoids call forwarding being triggered merely because the called user forgot to change the contextual model.
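As a minimal illustration of the three trigger modes above, the check below is a sketch only; the profile names, the reject flag and the mode parameter are assumptions introduced here and are not defined by the application.

    def forwarding_triggered(current_profile: str,
                             preset_profiles: set,
                             reject_pressed: bool,
                             mode: int) -> bool:
        """Return True when a call forwarding instruction is considered detected.

        mode 1: a preset contextual model alone triggers forwarding;
        mode 2: a preset contextual model and an explicit rejection are both required;
        mode 3: an explicit rejection alone triggers forwarding.
        (All modes presuppose the call forwarding service is activated.)
        """
        in_preset = current_profile in preset_profiles
        if mode == 1:
            return in_preset
        if mode == 2:
            return in_preset and reject_pressed
        if mode == 3:
            return reject_pressed
        return False

    # Example: conference model is set and the called user also taps "reject"
    print(forwarding_triggered("conference", {"conference"}, True, 2))  # True
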
It should be noted that all three ways of detecting the call forwarding instruction of the called user presuppose that the called user has activated the call forwarding service. In a possible implementation of the embodiment of the present application, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the method may further include step Sa (not shown), step Sb (not shown) and step Sc (not shown). Steps Sa, Sb and Sc may be performed before the identity information and dialogue information of both parties are acquired, or after the identity information of both parties is acquired and before the dialogue information of both parties is acquired; this is not limited in the embodiment of the present application.
Step Sa, acquiring the contextual model currently set by the called user.
In this embodiment, after the call request from the calling user to the called user is detected, the contextual model currently set by the called user is acquired; for example, the contextual model currently set by the called user is a conference model.
Step Sb, generating the opening remarks of the current dialogue based on the currently set contextual model.
In this embodiment, opening remarks matched to the contextual model are generated based on the contextual model set by the called user. Specifically, the dialogue opening remarks corresponding to the current contextual model are looked up in a database based on that model. The opening remarks may be audio recordings or text set in advance by the user for different contextual models, opening remarks pre-stored in a preset database for each contextual model, or opening remarks matched to the called user's current contextual model and produced by a trained opening-remark generation model.
The above embodiment may be implemented by a computer or a computer program.
Step Sc, pushing the opening remarks of the current dialogue to the calling user.
In this embodiment, when the opening remarks of the current dialogue are in voice format, they are pushed directly to the calling user; when they are in text format, they are first converted from text format to voice format and the converted voice-format opening remarks are then pushed to the calling user. Of course, the text-format opening remarks may also be pushed directly.
Before the opening remarks are sent to the calling user, they are generated according to the contextual model set by the called user, such as a conference model, so that the generated opening remarks better fit the called user's current situation, the calling user can accurately learn why the call was forwarded, and the user experience is improved.
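A minimal sketch of step Sb, assuming a simple lookup table keyed by contextual model; a trained opening-remark generation model could stand in for the lookup, as noted above. The profile names and the default wording are illustrative assumptions, not text prescribed by the application.

    # Hypothetical mapping from contextual model to text-format opening remarks;
    # in practice these could come from a preset database or a trained model.
    OPENING_REMARKS_BY_PROFILE = {
        "conference": "The called party is in a meeting; this assistant will take your message.",
        "driving": "The called party is driving; please leave your message with this assistant.",
    }

    def generate_opening_remarks(current_profile: str) -> str:
        # Step Sb: produce opening remarks matched to the called user's contextual model.
        return OPENING_REMARKS_BY_PROFILE.get(
            current_profile,
            "The called party cannot answer right now; please state your business.",
        )
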
Further, the opening remarks pushed to the calling user may be generated based on the contextual model set by the called user, or may be opening remarks set by the called user. That is, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the method may further include step Sd (not shown) and step Se (not shown). Steps Sd and Se may be performed before the identity information and dialogue information of both parties are acquired, or after the identity information of both parties is acquired and before the dialogue information of both parties is acquired.
Step Sd, obtaining the dialogue opening remarks set by the called user as the opening remarks of the current dialogue.
In this embodiment, the opening remarks of the current dialogue may also be dialogue opening remarks recorded by the called user, opening remarks entered by the called user in text format, or opening remarks chosen by a selection operation from multiple candidate dialogue opening remarks; this is not limited in the embodiment of the present application. In other words, the opening remarks of the current dialogue may be in voice format or in text format.
Step Se, pushing the opening remarks of the current dialogue to the calling user.
In this embodiment, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the dialogue opening remarks set by the called user are sent to the calling user.
Specifically, if the opening remarks of the current dialogue are in voice format, they are pushed directly to the calling user; if they are in text format, they are first converted into voice format and the converted voice-format opening remarks are then pushed to the calling user. Of course, the text-format opening remarks may also be sent to the calling user.
In this embodiment, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the opening remarks set by the user are pushed to the calling user, which improves the reliability of the opening remarks and the experience of both the called user and the calling user.
Further, in the above embodiments, the opening remarks generated based on the contextual model and the obtained opening remarks set by the user may be in voice format or in text format. If the opening remarks of the current dialogue are in text format, pushing them to the calling user may specifically include step Sce1 (not shown) and step Sce2 (not shown), wherein,
Step Sce1, converting the opening remarks of the current dialogue from text format to voice format.
In this embodiment, the text-format opening remarks of the current dialogue are converted into voice-format opening remarks by Text-To-Speech (TTS).
Step Sce2, pushing the voice-format opening remarks of the current dialogue to the calling user.
In this embodiment, the voice-format opening remarks of the current dialogue are converted into an audio stream and pushed to the calling user.
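Steps Sce1 and Sce2 amount to a text-to-speech conversion followed by streaming the audio to the calling user. In the sketch below the TTS engine and the media channel are injected callables, since the application does not name a concrete implementation; the chunk size is an arbitrary illustrative value.

    from typing import Callable, Iterator

    def push_opening_remarks(opening_text: str,
                             tts: Callable[[str], bytes],
                             send_audio_stream: Callable[[Iterator[bytes]], None],
                             chunk_size: int = 3200) -> None:
        # Step Sce1: convert the text-format opening remarks into speech audio.
        audio = tts(opening_text)
        # Step Sce2: push the voice-format opening remarks as an audio stream.
        chunks = (audio[i:i + chunk_size] for i in range(0, len(audio), chunk_size))
        send_audio_stream(chunks)
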
In another possible implementation of the embodiment of the present application, obtaining the dialogue information in step S1 may specifically include step S101 (not shown), step S102 (not shown), step S103 (not shown) and step S104 (not shown), wherein,
Step S101, acquiring the call information of the calling user.
In this embodiment, the call information of the calling user is acquired in voice format.
Step S102, determining the corresponding voice reply information based on the call information of the calling user.
In this embodiment, the corresponding voice reply information may be determined directly from the call information of the calling user; to improve its accuracy, it may also be determined from both the call information of the calling user and the scene of that call information.
Specifically, determining the corresponding voice reply information in step S102 based on the call information of the calling user may specifically include step S1021 (not shown) and step S1022 (not shown), wherein,
Step S1021, determining the current call scene based on the call information of the calling user.
In this embodiment, once the call information of the calling user is obtained, it can be analyzed, for example by extracting keywords, to determine the current call scene; the current call scene may be, for instance, a takeaway scene.
Specifically, determining the current call scene based on the call information of the calling user may specifically include step S10211 (not shown) and step S10212 (not shown), wherein,
Step S10211, converting the call information of the calling user into call information in text format.
In this embodiment, the call information of the calling user is in voice format and is received as an audio stream; it is converted into text format to obtain the calling user's call information in text format.
Step S10212, extracting keywords from the call information in text format, and determining the current call scene based on the extracted keywords.
In this embodiment, to determine the current call scene efficiently, keywords are extracted from the text-format call information and the scene is determined from them: artificial intelligence semantic analysis (AISA) finds the call scene with the highest similarity. For scene recognition, the AISA model learns multiple scenes; by building more application scenes such as express delivery, takeaway and marketing, and by continuously enriching the resources within each scene, the efficiency and accuracy of dialogue in each scene are improved. For example, if the calling user's text-format call information contains the word "takeaway", the current call scene is determined to be a takeaway scene.
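As a rough stand-in for the AISA scene recognition described above, the following keyword-matching sketch shows the shape of steps S10211 and S10212; the keyword lists and scene names are assumptions for illustration, and a real model would use semantic analysis rather than substring counts.

    # Illustrative keyword lists per call scene; real scenes would be learned by
    # the AISA model from many example dialogues.
    SCENE_KEYWORDS = {
        "takeaway": ["takeaway", "food delivery", "order"],
        "express": ["express", "parcel", "courier"],
        "marketing": ["promotion", "discount", "insurance"],
    }

    def determine_call_scene(call_text: str) -> str:
        # Step S10212: pick the scene whose keywords best match the caller's text.
        text = call_text.lower()
        scores = {scene: sum(kw in text for kw in kws)
                  for scene, kws in SCENE_KEYWORDS.items()}
        best_scene, best_score = max(scores.items(), key=lambda kv: kv[1])
        return best_scene if best_score > 0 else "general"

    print(determine_call_scene("Hi, your takeaway has arrived"))  # takeaway
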
Step S1022, based on the current call scenario and the call information of the calling party, determine the corresponding voice reply information.
In this embodiment, after the current call scene is determined, the corresponding voice reply information is determined from the current call scene and the call information of the calling user. Specifically, the voice reply information may be obtained from the current call scene and the calling user's call information by a pre-trained model; alternatively, the reply-information database corresponding to the current call scene may be determined first, the corresponding reply information then selected from that database based on the calling user's call information, and the reply information finally converted into voice format.
For example, if the current call scene is a takeaway scene and the call information of the calling user is "the takeaway has arrived", then, based on "the takeaway has arrived", the corresponding reply "please take the takeaway to the front desk" is determined from the database of the takeaway scene.
It should be noted that the reply information contained in the reply-information database corresponding to each call scene may be learned in advance by a model, or may be entered in advance by the called user.
Determining the current call scene from the calling user's call information first, and then determining the reply information from the scene and the call information together, improves the accuracy of the voice reply information and the user experience.
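Step S1022 can be pictured as a lookup in a per-scene reply database followed by conversion to voice. The entries and the substring matching below are made-up placeholders standing in for a pre-trained model or AISA matching.

    # Hypothetical per-scene reply databases; entries could be learned in advance
    # by a model or entered in advance by the called user.
    REPLY_DB = {
        "takeaway": {"arrived": "Please take the takeaway to the front desk."},
        "express": {"sign": "Please leave the parcel in the smart locker."},
        "general": {"": "The called party will get back to you later."},
    }

    def determine_text_reply(scene: str, call_text: str) -> str:
        # Step S10221: pick the best-matching text-format reply for this scene.
        candidates = REPLY_DB.get(scene, REPLY_DB["general"])
        for cue, reply in candidates.items():
            if cue and cue in call_text.lower():
                return reply
        return REPLY_DB["general"][""]

    print(determine_text_reply("takeaway", "The takeaway has arrived"))
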
Specifically, determining the corresponding voice reply information based on the current call scene and the call information of the calling user may specifically include step S10221 (not shown) and step S10222 (not shown), wherein,
Step S10221, determining the corresponding reply information in text format based on the current call scene and the call information of the calling user.
In this embodiment, after the current call scene is determined, the text format of the calling user's call information is analyzed by AISA to obtain an analysis result; based on the current call scene and the analysis result, the reply information that best matches the calling user's call information is looked up in the reply-information database corresponding to that scene, and the corresponding text-format reply information is determined.
Step S10222, converting the reply information in text format into voice reply information.
In this embodiment, in order to deliver the reply information to the calling user, the text-format reply information is converted into voice reply information by TTS.
Step S103, pushing the voice reply information to the calling user.
In this embodiment, after the text-format reply information is converted into voice reply information, the voice reply information is converted into an audio stream and pushed to the calling user.
Step S104, cyclically executing the step of acquiring the call information of the calling user, the step of determining the corresponding voice reply information based on the call information of the calling user, and the step of pushing the voice reply information to the calling user, until the end of the call between the calling user and the forwarding party is detected, so as to obtain the dialogue information.
In this embodiment, if the calling user and the forwarding party exchange only one round of dialogue, the step of acquiring the call information of the calling user, the step of determining the corresponding voice reply information, and the step of pushing the voice reply information are executed only once; if there are multiple rounds of dialogue, these steps are executed cyclically according to the number of rounds, so that the dialogue content pushed to the called user is complete.
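Step S104 is a simple acquire-reply-push loop. The sketch below leaves speech recognition, reply determination and audio delivery as injected callables (receive_caller_text, reply_for, push_reply), which are assumptions of this illustration rather than components named by the application.

    from typing import Callable, List, Optional, Tuple

    def run_forwarded_dialogue(receive_caller_text: Callable[[], Optional[str]],
                               reply_for: Callable[[str], str],
                               push_reply: Callable[[str], None]) -> List[Tuple[str, str]]:
        # Loop: step S101 -> step S102 -> step S103, until the call ends.
        dialogue: List[Tuple[str, str]] = []
        while True:
            utterance = receive_caller_text()   # caller's speech, already as text
            if utterance is None:               # call between caller and forwarder ended
                break
            reply = reply_for(utterance)        # scene recognition + reply lookup
            push_reply(reply)                   # TTS + audio-stream push
            dialogue.append(("caller", utterance))
            dialogue.append(("forwarder", reply))
        return dialogue                         # the stored dialogue information
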
For example, as shown in fig. 2, the calling user's call request is received through the core network and the UAP. When a call forwarding instruction is received, the dialogue opening remarks are requested; the opening remarks are in text format and are converted into voice format by TTS, and the voice-format opening remarks are sent to the calling user as an audio stream through the core network and the UAP. The calling user hears the opening remarks and speaks; after receiving the calling user's call information, the UAP sends it to the VTT, which recognizes the speech and returns the text to the AISA. The AISA performs dialogue scene recognition on the text to obtain the reply information, the text-format reply information is converted into voice format, and the voice reply information is pushed to the calling user as an audio stream through the UAP.
In another possible implementation of the embodiment of the present application, the dialogue information includes the dialogue information in text format and the dialogue information in voice format; before the identity information of both parties to the call and the dialogue information in text format are pushed to the called user, the method may further include step S2a (not shown) and step S2b (not shown), wherein,
Step S2a, associating the dialogue information in text format with the dialogue information in voice format one by one.
In this embodiment, so that the called user can hear the corresponding voice-format dialogue while viewing the text-format dialogue, the text-format dialogue information and the voice-format dialogue information are associated one by one, so that each piece of text-format dialogue information corresponds to a piece of voice-format dialogue information; for example, the text "hello" corresponds to the voice recording of "hello", so the voice form and text form of the same sentence are matched.
For example, after one piece of the calling user's voice-format call information has been converted into text-format call information, a mapping between the voice form and the text form of that piece of call information is recorded, the two forms are matched based on the mapping, and the text-format dialogue information is associated with the corresponding voice-format dialogue information.
Step S2b, labeling the associated text-format dialogue information and/or the associated voice-format dialogue information with identities based on the identity information of both parties to the call.
In this embodiment, based on the identity information of both parties to the call, only the associated text-format dialogue information may be labeled, only the associated voice-format dialogue information may be labeled, or both may be labeled, so as to distinguish the utterances of the calling user from those of the forwarding party.
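Steps S2a and S2b amount to pairing each text segment with its voice segment and tagging both with the speaker's identity. A minimal sketch, assuming the three lists below are already aligned one entry per utterance, which is what the recorded text/voice mapping guarantees.

    from typing import Dict, List

    def associate_and_label(text_turns: List[str],
                            audio_turns: List[bytes],
                            speakers: List[str]) -> List[Dict]:
        # Step S2a: associate text and voice dialogue information one by one;
        # Step S2b: label each associated pair with the speaker's identity.
        if not (len(text_turns) == len(audio_turns) == len(speakers)):
            raise ValueError("text, audio and speaker lists must be aligned")
        return [
            {"speaker": spk, "text": txt, "audio": aud}
            for spk, txt, aud in zip(speakers, text_turns, audio_turns)
        ]
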
On the basis of the above embodiments, pushing the identity information of both parties to the call and the text-format dialogue information to the called user in step S2 may specifically include step S201 (not shown) or step S202 (not shown), wherein,
Step S201, pushing the labeled text-format dialogue information to the called user.
In this embodiment, the labeled text-format dialogue information is pushed to the called user, and the called user can learn, through the text-format dialogue content, what the calling user and the forwarding party said during the forwarded call.
Step S202, pushing the labeled text-format dialogue information and the labeled voice-format dialogue information to the called user.
In this embodiment, the labeled text-format and voice-format dialogue information are pushed to the called user, so the called user can also listen to the voice-format dialogue while viewing the text-format dialogue, which saves the called user time in learning the call content; labeling the voice-format and text-format dialogue information makes it easy for the user to tell which party produced each piece of dialogue.
Further, in this embodiment, the dialogue information may also be pushed to the called user as a link. When the called user clicks the link, the dialogue between the calling user and the forwarding party is pushed to the called user, so that the called user's terminal displays the text-format dialogue information and/or the voice-format dialogue information, and the user can replay the key parts of the call content by tapping the voice-format dialogue information repeatedly.
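The push to the called user can then carry the labeled text alone or text plus replayable audio, for example behind a link. The payload layout and the placeholder URLs below are only an illustration of one possible format.

    import json
    from typing import Dict, List

    def build_push_payload(caller_id: str,
                           forwarder_id: str,
                           labeled_turns: List[Dict],
                           include_audio_links: bool = True) -> str:
        # Assemble the message pushed to the called user (step S201 or S202).
        dialogue = []
        for i, turn in enumerate(labeled_turns):
            entry = {"speaker": turn["speaker"], "text": turn["text"]}
            if include_audio_links:
                # Placeholder link; a real deployment would host the audio clip.
                entry["audio_url"] = f"https://example.invalid/call/{i}"
            dialogue.append(entry)
        return json.dumps({"caller": caller_id,
                           "forwarder": forwarder_id,
                           "dialogue": dialogue}, ensure_ascii=False)
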
In this embodiment, the calling user's call information may also be semantically analyzed and, after the call ends, the dialogue information may be tagged with a label such as "invitation" or "meeting", so that the called user can quickly grasp the call information; when the called user has at least two missed calls to review, the call content can be reviewed in order of the importance of the labels, where a missed call here is a call that was forwarded for the called user.
Further, a possible implementation of the embodiment of the present application is introduced through a specific scenario, namely an incoming call to a phone number for which the called user has set call forwarding, as shown in fig. 3: the called user activates the call forwarding service and sets up call forwarding, and sets the conference mode before a meeting starts. When an outside call comes in and the called user rejects it, call forwarding is triggered; the opening remarks of the current dialogue for the conference mode (in voice format) are sent to the calling user (here, a takeaway rider) as an audio stream. The rider listens to the opening remarks and then speaks into the phone to send the call information; the rider's call information is sent as an audio stream, the takeaway scene is recognized by the AISA model based on keywords in the rider's call information, the corresponding voice reply information is determined for the takeaway scene, the reply is returned to the rider as an audio stream (that is, the audio for the takeaway scene is returned), and the rider handles the takeaway as required based on the voice reply information.
Further, it should be noted that, in the embodiments of the present application, the "calling user", "called user" and "forwarding user" all refer to devices.
The above embodiments describe a method for call forwarding from the perspective of method flow, and the following embodiments describe an apparatus for call forwarding from the perspective of virtual module or virtual unit, which are described in detail in the following embodiments.
An embodiment of the present application provides a device for call forwarding, as shown in fig. 4, the device 40 may specifically include: a first retrieving module 41 and a first pushing module 42, wherein,
a first obtaining module 41, configured to obtain and store identity information of two parties in a call and session information when a call request of a calling party is detected and a call forwarding instruction of a called party is detected, where the identity information of the two parties in the call includes: the system comprises calling identity information and switching party identity information, wherein the conversation information comprises calling conversation information and answering information corresponding to a switching party;
the first pushing module 42 is configured to, when a preset condition is met, push identity information of both parties in a call and session information in a text format to a called party;
wherein the preset condition comprises at least one of the following conditions:
detecting the end of the call between the calling user and the transfer party;
and detecting a conversation acquisition request of the called user, wherein the conversation acquisition request is used for requesting conversation information between the calling user and the transfer party.
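As a non-authoritative sketch of how the apparatus 40 might be organized in software (the class and method names are assumptions and not part of the embodiment):

```python
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    caller_id: str
    transfer_party_id: str
    dialog_text: list = field(default_factory=list)   # dialog information in text format
    dialog_audio: list = field(default_factory=list)  # dialog information in voice format
    ended: bool = False

class CallForwardingApparatus:
    """Rough software analogue of apparatus 40 (first obtaining module 41, first pushing module 42)."""

    def __init__(self):
        self.records = {}

    # first obtaining module 41: store the identities and accumulate the dialog information
    def on_forwarded_call(self, call_id, caller_id, transfer_party_id):
        self.records[call_id] = CallRecord(caller_id, transfer_party_id)

    def append_dialog(self, call_id, text_item, audio_item):
        record = self.records[call_id]
        record.dialog_text.append(text_item)
        record.dialog_audio.append(audio_item)

    # first pushing module 42: push only when a preset condition is met
    def maybe_push(self, call_id, dialog_requested=False, push=print):
        record = self.records[call_id]
        if record.ended or dialog_requested:           # call ended OR dialog acquisition request
            push({"caller": record.caller_id,
                  "transfer_party": record.transfer_party_id,
                  "dialog_text": record.dialog_text})
```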
In a possible implementation manner of the embodiment of the present application, when the call request of the calling user is detected and the call forwarding instruction of the called user is detected, the apparatus 40 further includes: a second obtaining module, a generating module and a second pushing module, wherein
the second obtaining module is used for acquiring the current contextual model set by the called user;
the generating module is used for generating the opening information of the current conversation based on the currently set contextual model;
and the second pushing module is used for pushing the opening information of the current conversation to the calling subscriber.
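A minimal sketch of such opening-information generation, assuming a simple table lookup keyed by the contextual model (the profile names and phrasing are illustrative assumptions); in practice the table could be replaced by any template or dialog-generation component.

```python
# Hypothetical mapping from the called user's contextual model to opening information.
OPENING_BY_PROFILE = {
    "meeting": "The called user is in a meeting. Please state your business after the tone.",
    "driving": "The called user is driving. Please leave a short message.",
    "resting": "The called user is resting. Please call back later or leave a message.",
}

def generate_opening(profile: str) -> str:
    """Generate the opening information of the current dialog from the contextual model."""
    return OPENING_BY_PROFILE.get(
        profile, "The called user cannot answer right now. Please leave a message.")
```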
In another possible implementation manner of the embodiment of the present application, when the call request of the calling subscriber is detected and the call forwarding instruction of the called subscriber is detected, the apparatus 40 further includes: a third acquisition module and a third push module, wherein,
the third acquisition module is used for acquiring the dialogue opening information set by the called user as the opening information of the current dialogue;
and the third pushing module is used for pushing the opening information of the current conversation to the calling subscriber.
In another possible implementation manner of the embodiment of the present application, when the opening information of the current dialog is in a text format, the second pushing module is specifically configured to:
converting the opening information of the current conversation from a text format to a voice format;
and pushing the opening information of the current conversation in the voice format to the calling user.
In another possible implementation manner of the embodiment of the present application, when the opening information of the current dialog is in a text format, the third pushing module is specifically configured to:
converting the opening information of the current conversation from a text format to a voice format;
and pushing the opening information of the current conversation in the voice format to the calling user.
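The conversion-and-push step of the second and third pushing modules could be sketched as follows; `tts_engine.synthesize` and `caller_channel.play` stand in for whatever text-to-speech and media-channel interfaces are actually used and are assumptions of this sketch.

```python
def push_opening(opening, caller_channel, tts_engine):
    """Convert text-format opening information to voice format and push it to the calling user."""
    if isinstance(opening, str):                 # opening information is in text format
        audio = tts_engine.synthesize(opening)   # assumed TTS interface, backend-agnostic
    else:
        audio = opening                          # already in voice format
    caller_channel.play(audio)                   # assumed media-channel interface
```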
In another possible implementation manner of the embodiment of the present application, the apparatus 40 further includes: a detection module, wherein,
when detecting the call forwarding instruction of the called user, the detection module is specifically configured to perform any one of the following:
detecting that a current contextual model set by a called user is a preset contextual model; or,
detecting that a current contextual model set by a called user is a preset contextual model and detecting a rejection instruction of the called user, wherein the rejection instruction of the called user is triggered based on the rejection operation of the called user for the current call; or,
and detecting a rejection instruction of the called user.
In another possible implementation manner of the embodiment of the present application, when acquiring the dialog information, the first acquiring module 41 is specifically configured to:
acquiring call information of a calling user;
determining corresponding voice reply information based on the call information of the calling user;
pushing the voice reply information to the calling subscriber;
and cyclically executing the steps of acquiring the call information of the calling user, determining the corresponding voice reply information based on the call information of the calling user, and pushing the voice reply information to the calling user, until it is detected that the call between the calling user and the transfer party has ended, so as to obtain the dialog information.
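A sketch of this acquisition loop is given below, assuming `get_caller_speech`, `determine_voice_reply` and `push_audio` are provided by the surrounding system; all names are placeholders, not part of the embodiment.

```python
def collect_dialog(call, get_caller_speech, determine_voice_reply, push_audio):
    """Loop: acquire caller speech, determine a voice reply, push it, until the call ends."""
    dialog = []
    while not call.ended():                      # until the calling-user/transfer-party call ends
        caller_audio = get_caller_speech(call)   # call information of the calling user
        reply_audio = determine_voice_reply(caller_audio)
        push_audio(call, reply_audio)            # push the voice reply information to the caller
        dialog.append(("caller", caller_audio))
        dialog.append(("transfer_party", reply_audio))
    return dialog                                # the accumulated dialog information
```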
In another possible implementation manner of the embodiment of the present application, when determining the corresponding voice reply information based on the call information of the calling party, the first obtaining module 41 is specifically configured to:
determining a current call scene based on call information of a calling party;
and determining corresponding voice reply information based on the current call scene and the call information of the calling party.
In another possible implementation manner of the embodiment of the present application, when determining the current call scenario based on the call information of the calling party, the first obtaining module 41 is specifically configured to:
converting the call information of the calling user into the call information in a text format;
extracting keywords from the call information in the text format, and determining a current call scene based on the extracted keywords;
the first obtaining module 41 is specifically configured to, when determining the corresponding voice reply information based on the current call scenario and the call information of the calling party:
determining reply information in a corresponding text format based on the current call scene and the call information of the calling party;
and converting the reply message in the text format into a voice reply message.
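The determination chain of this implementation might be sketched as follows; `asr.transcribe` and `tts.synthesize` are placeholder interfaces, and the keyword-based scene match stands in for whatever recognition model is actually used (compare the earlier takeaway sketch).

```python
def determine_voice_reply(caller_audio, asr, tts, scene_keywords, scene_replies):
    """Speech -> text -> keywords/scene -> text-format reply -> voice reply information."""
    caller_text = asr.transcribe(caller_audio).lower()    # call information in text format
    scene = next((s for s, kws in scene_keywords.items()  # keyword match decides the call scene
                  if any(k in caller_text for k in kws)), "default")
    reply_text = scene_replies.get(scene, "Please leave a message.")  # reply in text format
    return tts.synthesize(reply_text)                     # reply converted to voice format
```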
In another possible implementation manner of the embodiment of the present application, the dialog information includes dialog information in a text format and dialog information in a voice format;
the apparatus 40 further comprises: an association module and a labeling module, wherein,
the association module is used for associating the dialog information in the text format with the dialog information in the voice format on a one-to-one basis;
the labeling module is used for carrying out identity labeling on the associated text format conversation information and/or the associated voice format conversation information based on the identity information of the two parties in conversation;
when the first pushing module 42 pushes the identity information of the two parties in a call and the dialog information in the text format to the called party, it is specifically configured to:
pushing the marked dialog information in the text format to a called user; or,
and pushing the marked dialog information in the text format and the marked dialog information in the voice format to the called user.
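A minimal sketch of the one-to-one association and identity labelling follows, assuming each dialog item is stored as a (speaker, content) pair as in the earlier loop sketch; this data layout is an assumption of the sketch.

```python
def label_dialog(dialog_text, dialog_audio, caller_id, transfer_party_id):
    """Associate text and voice dialog items one-to-one and tag each pair with a speaker identity."""
    labelled = []
    for (speaker, text), (_, audio) in zip(dialog_text, dialog_audio):
        identity = caller_id if speaker == "caller" else transfer_party_id
        labelled.append({"identity": identity, "text": text, "audio": audio})
    return labelled
```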
Compared with the prior art, in the embodiment of the present application, when a call request of a calling user is detected and a call forwarding instruction of a called user is detected, the calling-party identity information, the transfer-party identity information and the dialog information between the calling user and the transfer party are acquired and stored. When it is detected that the call between the calling user and the transfer party has ended, or after a dialog acquisition request of the called user is detected, the calling-party identity information, the transfer-party identity information and the dialog information in text format between the calling user and the transfer party are sent directly to the called user, so that the called user can learn the content of the call between the calling user and the transfer party simply by reading the text. This improves both the efficiency and the accuracy with which the called user learns the dialog information between the calling user and the transfer party.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the call forwarding apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
An embodiment of the present application provides an electronic device, as shown in fig. 5, an electronic device 50 shown in fig. 5 includes: a processor 501 and a memory 503. Wherein the processor 501 is coupled to the memory 503, such as via the bus 502. Optionally, the electronic device 50 may also include a transceiver 504. It should be noted that the transceiver 504 is not limited to one in practical application, and the structure of the electronic device 50 is not limited to the embodiment of the present application.
The processor 501 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present application. The processor 501 may also be a combination that implements computing functionality, for example a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 502 may include a path that transfers information between the above components. The bus 502 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this is not intended to represent only one bus or type of bus.
The Memory 503 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 503 is used for storing application program codes for executing the scheme of the application, and the processor 501 controls the execution. The processor 501 is configured to execute application program code stored in the memory 503 to implement the content shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. It may also be a server or the like. The electronic device shown in fig. 5 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
The present application provides a computer-readable storage medium on which a computer program is stored; when run on a computer, the program enables the computer to execute the corresponding content in the foregoing method embodiments. Compared with the prior art, in the embodiment of the present application, when a call request of a calling user is detected and a call forwarding instruction of a called user is detected, the calling-party identity information, the transfer-party identity information and the dialog information between the calling user and the transfer party are acquired and stored. When it is detected that the call between the calling user and the transfer party has ended, or after a dialog acquisition request of the called user is detected, the calling-party identity information, the transfer-party identity information and the dialog information in text format between the calling user and the transfer party are sent directly to the called user, so that the called user can learn the content of the call between the calling user and the transfer party simply by reading the text, which improves both the efficiency and the accuracy with which the called user learns the dialog information between the calling user and the transfer party.
It should be understood that, although the steps in the flowcharts of the figures are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited to this order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the figures may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The foregoing is only a partial embodiment of the present application. It should be noted that, for those of ordinary skill in the art, various improvements and refinements can be made without departing from the principle of the present application, and these improvements and refinements shall also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A method of call forwarding, comprising:
when a call request of a calling party is detected and a call transfer instruction of a called party is detected, acquiring and storing identity information and conversation information of two parties of a call, wherein the identity information of the two parties of the call comprises: caller identification information and switching party identification information, the dialogue information includes: calling call information and reply information corresponding to the transfer party;
when a preset condition is met, pushing the identity information of the two parties of the call and the dialogue information in the text format to a called user;
wherein the preset condition comprises at least one of the following conditions:
detecting that the call between the calling user and the transfer party is finished;
and detecting a conversation acquisition request of the called user, wherein the conversation acquisition request is used for requesting conversation information between the calling user and the transfer party.
2. The method according to claim 1, wherein when the call request of the calling subscriber is detected and the call transfer instruction of the called subscriber is detected, the method further comprises any one of the following:
acquiring a current contextual model set by a called user, generating the opening information of the current conversation based on the current contextual model, and pushing the opening information of the current conversation to the calling user;
and acquiring the dialog opening information set by the called user as the opening information of the current dialog, and pushing the opening information of the current dialog to the calling user.
3. The method according to claim 2, wherein if the opening information of the current conversation is in a text format, the pushing the opening information of the current conversation to the calling user includes:
converting the opening information of the current conversation from a text format to a voice format;
and pushing the opening information of the current conversation in the voice format to the calling user.
4. The method of claim 1, wherein detecting the call forwarding instruction of the called subscriber comprises any one of:
detecting that the current contextual model set by the called user is a preset contextual model;
detecting that the current contextual model set by the called user is a preset contextual model, and detecting a rejection instruction of the called user, wherein the rejection instruction of the called user is triggered based on the rejection operation of the called user for the current call;
and detecting the rejection instruction of the called user.
5. The method of claim 1, wherein obtaining dialog information comprises:
acquiring call information of the calling user;
determining corresponding voice reply information based on the call information of the calling user;
pushing the voice reply information to the calling user;
and circularly executing the step of acquiring the call information of the calling party, the step of determining corresponding voice reply information based on the call information of the calling party, and the step of pushing the voice reply information to the calling party until the call between the calling party and the transfer party is detected to be finished so as to obtain the conversation information.
6. The method of claim 5, wherein the determining the corresponding voice reply information based on the call information of the calling subscriber comprises:
determining a current call scene based on the call information of the calling party;
and determining corresponding voice reply information based on the current call scene and the call information of the calling party.
7. The method of claim 6, wherein the determining a current call scenario based on the call information of the calling party comprises:
converting the call information of the calling user into call information in a text format;
extracting key words from the call information in the text format, and determining a current call scene based on the extracted key words;
wherein, the determining the corresponding voice reply information based on the current call scenario and the call information of the calling party includes:
determining corresponding reply information in a text format based on the current call scene and the call information of the calling party;
and converting the reply message in the text format into the voice reply message.
8. The method of claim 1, wherein the dialog information includes the dialog information in the text format and dialog information in a voice format;
the pushing of the identity information of the two parties in the call and the dialogue information in the text format to the called user further comprises:
associating the text-format dialog information with the voice-format dialog information one by one;
carrying out identity labeling on the associated text format conversation information and/or the associated voice format conversation information based on the identity information of the two parties in the call;
the pushing of the identity information of the two parties in the call and the dialogue information in the text format to the called party comprises:
pushing the marked dialog information in the text format to the called user; or,
and pushing the marked text format conversation information and the marked voice format conversation information to the called user.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to: a method of performing call forwarding according to any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of call forwarding according to any one of claims 1 to 8.
CN202210753561.3A 2022-06-29 2022-06-29 Call forwarding method and device, electronic equipment and computer readable storage medium Pending CN115174749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210753561.3A CN115174749A (en) 2022-06-29 2022-06-29 Call forwarding method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115174749A true CN115174749A (en) 2022-10-11

Family

ID=83490054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210753561.3A Pending CN115174749A (en) 2022-06-29 2022-06-29 Call forwarding method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115174749A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519442A (en) * 2019-08-23 2019-11-29 北京金山安全软件有限公司 Method and device for providing telephone message leaving service, electronic equipment and storage medium
CN110708431A (en) * 2019-10-18 2020-01-17 北京珠穆朗玛移动通信有限公司 Call management method, communication terminal and storage medium
CN110891124A (en) * 2019-12-11 2020-03-17 厦门韭黄科技有限公司 System for artificial intelligence pick-up call

Similar Documents

Publication Publication Date Title
CN110891124B (en) System for artificial intelligence pick-up call
CN107580149B (en) Method and device for identifying reason of outbound failure, electronic equipment and storage medium
US10812655B1 (en) Methods and systems for seamless outbound cold calls using virtual agents
CN110381221B (en) Call processing method, device, system, equipment and computer storage medium
US6775360B2 (en) Method and system for providing textual content along with voice messages
US20210136205A1 (en) Methods and systems of virtual agent real-time recommendation, suggestion and advertisement
US20210134284A1 (en) Methods and systems for personalized virtual agents to learn from customers
US20210133765A1 (en) Methods and systems for socially aware virtual agents
US20210134283A1 (en) Methods and systems of virtual agent real-time recommendation, suggestion and advertisement
US20210136204A1 (en) Virtual agents within a cloud-based contact center
US20210134282A1 (en) Methods and systems for personalized virtual agents to learn from customers
CN109842712B (en) Call record generation method and device, computer equipment and storage medium
US20210136206A1 (en) Virtual agents within a cloud-based contact center
CN111737987B (en) Intention recognition method, device, equipment and storage medium
US20160036969A1 (en) Computer-based streaming voice data contact information extraction
US20210136195A1 (en) Methods and systems for virtual agent to understand and detect spammers, fraud calls, and auto dialers
US20210136208A1 (en) Methods and systems for virtual agent to understand and detect spammers, fraud calls, and auto dialers
US20210136209A1 (en) Methods and systems for virtual agents to check caller identity via multi channels
US20200125645A1 (en) Global simultaneous interpretation mobile phone and method
CN110740212B (en) Call answering method and device based on intelligent voice technology and electronic equipment
CN110502631B (en) Input information response method and device, computer equipment and storage medium
CN115174749A (en) Call forwarding method and device, electronic equipment and computer readable storage medium
US20210329127A1 (en) System and method for identifying call status in real-time
CN112133306B (en) Response method and device based on express delivery user and computer equipment
CN112911074B (en) Voice communication processing method, device, equipment and machine-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221011