CN115620720A - Method and device for muting session, electronic equipment and computer-readable storage medium - Google Patents

Method and device for muting session, electronic equipment and computer-readable storage medium

Info

Publication number
CN115620720A
CN115620720A (application CN202211513066.1A)
Authority
CN
China
Prior art keywords
conversation
mute
determining
session
intention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211513066.1A
Other languages
Chinese (zh)
Inventor
薛雯飞
韩亚昕
冯梦盈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingxi Beijing Technology Co Ltd
Original Assignee
Lingxi Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lingxi Beijing Technology Co Ltd filed Critical Lingxi Beijing Technology Co Ltd
Priority to CN202211513066.1A priority Critical patent/CN115620720A/en
Publication of CN115620720A publication Critical patent/CN115620720A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 15/1822 Parsing for meaning understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/02 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/225 Feedback of the input speech

Abstract

The application belongs to the technical field of communications and discloses a method, a device, an electronic device, and a computer-readable storage medium for session muting. The method comprises: analyzing a current session to determine a session intention; when the current session is determined to meet a mute trigger condition, determining a mute duration according to the session intention; and performing the session mute operation according to the mute duration. Thus, the mute duration adapts during session interaction, improving the session interaction experience.

Description

Method and device for muting session, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for session muting, an electronic device, and a computer-readable storage medium.
Background
In some conversation scenarios, session interaction with a user is typically conducted through session devices (e.g., robots and smart voice devices).
In the prior art, while interacting with a user, a session device usually performs a mute operation for a fixed mute duration to improve user experience, for example, to avoid interrupting the user before they finish speaking.
However, flexibly setting the session mute duration during a session remains a technical problem to be solved.
Disclosure of Invention
Embodiments of the present application provide a method and an apparatus for session muting, an electronic device, and a computer-readable storage medium, which are used to improve flexibility of session muting duration in a session process.
In one aspect, a method for muting a session is provided, including:
analyzing a current session to determine a session intention;
when the current session is determined to meet a mute trigger condition, determining a mute duration according to the session intention;
and performing a session mute operation according to the mute duration.
In one embodiment, analyzing the current session to determine the session intention includes:
extracting keywords from the current session and determining the session intention matching the keywords;
or determining the session intention by means of regular-expression matching;
or determining the session intention according to the keywords and audio features in the current session.
In one embodiment, the session intent includes any of: event complaints, rejection sessions, information consultation, operational guidance, and marketing.
In one embodiment, determining the mute duration according to the session intention comprises:
if the session intention is determined to be operation guidance, determining a target guidance operation according to the current session, and determining the mute duration according to the operation duration of the target guidance operation;
if the session intention is determined to be a rejection session, acquiring the mute duration corresponding to the session-ending operation;
if the session intention is determined to be an event complaint, determining the user's emotion level according to the keywords and audio features of the current session, and acquiring the mute duration corresponding to that emotion level;
if the session intention is determined to be information consultation, acquiring the mute duration set for consultation dialogs;
and if the session intention is determined to be marketing, acquiring the mute duration set for marketing dialogs.
In one embodiment, determining that the current session meets the mute trigger condition comprises:
acquiring the user's silence duration in real time;
and if the silence duration is determined to reach a time threshold, determining that the mute trigger condition is met.
In one aspect, an apparatus for muting a session is provided, including:
an analysis unit, used for analyzing the current session and determining a session intention;
a determining unit, used for determining the mute duration according to the session intention when the current session meets the mute trigger condition;
and an execution unit, used for performing the session mute operation according to the mute duration.
In one embodiment, the analysis unit is configured to:
extract keywords from the current session and determine the session intention matching the keywords;
or determine the session intention by means of regular-expression matching;
or determine the session intention according to the keywords and audio features in the current session.
In one embodiment, the session intent includes any of: event complaints, rejection sessions, information consultation, operational guidance, and marketing.
In one embodiment, the determining unit is configured to:
if the session intention is determined to be operation guidance, determine a target guidance operation according to the current session and determine the mute duration according to the operation duration of the target guidance operation;
if the session intention is determined to be a rejection session, obtain the mute duration corresponding to the session-ending operation;
if the session intention is determined to be an event complaint, determine the user's emotion level according to the keywords and audio features of the current session and obtain the mute duration corresponding to that emotion level;
if the session intention is determined to be information consultation, obtain the mute duration set for consultation dialogs;
and if the session intention is determined to be marketing, obtain the mute duration set for marketing dialogs.
In one embodiment, the determining unit is configured to:
acquire the user's silence duration in real time;
and if the silence duration is determined to reach a time threshold, determine that the mute trigger condition is met.
In one aspect, an electronic device is provided that includes a processor and a memory having computer-readable instructions stored thereon that, when executed by the processor, perform the steps of the method provided in any of the various alternative implementations of session muting as described above.
In one aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, performs the steps of the method as provided in any of the various alternative implementations of session muting as described above.
In one aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the steps of the method as provided in any of the various alternative implementations of session muting as described above.
In the method, device, electronic equipment, and computer-readable storage medium for session muting, the current session is analyzed to determine a session intention; when the current session is determined to meet the mute trigger condition, the mute duration is determined according to the session intention; and the session mute operation is performed according to the mute duration. Thus, the mute duration adapts during session interaction, improving the session interaction experience.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a flowchart of a method for muting a session according to an embodiment of the present application;
fig. 2 is a block diagram illustrating a structure of a device for muting a session according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
First, some terms referred to in the embodiments of the present application will be described to facilitate understanding by those skilled in the art.
A terminal device: may be a mobile terminal, a fixed terminal, or a portable terminal such as a mobile handset, station, unit, device, multimedia computer, multimedia tablet, internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system device, personal navigation device, personal digital assistant, audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the terminal device can support any type of interface to the user (e.g., wearable device), and the like.
A server: may be an independent physical server, a server cluster or distributed system formed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data, and artificial intelligence platforms.
In order to improve flexibility of session muting duration in a session process, embodiments of the present application provide a method, an apparatus, an electronic device, and a computer-readable storage medium for session muting.
The embodiments of the application are applied to a session device, which may be a server or a terminal device. As one example, the session device is a robot that conducts sessions with a user.
Referring to fig. 1, a flowchart of a method for muting a session according to an embodiment of the present application is shown, where the method is implemented as follows:
step 100: analyzing the current conversation to determine the intention of the conversation.
The session intention may include, but is not limited to, any of the following:
event complaints, rejection sessions, information consultation, operational guidance, and marketing.
In one embodiment, the following steps may be adopted when performing step 100:
mode 1: and extracting keywords in the current conversation, and determining a conversation intention matched with the keywords.
In one embodiment, the association relationship between the keywords and the conversation intention is established in advance, and the conversation intention corresponding to the keywords in the current conversation is determined according to the association relationship.
Wherein the current session contains at least one keyword. If there are a plurality of keywords and a plurality of conversation intentions corresponding to the keywords, the probabilities of the conversation intentions may be determined, and the conversation intention corresponding to the highest probability among the probabilities may be determined as the conversation intention. Further, if the probabilities of the session intentions are the same, the priorities of the session intentions may also be obtained, and the session intention may be determined according to the session intention corresponding to the highest priority among the priorities.
Mode 2: and determining the conversation intention by adopting a regular expression matching mode.
In one embodiment, a regular expression may be configured in advance, and a session intention of a current session may be determined according to the regular expression.
Mode 3: and determining the conversation intention according to the keywords and the audio features in the current conversation.
In one embodiment, the emotion type of a user is determined according to the audio characteristics of the current conversation, and a conversation intention set corresponding to a keyword and the emotion type is obtained.
For example, if the keyword is a defect of a good and the emotion type is anger, it is determined that the conversation is intended as a product complaint.
Mode 4: determining a conversation scene of the current conversation, and determining a conversation intention according to the conversation scene and the keywords and/or the emotion types.
The session scenario may include, but is not limited to, any of the following scenarios:
a complaint scenario, a consultation scenario, an operation guidance scenario, and a marketing scenario.
In this way, the user's session intention can be determined.
Step 101: when the current session is determined to meet the mute trigger condition, determine the mute duration according to the session intention.
In one embodiment, the implementation process of determining that the mute trigger condition is met comprises:
acquiring the user's silence duration in real time, and if the silence duration is determined to reach the time threshold, determining that the mute trigger condition is met.
In one embodiment, the timing is started when the voice of the user stops, and the real-time silent duration of the user is obtained.
In practical applications, both the time threshold (e.g., 5 s) and the mute trigger condition may be set according to practical application scenarios, which is not limited herein.
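The trigger check described above — start timing when the user's voice stops, fire once silence reaches the time threshold — might look like the sketch below. The 5 s threshold follows the example in the text; the class interface is hypothetical, and time is injected as a parameter to keep the sketch testable.

```python
# Sketch of the mute trigger condition. The 5 s threshold is the example value
# from the text and would be configurable per application scenario.
SILENCE_THRESHOLD_S = 5.0

class SilenceMonitor:
    def __init__(self) -> None:
        self._silence_started: float | None = None

    def on_user_speech(self) -> None:
        # Any user speech resets the silence timer.
        self._silence_started = None

    def on_user_silence(self, now: float) -> None:
        # Start timing the moment the user's voice stops.
        if self._silence_started is None:
            self._silence_started = now

    def trigger_met(self, now: float) -> bool:
        return (self._silence_started is not None
                and now - self._silence_started >= SILENCE_THRESHOLD_S)
```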
In one embodiment, the implementation process of determining the mute duration according to the session intention may adopt any one of the following manners:
mode 1: and if the conversation intention is determined to be operation guidance, determining target guidance operation according to the current conversation, and determining mute time according to the operation time of the target guidance operation.
In one embodiment, the operation time periods of the different operations are set in advance. Since the user generally performs an operation slowly when performing the operation according to the guidance, the operation time period may also be enlarged or extended.
As an example, the ratio of the operation duration to the proportional parameter is determined as the mute duration, or the sum of the operation duration and the extension duration is determined as the mute duration.
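Mode 1's duration calculation can be sketched as follows. The operation table, scaling factor, and extension value are assumptions; the 5 s and 10 s entries mirror the enterprise WeChat example given later in the description.

```python
# Sketch of Mode 1's duration rule: the nominal operation duration is scaled
# or extended because users operate slowly while following guidance.
OPERATION_DURATION_S = {
    "open_enterprise_wechat": 5.0,
    "find_add_friend_button": 10.0,
}

def mute_for_guidance(operation: str, scale: float = 1.5,
                      extension: float = 0.0) -> float:
    base = OPERATION_DURATION_S[operation]
    return base * scale + extension
```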
Mode 2: and if the conversation intention is determined to be the rejection conversation, acquiring the mute duration corresponding to the conversation ending operation.
In one embodiment, different lengths of silence are set in advance for different dialogs. If it is determined that the user does not want to continue the session, the mute duration corresponding to the session ending session may be obtained.
Mode 3: and if the conversation intention is determined to be the event complaint, determining the emotion level of the user according to the keywords and the audio characteristics of the current conversation, and acquiring the mute duration corresponding to the emotion level.
For example, the higher the intonation of the user, the higher the emotion level, the longer the mute time period, whereas the lower the emotion level, the shorter the mute time period.
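A hypothetical mapping from emotion level to mute duration for the event-complaint case; the level buckets and durations below are illustrative assumptions only.

```python
# Hypothetical mapping for Mode 3: the higher the estimated emotion level
# (e.g. higher intonation), the longer the mute duration.
EMOTION_MUTE_S = {0: 2.0, 1: 4.0, 2: 6.0, 3: 8.0}

def mute_for_complaint(emotion_level: int) -> float:
    # Unknown levels fall back to the longest (most cautious) duration.
    return EMOTION_MUTE_S.get(emotion_level, max(EMOTION_MUTE_S.values()))
```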
Mode 3: and if the conversation intention is determined to be information consultation, acquiring the mute time set for the consultation conversation.
Mode 4: and if the conversation intention is determined to be marketing, acquiring the mute duration set for the marketing conversation.
Therefore, the mute duration can be flexibly adjusted according to different conversation intentions.
Step 102: perform the session mute operation according to the mute duration.
In one embodiment, a session mute operation is performed. If a user utterance is received within the mute duration, step 100 is performed again; otherwise, when the mute duration is reached, a reply dialog is sent to the user terminal and step 100 is performed.
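Step 102's control flow — mute, resume analysis if the user speaks, otherwise send a reply dialog and resume — can be sketched as below. `wait_for_user` and `send_reply` are assumed callbacks, not interfaces from the application: `wait_for_user` returns the user's utterance, or `None` if the mute duration elapses.

```python
# Sketch of step 102: mute for mute_s seconds, then return to step 100
# ("analyze") either way, sending a reply dialog first if the user was silent.
def run_mute_step(wait_for_user, mute_s: float, send_reply) -> str:
    utterance = wait_for_user(timeout=mute_s)
    if utterance is not None:
        return "analyze"  # user spoke within the mute duration: back to step 100
    send_reply("Shall we continue?")  # mute duration reached: prompt the user
    return "analyze"  # then return to step 100 as well
```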
In a complaint scenario, while the robot converses with a user, the user may complain about a product quality problem, ask how the robot obtained their number, or question the authenticity of a promoted product; the session intention is then determined to be an event complaint. A longer mute duration needs to be set so that the user can express their complete, real intention without being talked over, which could otherwise provoke a further tendency to complain; the robot can then give an appropriate answer and soothe the user's emotions. Therefore, a longer mute duration can be set in advance for the event-complaint intention, and the mute operation performed according to that duration.
In another complaint scenario, when the robot converses with the user and the user strongly indicates that no calls should be made to them, the session intention is determined to be a rejection session. A short mute duration can be set in advance for this intention, shortening the mute wait so that the robot can hang up politely and promptly and avoid complaints.
In an operation guidance scenario, the robot guides the user to add an enterprise WeChat friend. During the session, once it is determined that the user is adding the friend, the user is told each step (i.e., each target guidance operation) in turn, such as opening enterprise WeChat and finding the add-friend button. Different mute durations are set in advance for different target guidance operations, e.g., 5 s for opening enterprise WeChat and 10 s for finding the add-friend button. This gives the user enough operation time during the session, and a reply dialog can be sent once the mute duration is reached; for example, if the user has not spoken after the 10 s mute duration elapses, the robot asks whether the operation succeeded. This achieves the communication purpose and provides a better interactive experience.
In a marketing scenario, the robot promotes a product while conversing with the user. A mute duration, e.g., 5 s, is set in advance for the marketing dialog. If the user does not speak within 5 s, the mute operation is triggered, i.e., muting for 5 s; if the user still has not spoken when the 5 s mute ends, the marketing dialog is sent to the user.
Further, if the user still does not answer after three consecutive mute operations, a session exception is determined, e.g., the signal may be bad or the user may not be at the phone at the moment; the robot can then hang up politely, avoiding invalid interaction.
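The consecutive-silence safeguard can be sketched as follows. The limit of three mute operations comes from the text, while `wait_for_user` is an assumed callback that returns `None` when the mute duration elapses without user speech.

```python
# Sketch of the session-exception safeguard: after max_mutes consecutive mute
# operations with no answer, treat the session as abnormal and hang up politely.
def detect_session_exception(wait_for_user, mute_s: float,
                             max_mutes: int = 3) -> bool:
    for _ in range(max_mutes):
        if wait_for_user(timeout=mute_s) is not None:
            return False  # user answered; continue the session normally
    return True  # likely bad signal or user away from the phone
```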
In the embodiments of the application, the mute duration can be adaptively adjusted according to different application scenarios and session intentions, improving the session interaction experience.
Based on the same inventive concept, an embodiment of the present application further provides a device for session muting. Since the principle by which the device solves the problem is similar to that of the session muting method, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
As shown in fig. 2, a schematic structural diagram of a session muting device provided in an embodiment of the present application includes:
an analyzing unit 201, configured to analyze a current session and determine a session intention;
a determining unit 202, configured to determine a mute duration according to a session intention when it is determined that the current session meets a mute trigger condition;
an executing unit 203, configured to execute a session muting operation according to the muting duration.
In one embodiment, the analysis unit 201 is configured to:
extract keywords from the current session and determine the session intention matching the keywords;
or determine the session intention by means of regular-expression matching;
or determine the session intention according to the keywords and audio features in the current session.
In one embodiment, the session intent includes any of: event complaints, rejection sessions, information consultation, operational guidance, and marketing.
In one embodiment, the determining unit 202 is configured to:
if the session intention is determined to be operation guidance, determine a target guidance operation according to the current session and determine the mute duration according to the operation duration of the target guidance operation;
if the session intention is determined to be a rejection session, obtain the mute duration corresponding to the session-ending operation;
if the session intention is determined to be an event complaint, determine the user's emotion level according to the keywords and audio features of the current session and obtain the mute duration corresponding to that emotion level;
if the session intention is determined to be information consultation, obtain the mute duration set for consultation dialogs;
and if the session intention is determined to be marketing, obtain the mute duration set for marketing dialogs.
In one embodiment, the determining unit 202 is configured to:
acquire the user's silence duration in real time;
and if the silence duration is determined to reach a time threshold, determine that the mute trigger condition is met.
In the method, device, electronic equipment, and computer-readable storage medium for session muting, the current session is analyzed to determine a session intention; when the current session is determined to meet the mute trigger condition, the mute duration is determined according to the session intention; and the session mute operation is performed according to the mute duration. Thus, the mute duration adapts during session interaction, improving the session interaction experience.
Fig. 3 shows a schematic structural diagram of an electronic device 3000. Referring to fig. 3, the electronic device 3000 includes a processor 3010 and a memory 3020, and may further include a power supply 3030, a display unit 3040, and an input unit 3050.
The processor 3010 is a control center of the electronic apparatus 3000, connects various components using various interfaces and lines, and executes various functions of the electronic apparatus 3000 by running or executing software programs and/or data stored in the memory 3020, thereby monitoring the electronic apparatus 3000 as a whole.
In the embodiment of the present application, when the processor 3010 calls the computer program stored in the memory 3020, the steps in the above embodiments are performed.
Alternatively, the processor 3010 may include one or more processing units; preferably, the processor 3010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and so on, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 3010. In some embodiments, the processor and memory may be implemented on a single chip, or they may be implemented on separate chips.
The memory 3020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, various applications, and the like, and the data storage area may store data created according to the use of the electronic device 3000. In addition, the memory 3020 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
Electronic device 3000 also includes a power supply 3030 (e.g., a battery) that provides power to various components, which may be logically coupled to processor 3010 via a power management system, thereby providing management of charging, discharging, and power consumption via the power management system.
The display unit 3040 may be used to display information input by the user or provided to the user, as well as the various menus of the electronic device 3000; in the embodiments of the present application, it is mainly used to display the display interface of each application in the electronic device 3000 and objects such as text and pictures shown in those display interfaces. The display unit 3040 may include a display panel 3041. The display panel 3041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The input unit 3050 may be used to receive information such as a number or a character input by a user. The input unit 3050 may include a touch panel 3051 and other input devices 3052. Among other things, the touch panel 3051, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of a user on or near the touch panel 3051 using any suitable object or accessory such as a finger, a stylus, etc.).
Specifically, the touch panel 3051 may detect a touch operation of a user, detect signals generated by the touch operation, convert the signals into touch point coordinates, send the touch point coordinates to the processor 3010, receive a command sent by the processor 3010, and execute the command. In addition, the touch panel 3051 can be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. Other input devices 3052 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, power on/off keys, etc.), a trackball, a mouse, a joystick, and the like.
Of course, the touch panel 3051 may cover the display panel 3041. When the touch panel 3051 detects a touch operation on or near it, the touch operation is transmitted to the processor 3010 to determine the type of the touch event, and the processor 3010 then provides a corresponding visual output on the display panel 3041 according to the type of the touch event. Although in fig. 3 the touch panel 3051 and the display panel 3041 are shown as two separate components implementing the input and output functions of the electronic device 3000, in some embodiments they may be integrated to implement both functions.
The electronic device 3000 may also include one or more sensors, such as a pressure sensor, a gravitational acceleration sensor, or a proximity light sensor. The electronic device 3000 may of course further include other components such as a camera; since these are not the components emphasized in the embodiments of the present application, they are not shown in fig. 3 and are not described in detail.
Those skilled in the art will appreciate that fig. 3 is merely an example of an electronic device and does not constitute a limitation; the device may include more or fewer components than shown, certain components may be combined, or a different arrangement of components may be used.
In an embodiment of the present application, a computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the steps in the above embodiments are performed.
For convenience of description, the parts above are described separately as modules (or units) divided by function. Of course, when the application is put into practice, the functions of the various modules (or units) may be implemented in one or more pieces of software or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. A method of session muting, comprising:
analyzing the current conversation to determine a conversation intention;
when it is determined that the current conversation meets a mute trigger condition, determining a mute duration according to the conversation intention;
and executing a session mute operation according to the mute duration.
2. The method of claim 1, wherein said analyzing the current session to determine a session intent comprises:
extracting keywords in the current conversation, and determining a conversation intention matched with the keywords;
or, determining the conversation intention by adopting a regular expression matching mode;
or determining the conversation intention according to the keywords and the audio features in the current conversation.
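For illustration only (this sketch is not part of the claimed subject matter, and the keyword and regular-expression tables are invented placeholders, not values from the patent), the keyword- and regex-based alternatives of claim 2 might be implemented along these lines:

```python
import re

# Hypothetical keyword and regex tables -- the patent does not specify concrete values.
KEYWORD_INTENTS = {
    "complaint": "event_complaint",
    "not interested": "rejection_conversation",
    "how do i": "operation_guidance",
}
REGEX_INTENTS = [
    (re.compile(r"\b(price|cost|fee)\b"), "information_consultation"),
    (re.compile(r"\b(buy|order|discount)\b"), "marketing"),
]

def determine_intent(utterance: str):
    """Match keywords first, then fall back to regular-expression matching (claim 2)."""
    text = utterance.lower()
    for keyword, intent in KEYWORD_INTENTS.items():
        if keyword in text:
            return intent
    for pattern, intent in REGEX_INTENTS:
        if pattern.search(text):
            return intent
    return None  # no intention recognized from text alone
```

The third alternative of claim 2 (combining keywords with audio features) would extend this by adding acoustic inputs; the patent leaves the feature set and combination rule unspecified.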
3. The method of claim 1, wherein the session intent comprises any one of: an event complaint, a rejection conversation, information consultation, operation guidance, and marketing.
4. The method of any of claims 1-3, wherein determining a mute duration according to the session intent comprises:
if the conversation intention is determined to be operation guidance, determining a target guidance operation according to the current conversation, and determining the mute duration according to the operation duration of the target guidance operation;
if the conversation intention is determined to be the rejection conversation, acquiring the mute duration corresponding to the conversation ending operation;
if the conversation intention is determined to be the event complaint, determining the emotion level of the user according to the keywords and the audio characteristics of the current conversation, and acquiring the mute duration corresponding to the emotion level;
if the conversation intention is determined to be information consultation, acquiring a mute duration set for consultation conversations;
and if the conversation intention is determined to be marketing, acquiring the mute duration set for the marketing conversation.
5. The method of any one of claims 1-3, wherein the determining that the current session meets a mute trigger condition comprises:
acquiring the silence duration of the user in real time;
and if it is determined that the silence duration reaches a time threshold, determining that the mute trigger condition is met.
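Claims 4 and 5 together describe the trigger and duration logic. A minimal sketch, again for illustration only (the threshold and duration values are assumed placeholders, not disclosed by the patent):

```python
SILENCE_THRESHOLD = 1.5  # assumed: seconds of user silence that fires the mute trigger (claim 5)

# Hypothetical duration tables (seconds); the patent leaves concrete values open.
PRESET_MUTE_DURATION = {
    "rejection_conversation": 0.5,     # brief mute tied to ending the session
    "information_consultation": 3.0,   # preset for consultation conversations
    "marketing": 2.0,                  # preset for marketing conversations
}
EMOTION_MUTE_DURATION = {1: 2.0, 2: 4.0, 3: 6.0}  # emotion level -> mute seconds

def silence_trigger_met(silence_seconds: float) -> bool:
    """Claim 5: the mute trigger condition is met once user silence reaches the threshold."""
    return silence_seconds >= SILENCE_THRESHOLD

def mute_duration(intent: str, *, operation_seconds: float = 0.0,
                  emotion_level: int = 1) -> float:
    """Claim 4: map the conversation intention to a mute duration."""
    if intent == "operation_guidance":
        # Derived from the run time of the target guided operation.
        return operation_seconds
    if intent == "event_complaint":
        # Derived from the user's emotion level.
        return EMOTION_MUTE_DURATION[emotion_level]
    return PRESET_MUTE_DURATION[intent]
```

Once the trigger fires, the session mute operation of claim 1 would simply hold the line for the computed duration.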
6. An apparatus for muting a session, comprising:
the analysis unit is used for analyzing the current conversation and determining a conversation intention;
the determining unit is used for determining the mute duration according to the conversation intention when it is determined that the current conversation meets a mute trigger condition;
and the execution unit is used for executing the session mute operation according to the mute duration.
7. The apparatus of claim 6, wherein the analysis unit is to:
extracting keywords in the current conversation, and determining a conversation intention matched with the keywords;
or, determining the conversation intention by adopting a regular expression matching mode;
or determining the conversation intention according to the keywords and the audio features in the current conversation.
8. The apparatus of claim 6, wherein the session intent comprises any one of: an event complaint, a rejection conversation, information consultation, operation guidance, and marketing.
9. The apparatus according to any of claims 6-8, wherein the determining unit is configured to:
if the conversation intention is determined to be operation guidance, determining a target guidance operation according to the current conversation, and determining the mute duration according to the operation duration of the target guidance operation;
if the conversation intention is determined to be the rejection conversation, acquiring the mute duration corresponding to the conversation ending operation;
if the conversation intention is determined to be the event complaint, determining the emotion level of the user according to the keywords and the audio features of the current conversation, and acquiring the mute duration corresponding to the emotion level;
if the conversation intention is determined to be information consultation, acquiring a mute duration set for consultation conversations;
and if the conversation intention is determined to be marketing, acquiring the mute duration set for the marketing conversation.
10. The apparatus according to any of claims 6-8, wherein the determining unit is configured to:
acquiring the silence duration of the user in real time;
and if it is determined that the silence duration reaches a time threshold, determining that the mute trigger condition is met.
11. An electronic device comprising a processor and a memory, the memory storing computer readable instructions that, when executed by the processor, perform the method of any of claims 1-5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202211513066.1A 2022-11-30 2022-11-30 Method and device for muting session, electronic equipment and computer-readable storage medium Pending CN115620720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211513066.1A CN115620720A (en) 2022-11-30 2022-11-30 Method and device for muting session, electronic equipment and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN115620720A true CN115620720A (en) 2023-01-17

Family

ID=84880760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211513066.1A Pending CN115620720A (en) 2022-11-30 2022-11-30 Method and device for muting session, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115620720A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150223110A1 (en) * 2014-02-05 2015-08-06 Qualcomm Incorporated Robust voice-activated floor control
CN111105782A (en) * 2019-11-27 2020-05-05 深圳追一科技有限公司 Session interaction processing method and device, computer equipment and storage medium
CN112053687A (en) * 2020-07-31 2020-12-08 出门问问信息科技有限公司 Voice processing method and device, computer readable storage medium and equipment
CN112527994A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Emotion analysis method, emotion analysis device, emotion analysis equipment and readable storage medium
CN114155839A (en) * 2021-12-15 2022-03-08 科大讯飞股份有限公司 Voice endpoint detection method, device, equipment and storage medium
CN114363467A (en) * 2021-12-22 2022-04-15 天翼电子商务有限公司 Method for dynamically correcting silent time in robot outbound call
CN114708856A (en) * 2022-05-07 2022-07-05 科大讯飞股份有限公司 Voice processing method and related equipment thereof
CN114756668A (en) * 2022-04-24 2022-07-15 平安普惠企业管理有限公司 Dialog interaction method and device based on artificial intelligence, computer equipment and medium


Similar Documents

Publication Publication Date Title
CN107209781B (en) Contextual search using natural language
CN111049996B (en) Multi-scene voice recognition method and device and intelligent customer service system applying same
US9276802B2 (en) Systems and methods for sharing information between virtual agents
CN108491147A (en) A kind of man-machine interaction method and mobile terminal based on virtual portrait
US20120108221A1 (en) Augmenting communication sessions with applications
US20140164532A1 (en) Systems and methods for virtual agent participation in multiparty conversation
CN108694947B (en) Voice control method, device, storage medium and electronic equipment
US11600266B2 (en) Network-based learning models for natural language processing
CN110196833A (en) Searching method, device, terminal and the storage medium of application program
CN108809894B (en) Method and terminal for processing network telephone
CN108711428B (en) Instruction execution method and device, storage medium and electronic equipment
WO2020051881A1 (en) Information prompt method and related product
CN115620720A (en) Method and device for muting session, electronic equipment and computer-readable storage medium
CN113138702B (en) Information processing method, device, electronic equipment and storage medium
CN108874975A (en) Search for content recommendation method, device, terminal device and storage medium
CN112911074B (en) Voice communication processing method, device, equipment and machine-readable medium
CN113413590A (en) Information verification method and device, computer equipment and storage medium
CN113709506A (en) Multimedia playing method, device, medium and program product based on cloud mobile phone
US20210191951A1 (en) Acquiring entity webpage link based on topological relationship graph
CN112597022A (en) Remote diagnosis method, device, storage medium and electronic equipment
CN111277708A (en) Information processing method and electronic equipment
CN115757746A (en) Session interruption method and device, electronic equipment and computer-readable storage medium
CN113660375B (en) Call method and device and electronic equipment
WO2018170992A1 (en) Method and device for controlling conversation
CN113946674A (en) Method and device for realizing real-time conversation during man-machine conversation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230117