CN112770005A - Intelligent response method, mobile terminal and storage medium

Info

Publication number: CN112770005A
Application number: CN202011445925.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: message, responded, voice, preset, urgency
Original language: Chinese (zh)
Inventor: 周凯
Applicant: Shenzhen Microphone Holdings Co Ltd
Current assignees: Shenzhen Microphone Holdings Co Ltd; Shenzhen Transsion Holdings Co Ltd
Classification (Landscapes): Telephone Function (AREA)

Abstract

The method comprises: receiving a message to be responded to; determining the urgency of the message; and, when the urgency of the message meets a preset response condition, responding to the message according to a preset response rule. In this way, the mobile terminal can respond to a received message even when the user is not operating it, avoiding the poor experience caused to both the user and the message sender when the user misses an important message.

Description

Intelligent response method, mobile terminal and storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an intelligent response method, a mobile terminal, and a storage medium.
Background
When a user is in a meeting, exercising, or engaged in other activities, it is inconvenient to check messages on the mobile terminal, and the user may miss telephone calls, short messages, social application messages, and the like. If a missed message is important, this causes trouble for the user and degrades the user experience; at the same time, the message sender also has a poor experience because the user does not respond to the message in time.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an intelligent response method, a mobile terminal, and a storage medium, which can solve the technical problem of poor user experience caused by the user being unable to respond in time to messages received by the mobile terminal.
In order to solve the above technical problem, the present application provides an intelligent response method, including:
s11, receiving a message to be responded;
s12, determining the urgency of the message to be responded;
and S13, if the urgency of the message to be responded meets the preset response condition, responding the message to be responded according to the preset response rule.
Optionally, the message to be responded comprises at least one of a voice incoming call, a character message and a voice message.
Optionally, if the message to be responded received in step S11 is a voice incoming call, step S12 further includes:
intercepting a voice incoming call, wherein optionally, the voice incoming call comprises an incoming call and a voice call;
sending a voice message acquisition instruction and/or a character message acquisition instruction to a sender of a message to be responded;
and acquiring the voice message and/or the character message fed back by the sender of the message to be responded, and taking the voice message and/or the character message as the message to be responded.
Optionally, step S12 includes at least one of:
if the message to be responded is a voice message, converting the voice message into a character message, and obtaining the urgency of the message to be responded based on the urgency of the voice message and/or the urgency of the character message;
and if the message to be responded is the character message, taking the urgency degree of the character message as the urgency degree of the message to be responded.
Optionally, step S13 includes:
judging whether the urgency degree of the message to be responded exceeds a preset threshold value, and if so, starting an emergency prompt for the message to be responded; and/or,
if not, replying the message to be responded according to preset response content, wherein optionally the preset response content comprises character response content and/or voice response content.
Optionally, step S12 is preceded by:
and determining that the message to be responded is not responded within a preset time length, and/or determining that a current time point has a preset plan.
Optionally, the preset plan includes at least one of:
schedule plan, meeting plan in mailbox, and preset event added before step S11.
The application also provides an intelligent response method, which comprises the following steps:
s21, determining preset response content to the preset event;
s22, receiving a message to be responded;
s23, determining that the message to be responded corresponds to a preset event;
and S24, responding the message to be responded according to the preset response content of the preset event and the preset response rule.
Optionally, the message to be responded comprises at least one of a voice incoming call, a character message and a voice message.
Optionally, if the message to be responded in step S22 is a voice incoming call, step S23 further includes:
intercepting a voice incoming call, wherein optionally, the voice incoming call comprises an incoming call and a voice call;
sending a voice message acquisition instruction and/or a character message acquisition instruction to a sender of a message to be responded;
and acquiring the voice message and/or the character message fed back by the sender of the message to be responded, and taking the voice message and/or the character message as the message to be responded.
Optionally, step S23 includes:
and extracting key information from the message to be responded, matching the key information with the key information of the preset event, and determining that the message to be responded corresponds to the preset event if the matching is successful.
Optionally, step S24 includes:
determining the urgency of a message to be responded;
and if the urgency degree of the message to be responded meets the preset response condition, sending the preset response content to a sender of the message to be responded.
Optionally, the determining the urgency level of the message to be responded includes at least one of:
if the message to be responded is a voice message, converting the voice message into a character message, and obtaining the urgency of the message to be responded based on the urgency of the voice message and/or the urgency of the character message;
and if the message to be responded is the character message, taking the urgency degree of the character message as the urgency degree of the message to be responded.
Optionally, if the urgency of the message to be responded meets the preset response condition, sending the preset response content to the sender of the message to be responded includes:
judging whether the urgency degree of the message to be responded exceeds a preset threshold value, and if so, starting an emergency prompt for the message to be responded; and/or,
if not, replying the message to be responded according to preset response content, wherein optionally the preset response content comprises character response content and/or voice response content.
Optionally, step S21 includes:
receiving input characters, and using the input characters as preset response content;
and/or recording voice messages, and using the recorded voice messages as preset response contents.
Optionally, step S23 is preceded by:
and determining that the message to be responded is not responded within a preset time length.
Optionally, the preset plan includes at least one of: schedule plan, meeting plan in mailbox.
The present application further provides a mobile terminal, including a memory and a processor, where an intelligent response program is stored on the memory; when the intelligent response program is executed by the processor, the steps of the above method are implemented.
The present application also provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the intelligent response method.
As described above, with the intelligent response method provided by the present application, when the mobile terminal receives a message to be responded to, it can determine the urgency of the message, and if the urgency is determined to satisfy the preset response condition, the message can be responded to according to the preset response rule. In this way, the mobile terminal can respond to a received message even when the user is not operating it, which brings a better experience to the user.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a first intelligent response method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an interactive interface for receiving a message to be responded to according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a second intelligent response method according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a third intelligent response method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a fourth intelligent response method according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Optionally, identically named components, features, and elements in different embodiments of the present application may have different meanings, as determined by their interpretation in a given embodiment or by their further context within that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "while" or "in response to a determination," depending on the context. Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, and C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C." Likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C." An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)," depending on the context.
It should be noted that step numbers such as S11 and S12 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S12 first and then S11 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module," "component," or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module," "component," and "unit" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the mobile terminal described in the present application may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example, and it will be understood by those skilled in the art that, except for elements particularly used for mobile purposes, the configuration according to the embodiment of the present application can also be applied to a fixed-type terminal.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Alternatively, the radio frequency unit 101 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted entirely as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. Alternatively, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Optionally, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present disclosure, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and stores subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
In some implementations, after the mobile terminal receives a message, the user may be occupied and unable to reply to it in time. In that case, the intelligent response method provided in the embodiment of the present application may be applied to the mobile terminal so that the terminal responds to the received message that the user has not responded to. Referring to fig. 3, the method includes steps S11 to S13, where:
and S11, receiving the message to be responded.
The message to be responded to is a message received by the user's mobile terminal that requires a response from the user, and may include a voice incoming call, a character message, and/or a voice message; referring to fig. 4, the message to be responded to in fig. 4 is a character message. A voice incoming call includes an incoming call on the mobile terminal and a voice call of an application (e.g., a social application) on the mobile terminal; a character message may be a short message on the mobile terminal or character information in an application (e.g., a social application) on the mobile terminal, and includes letters, numbers, operation symbols, punctuation marks, other symbols, and some functional symbols; a voice message may be a piece of audio received by the mobile terminal, which may be a voice message left for the user or a voice message of an application (e.g., a social application) on the mobile terminal.
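For illustration only, the following is a minimal Python sketch of how the three kinds of messages to be responded to might be represented; the class and field names are hypothetical assumptions and are not part of the application.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MessageType(Enum):
    VOICE_CALL = auto()         # incoming call or in-app voice call
    CHARACTER_MESSAGE = auto()  # short message or in-app text message
    VOICE_MESSAGE = auto()      # a piece of audio (e.g. a left voice message)


@dataclass
class PendingMessage:
    """A message received by the mobile terminal that still awaits a response."""
    msg_type: MessageType
    sender: str
    text: str = ""        # filled in for character messages
    audio_path: str = ""  # filled in for voice messages / recorded calls
```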
And S12, determining the urgency of the message to be responded.
It can be understood that the messages to be responded to received by the mobile terminal differ in urgency: some are urgent and require the user to respond immediately, such as a voice call from a food-delivery courier, while others are not urgent, such as a short message notifying the user to pick up a package. In this embodiment of the application, after the mobile terminal receives the message to be responded to in step S11, the urgency of the message may be determined, and the message is then responded to based on its urgency. The way the urgency is determined differs for different kinds of messages to be responded to, as introduced below:
for character messages, the urgency of the messages can be determined based on text emotion analysis (SentimentAnalysis), which refers to a process of analyzing, processing and extracting subjective text with emotional colors by using natural language processing and text mining technologies. The basic process of text emotion analysis comprises the processes of crawling from an original text, text preprocessing, corpus and emotion word bank construction, emotion analysis results and the like.
For the voice message, the urgency level of the voice message may be the urgency level obtained by the urgency level of the character message and the urgency level of the voice emotion according to a preset operation method, and in some examples, the urgency level may be the sum of the urgency levels of the character message and the voice emotion. For character messages, the urgency degree of the character messages can be determined according to text emotion analysis, for voice emotion, the urgency degree can be determined through voice emotion recognition, and the voice emotion recognition can extract emotion information contained in voice and recognize the category of the emotion information. There are two main approaches to emotion description at present. The first is to divide basic emotions widely used in human daily life into anger, joy, excitement, sadness, aversion and the like based on discrete emotion division; the other is based on continuous dimension emotion classification, and different emotions are mainly distinguished through different valence degrees and activation degrees.
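Building on the data-model sketch above, the following sketch illustrates one possible form of the "preset operation" for the urgency calculation. The helper names, the keyword list, and the scoring heuristics are assumptions for illustration only; a real system would plug in a text sentiment-analysis model and a speech-emotion-recognition model here.

```python
def speech_to_text(audio_path: str) -> str:
    """Hypothetical ASR helper that converts a voice message into text."""
    return ""  # placeholder; a real system would call a speech recognizer


def text_urgency_score(text: str) -> float:
    """Toy stand-in for text sentiment analysis (P2)."""
    urgent_words = ("urgent", "immediately", "hospital", "emergency", "asap")
    return float(sum(word in text.lower() for word in urgent_words))


def speech_emotion_urgency_score(audio_path: str) -> float:
    """Toy stand-in for speech emotion recognition (P1), whether based on
    discrete emotion classes or on continuous valence/arousal dimensions."""
    return 0.0  # placeholder score


def urgency_of_message(msg: PendingMessage) -> float:
    """Urgency of the message to be responded to: P = P2 for text, P = P1 + P2 for voice."""
    if msg.msg_type is MessageType.CHARACTER_MESSAGE:
        return text_urgency_score(msg.text)
    text = speech_to_text(msg.audio_path)
    return speech_emotion_urgency_score(msg.audio_path) + text_urgency_score(text)
```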
And S13, if the urgency of the message to be responded meets the preset response condition, responding the message to be responded according to the preset response rule.
The messages to be responded to received by the mobile terminal have different degrees of urgency. The terminal can judge whether the urgency meets the preset response condition and, if so, respond to the message. The preset response condition may be that the urgency of the message exceeds a preset threshold, in which case the corresponding preset response rule may be that the mobile terminal issues an emergency prompt for the message; in some examples this may be emitting an emergency prompt tone, turning on a breathing light (steady or flashing), vibrating, and/or turning on a flashlight (steady and/or flashing). The preset response condition may also be that the urgency of the message is below the preset threshold, in which case the corresponding preset response rule may be to reply to the message according to preset response content on the mobile terminal; in some examples the preset response content may be character response content or voice response content, and it may be user-defined or factory-set on the mobile terminal.
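A minimal sketch of the preset response rule described above follows; the threshold value, the prompt and reply helpers, and the default preset content are hypothetical placeholders, not the application's actual implementation.

```python
URGENCY_THRESHOLD = 5.0  # hypothetical preset threshold


def start_emergency_prompt(msg: PendingMessage) -> None:
    # Placeholder for the emergency prompt: tone, breathing light, vibration, flash.
    print(f"EMERGENCY PROMPT for message from {msg.sender}")


def send_reply(sender: str, content: str) -> None:
    # Placeholder for sending a character or voice reply back to the sender.
    print(f"Auto-reply to {sender}: {content}")


def respond(msg: PendingMessage, preset_content: str = "I am busy, will reply later.") -> None:
    """If urgency exceeds the preset threshold, prompt the user urgently;
    otherwise reply automatically with the preset response content."""
    if urgency_of_message(msg) > URGENCY_THRESHOLD:
        start_emergency_prompt(msg)
    else:
        send_reply(msg.sender, preset_content)
```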
With the intelligent response method provided by the embodiment of the present application, when the mobile terminal receives a message to be responded to, it can determine the urgency of the message, and if the urgency is judged to meet the preset response condition, the message can be responded to according to the preset response rule. In this way, the mobile terminal can respond to a received message even when the user is not operating it, avoiding the poor experience caused to both the user and the message sender when the user misses an important message.
Other embodiments of the smart response method provided by the present application will be further described based on the above description of the smart response method.
An embodiment of the present application further provides an intelligent response method. Referring to fig. 5, the intelligent response method includes steps S101 to S109, where:
s101, receiving a message to be responded.
The user's mobile terminal may receive a message to be responded to sent by a sender, which the user is expected to respond to. In some examples, the message to be responded to includes a voice incoming call, a character message, and/or a voice message. Optionally, a voice incoming call includes an incoming call on the mobile terminal and a voice call of an application (e.g., a social application) on the mobile terminal; a character message may be a short message on the mobile terminal or character information in an application (e.g., a social application) on the mobile terminal, and includes letters, numbers, operation symbols, punctuation marks, other symbols, and some functional symbols; a voice message may be a piece of audio received by the mobile terminal, which may be a voice message left for the user or a voice message of an application (e.g., a social application) on the mobile terminal. The embodiment of the present application continues to describe the provided intelligent response method by taking a voice incoming call as an example.
S102, determining that the message to be responded is not responded within a preset time length.
The mobile terminal may start timing after receiving the message to be responded to, and if the message is not handled by the user within the preset time length, the next step S103 may be executed.
In some other examples, there may be preset plans on the mobile terminal, including a schedule the user has added to the calendar, a meeting plan notified in a mailbox email, and preset events added by the user before step S101. During these preset plans the user will not check the mobile terminal for various reasons, and the mobile terminal may respond to the message to be responded to according to the intelligent response method provided by the embodiment of the present application. A preset event can be set on the mobile terminal by the user for a particular matter; for example, the user tells the voice assistant "I'm working overtime tonight, please decline all dinner invitations for me" or "I'm on a business trip abroad; if a courier delivery arrives, have it forwarded to xxx." The mobile terminal can then set a dinner-invitation event and a courier-delivery event respectively, and when it receives messages to be responded to that relate to dinner invitations or courier deliveries, it can respond to these two types of messages directly according to the intelligent response method provided by the embodiment of the present application.
A schedule plan and a meeting plan are characterized by a clear scheduled time period during which the user does not operate the mobile terminal. In this case, when the mobile terminal receives a message to be responded to, it does not need to first determine, per step S102, that the message has not been responded to within the preset time length before executing step S103; instead, it can execute step S103 directly upon receiving the message and determining that it was received within the scheduled time period of the schedule plan or meeting plan.
Compared with a schedule plan or a meeting plan, a preset event has no scheduled time period; instead, the matter corresponding to the message to be responded to is set in advance. If the matter corresponding to the message is a preset event configured on the mobile terminal, the terminal can execute step S103 directly after receiving the message and determining that its corresponding matter is a preset event, without determining per step S102 that the message has not been responded to within the preset time length.
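The three triggers described above (no response within a preset time length, a current schedule or meeting window, or a matching preset event) could be combined roughly as in the following sketch; the parameter names and the two-minute default timeout are purely illustrative assumptions.

```python
import datetime as dt


def should_handle_automatically(msg: PendingMessage,
                                received_at: dt.datetime,
                                now: dt.datetime,
                                no_response_timeout: dt.timedelta = dt.timedelta(minutes=2),
                                schedule_windows: tuple = (),
                                preset_event_keywords: tuple = ()) -> bool:
    """Return True if the terminal should respond on the user's behalf."""
    if now - received_at >= no_response_timeout:          # no response within preset length
        return True
    if any(start <= received_at <= end for start, end in schedule_windows):
        return True                                       # inside a schedule/meeting window
    words = set(msg.text.lower().split())
    if any(words & keywords for keywords in preset_event_keywords):
        return True                                       # matches a preset event
    return False
```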
And S103, intercepting (hanging up) the voice incoming call.
The provided intelligent response method continues to be introduced by taking a voice incoming call as an example. When the mobile terminal receives the voice incoming call and determines that it has not been answered or replied to by the user within the preset time length, the mobile terminal can actively intercept (hang up) the call. Voice incoming calls include incoming calls on the mobile terminal and voice calls of applications (e.g., social applications) on the mobile terminal.
And S104, sending the voice message acquisition instruction and/or the character message acquisition instruction to the sender of the message to be responded to.
And S105, acquiring the voice message and/or the character message fed back by the sender of the message to be responded, and taking the voice message and/or the character message as the message to be responded.
After intercepting (hanging up) the voice incoming call, the mobile terminal can send a voice message acquisition instruction and/or a character message acquisition instruction to the sender of the message to be responded to, so as to acquire the sender's voice message and/or character message. These messages serve as the newly received message to be responded to on the mobile terminal, and the subsequent steps of the intelligent response method continue to process them. The embodiment of the present application continues the description by taking the obtained voice message as an example.
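A sketch of steps S103 to S105 under an assumed, simplified telephony interface follows; the `call` object and its methods are hypothetical, and the point is only to show the hang-up / prompt / collect-feedback flow.

```python
def handle_unanswered_voice_call(call) -> PendingMessage:
    """Intercept the call, ask the caller to leave a message, and wrap the
    feedback as the new message to be responded to (steps S103-S105)."""
    call.hang_up()  # S103: intercept (hang up) the voice incoming call
    # S104: send the voice/character message acquisition instruction to the caller
    call.send_prompt("The user cannot answer right now; please leave a message.")
    feedback = call.collect_message()  # S105: obtain the caller's feedback
    if feedback.is_audio:
        return PendingMessage(MessageType.VOICE_MESSAGE, call.caller_id,
                              audio_path=feedback.audio_path)
    return PendingMessage(MessageType.CHARACTER_MESSAGE, call.caller_id,
                          text=feedback.text)
```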
S106, converting the voice message into a character message, calculating the urgency of the voice message and the urgency of the character message, and obtaining the urgency of the message to be responded based on the urgency of the voice message and the urgency of the character message.
After receiving the voice message in response to the voice acquisition instruction, the mobile terminal also converts the voice message into a character message, calculates the urgency of the voice message and of the character message corresponding to it, and obtains the urgency of the message to be responded to (the voice message in this embodiment) by applying a preset operation to the two values. For example, if the urgency score of the tone and emotion of the voice message is P1 and the urgency score of its character message (text content) is P2, the overall urgency of the message to be responded to (the voice message) is P = P1 + P2.
In some other examples, if a character message is received in step S105, step S106 only needs to calculate the urgency of the character message and use it directly as the urgency of the message to be responded to; for example, the overall urgency of the character message is P = P2. In other examples, if the message to be responded to in step S101 is a character message and/or a voice message, its urgency may be calculated directly without performing steps S103 to S105. The way the urgency is determined differs for different kinds of messages to be responded to, as introduced below:
For a character message, its urgency can be determined based on text sentiment analysis, which refers to the process of analyzing, processing, and extracting subjective text with emotional coloring by using natural language processing and text mining techniques. The basic process of text sentiment analysis includes acquiring the raw text (e.g., by crawling), text preprocessing, building the corpus and sentiment lexicon, and producing the sentiment analysis result.
For a voice message, its urgency may be obtained from the urgency of the corresponding character message and the urgency of the speech emotion according to a preset operation; in some examples, it may be the sum of the two. The urgency of the character message can be determined by text sentiment analysis as above, while the urgency of the speech emotion can be determined by speech emotion recognition, which extracts the emotional information contained in the speech and recognizes its category. There are currently two main approaches to describing emotion: the first is based on discrete emotion categories, dividing the basic emotions widely used in daily life into anger, joy, excitement, sadness, aversion, and the like; the other is based on continuous dimensional emotion classification, which distinguishes different emotions mainly by different degrees of valence and activation (arousal).
And S107, judging whether the urgency degree of the message to be responded exceeds a preset threshold value.
And S108, if the urgency degree of the message to be responded exceeds a preset threshold value, starting an emergency prompt for the message to be responded.
The emergency alert of the mobile terminal may include sounding an emergency alert tone, turning on a breathing light for constant illumination or flashing, shaking, and/or turning on a flash for constant illumination and/or flashing.
And S109, if the urgency of the message to be responded does not exceed the preset threshold, replying the message to be responded according to the preset response content.
The preset response content includes character response content and/or voice response content, and the response content may be user-defined setting or factory setting on the mobile terminal.
Several examples in which the intelligent response method provided by the embodiments of the present application can be used are given here:
the ringing of the mobile phone in the working meeting is a very unfortunate behavior, a user can add a schedule plan of the working meeting on a mobile terminal, the mobile terminal can adjust the mobile terminal into a do-not-disturb mode and an intelligent reply mode within the planning time according to the schedule plan, the mobile terminal does not have any prompting sound and prompting lamp when receiving a voice call, a character message and a voice message in the do-not-disturb mode, but can intelligently respond to the received message to be responded under the condition that the mobile terminal starts the intelligent response mode, for example, the mobile terminal receives a sick telephone of a family in the meeting process, the mobile terminal can enable the opposite side to leave a message to obtain the voice message, then converts the obtained voice message into the character message, and then carries out urgency scoring and adding on the voice emotion and the character message of the voice message to obtain the urgency scoring of the voice message finally, if the mobile terminal judges that the urgency level of a voice message (the voice message acquired based on the ill-conditioned telephone of the family) exceeds a preset threshold, the voice message is determined to belong to an emergency event, and even if the mobile terminal is in the do-not-disturb mode, the mobile terminal directly sends out an emergency prompt tone and turns on emergency light to prompt a user to process the voice message in the first time.
For incoming calls or messages that are not urgent and not covered by a preset set by the user, the mobile terminal can simply record the request and prompt the user to reply when the user is free. For example, the user receives a call from a friend during a company meeting inviting them to play ball on the weekend. The mobile terminal can answer intelligently and ask the caller to leave a message to obtain a voice message, and then score the urgency of the voice message: the urgency score of the tone and emotion of the voice message is recorded as P1, the urgency score of its character message (text content) as P2, and the overall urgency of the message to be responded to (the voice message) is P = P1 + P2. If P is smaller than the preset threshold, the voice message can be recorded and the user reminded to reply after the meeting ends.
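A sketch of this "record it and remind the user later" behaviour for non-urgent messages is given below; the reminder hook and the in-memory list are assumptions made only for illustration.

```python
deferred_messages: list = []  # non-urgent messages to surface once the user is free


def defer_if_not_urgent(msg: PendingMessage, threshold: float = URGENCY_THRESHOLD) -> bool:
    """If P = P1 + P2 stays below the preset threshold, record the request."""
    if urgency_of_message(msg) < threshold:
        deferred_messages.append(msg)
        return True
    return False


def on_schedule_end() -> None:
    """Hypothetical hook called when the meeting/schedule window closes."""
    for msg in deferred_messages:
        print(f"Reminder: reply to the message from {msg.sender}")
    deferred_messages.clear()
```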
As another example, the many harassing and promotional calls encountered in daily life are very annoying. For this situation, the user can configure the mobile terminal in advance, for example by telling the voice assistant "decline all insurance promotion calls." Then, as long as the mobile terminal, in intelligent reply mode, recognizes a message to be responded to with a promotional intent, it can handle the message itself without pushing it to the user.
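A rough sketch of how a preset like "decline all insurance promotion calls" might be enforced follows; the keyword list stands in for real intent recognition and is purely an assumption.

```python
PROMOTION_KEYWORDS = {"insurance", "promotion", "loan", "discount"}  # illustrative only


def is_promotional(msg: PendingMessage) -> bool:
    """Crude keyword check standing in for promotional-intent recognition."""
    text = msg.text or speech_to_text(msg.audio_path)
    return any(keyword in text.lower() for keyword in PROMOTION_KEYWORDS)


def filter_promotional(msg: PendingMessage):
    """Handle promotional messages silently instead of pushing them to the user."""
    if is_promotional(msg):
        send_reply(msg.sender, "Not interested, thank you.")
        return None  # nothing is surfaced to the user
    return msg
```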
The intelligent response method provided by the embodiment of the present application enables the mobile terminal to respond to a received message to be responded to even when the user is not operating it, avoiding the poor experience caused to both the user and the message sender when the user misses an important message.
An embodiment of the present application further provides an intelligent response method, please refer to fig. 6, where the intelligent response method includes steps S21 to S24, where:
and S21, determining the preset response content to the preset event.
A preset event is set in advance on the mobile terminal by the user for a particular matter, for example a dinner-invitation event or a courier-delivery event. The mobile terminal can respond to the preset event according to preset response content; it may use the same preset response content for different preset events, or different preset response content for different preset events.
The preset response content is content set in advance by the user for responding to a message to be responded to received by the mobile terminal; it can be a voice message (a piece of audio) or a character message (comprising letters, numbers, operation symbols, punctuation marks, other symbols, and some functional symbols).
And S22, receiving the message to be responded.
The user's mobile terminal may receive a message to be responded to sent by a sender, which the user is expected to respond to. In some examples, the message to be responded to includes a voice incoming call, a character message, and/or a voice message. Optionally, a voice incoming call includes an incoming call on the mobile terminal and a voice call of an application (e.g., a social application) on the mobile terminal; a character message may be a short message on the mobile terminal or character information in an application (e.g., a social application) on the mobile terminal, and includes letters, numbers, operation symbols, punctuation marks, other symbols, and some functional symbols; a voice message may be a piece of audio received by the mobile terminal, which may be a voice message left for the user or a voice message of an application (e.g., a social application) on the mobile terminal. The embodiment of the present application continues to describe the provided intelligent response method by taking a voice incoming call as an example.
And S23, determining that the message to be responded corresponds to the preset event.
A received message to be responded to corresponds to some matter, and the mobile terminal may determine whether that matter is a preset event on the mobile terminal; if so, the next step S24 can be executed. Key information can be extracted from the message to be responded to and matched against the key information of the preset event; if the matching succeeds, the message is determined to correspond to the preset event. Depending on the message and the preset event, the key information may be field (text) information or audio information, and in some examples it may be both at the same time. For example, suppose the received message to be responded to is the character message "Shall we have dinner together this evening?" and the preset response content corresponding to the preset event (dinner invitation) set by the user is the voice message "I'm working overtime tonight, please decline all dinner invitations for me." The key information may then be determined to be "dinner" or "eat"; the key information in the message to be responded to matches the key information of the preset event (dinner invitation), so the message can be determined to correspond to the preset event (dinner invitation).
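A minimal sketch of the key-information matching in step S23, using a plain keyword-overlap test, is given below; the event table, the keywords, and the extraction rule are hypothetical examples rather than the application's actual matching algorithm.

```python
def extract_key_info(text: str, vocabulary: set) -> set:
    """Keep the words of the message that also appear in the key-information vocabulary."""
    return set(text.lower().replace("?", " ").split()) & vocabulary


def match_preset_event(text: str, preset_events: dict):
    """Return the name of the preset event whose key information the message matches."""
    for event_name, keywords in preset_events.items():
        if extract_key_info(text, keywords):
            return event_name
    return None


# Illustrative usage with hypothetical preset events:
events = {"dinner invitation": {"dinner", "eat"}, "courier delivery": {"courier", "parcel"}}
print(match_preset_event("shall we have dinner together this evening", events))
# -> "dinner invitation"
```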
And S24, responding the message to be responded according to the preset response content of the preset event and the preset response rule.
After the preset event corresponding to the message to be responded to is determined on the mobile terminal, the message can be responded to according to the preset response content corresponding to that preset event. The preset response content may be set by the user on the mobile terminal or be factory-set content used for responding to messages, and it may be character response content or voice response content.
In some examples, once the preset response content is determined, the message to be responded to may be responded to directly according to it; in other examples, the urgency of the message may be further determined, and the message is responded to according to the preset response content only after its urgency is determined to exceed a preset threshold.
With the intelligent response method provided by the embodiment of the present application, preset response content can be determined for a preset event; when the mobile terminal receives a message to be responded to and determines that it corresponds to the preset event, the message can be responded to according to the preset response content of the preset event and the preset response rule. In this way, the mobile terminal can respond to a received message even when the user is not operating it, avoiding the poor experience caused to both the user and the message sender when the user misses an important message.
Other embodiments of the intelligent response method are described below. Referring to fig. 7, the intelligent response method includes steps S201 to S209, where:
S201, starting a microphone to record a voice message, and taking the recorded voice message as the preset response content.
S202, determining preset response content to the preset event.
The preset event is set in advance on the mobile terminal by the user for a certain event, for example a meal invitation event or an express delivery receiving event. The mobile terminal may respond to the preset event according to preset response content; it may use uniform preset response content for different preset events, or different preset response content for different preset events.
The preset response content is content set in advance by the user for responding to a message to be responded received by the mobile terminal. It may be a voice message (a segment of audio) or a character message (including letters, numbers, operation symbols, punctuation marks, other symbols, and some functional symbols). If the preset response content is a voice message, a microphone may be started to record the voice message based on a user operation, and the recorded voice message is used as the preset response content. In some other examples, characters input by the user may be received based on a touch instruction to the mobile terminal, and the input character message may be used by the mobile terminal as the preset response content.
Taking the preset response content as a voice message as an example, the mobile terminal may start a microphone based on a user operation to record the voice message. For example, the user may tell the voice assistant "I am working overtime tonight, please decline all meal invitations for me" or "I am away on a business trip; if there is a courier, please forward it to xxxx". The mobile terminal may then set a meal invitation event and a courier receiving event respectively, and when receiving a meal invitation or a courier-related message to be responded, the mobile terminal may directly reply (respond) with the voice message and/or character message corresponding to the preset event, for example "Busy now, will reply later" or "Please forward the courier to site A".
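A small sketch of how the two setup paths described above (recording a voice message, or receiving typed characters) might populate a preset-response table follows; the class and field names are assumptions introduced only for illustration, not wording from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresetResponse:
    event_name: str                        # e.g. "meal_invitation", "courier_receiving"
    voice_audio: Optional[bytes] = None    # recorded voice message, if any
    character_text: Optional[str] = None   # typed character message, if any

preset_responses: dict[str, PresetResponse] = {}

def set_voice_response(event_name: str, recorded_audio: bytes) -> None:
    # S201-style path: a microphone recording becomes the preset response content.
    preset_responses[event_name] = PresetResponse(event_name, voice_audio=recorded_audio)

def set_character_response(event_name: str, typed_text: str) -> None:
    # Touch-input path: typed characters become the preset response content.
    preset_responses[event_name] = PresetResponse(event_name, character_text=typed_text)

set_character_response("meal_invitation", "Busy now, will reply later.")
set_voice_response("courier_receiving", b"<recorded audio bytes>")
```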
S203, receiving a message to be responded.
The mobile terminal of the user may receive a message to be responded sent by a sender, and the user may respond to it. In some examples, the message to be responded includes a voice incoming call, a character message, and/or a voice message. Optionally, the voice incoming call includes an incoming call on the mobile terminal and a voice call of an application program (e.g., a social application) on the mobile terminal; the character message may be a short message on the mobile terminal or character information in an application program (e.g., a social application) on the mobile terminal, and includes letters, numbers, operation symbols, punctuation marks, other symbols, and some functional symbols; the voice message may be a piece of audio received by the mobile terminal, such as a voicemail or a voice message from an application program (e.g., a social application) on the mobile terminal. The embodiment of the present application takes the voice incoming call as an example to continue introducing the provided intelligent response method.
And S204, determining that the message to be responded is not responded within the preset time length.
The mobile terminal may count time after receiving the message to be responded, and if the message to be responded is not processed by the user within the preset time length, the next step S205 may be executed.
In some other examples, there may be preset plans on the mobile terminal, including schedules that the user adds to a calendar and meeting plans announced in mailbox mail. During these preset plans the user will not view the mobile terminal, and the mobile terminal may respond to the message to be responded according to the intelligent response method provided by the embodiment of the present application. A schedule plan or conference plan is characterized by a clear scheduled time period during which the user does not operate the mobile terminal. In this case, if the mobile terminal receives a message to be responded, it does not need to wait, per step S204, for the message to remain unresponded within the preset time length before executing step S205; instead, it may execute step S205 directly when the message to be responded is received and is determined to occur within the preset time period of the schedule plan or conference plan.
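The two triggers described above (no response within a preset time length, or a message arriving inside a preset schedule period) could be combined along the lines of the sketch below; the timeout value and the schedule representation are illustrative assumptions.

```python
from datetime import datetime, timedelta

PRESET_TIMEOUT = timedelta(seconds=30)  # assumed "preset time length"

# Assumed schedule entries: (start, end) periods taken from the calendar or mailbox meeting plans.
preset_plans = [
    (datetime(2021, 5, 7, 14, 0), datetime(2021, 5, 7, 15, 30)),
]

def within_preset_plan(now: datetime) -> bool:
    return any(start <= now <= end for start, end in preset_plans)

def should_auto_respond(received_at: datetime, responded: bool, now: datetime) -> bool:
    if within_preset_plan(now):
        # Inside a schedule/conference period: respond immediately, skipping the timeout.
        return True
    # Otherwise wait until the preset time length elapses without a user response.
    return (not responded) and (now - received_at) >= PRESET_TIMEOUT
```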
And S205, cutting off the voice incoming call.
The embodiment of the present application continues to introduce the provided intelligent response method by taking the voice incoming call as an example. When the mobile terminal receives a voice incoming call and determines that the user has not responded to it within the preset time length, the mobile terminal may actively cut off (hang up) the voice incoming call. Voice incoming calls include incoming calls on the mobile terminal and voice calls of applications (e.g., social applications) on the mobile terminal.
S206, sending the voice message acquisition instruction to a sender of the message to be responded.
S207, acquiring the voice message and/or the character message fed back by the sender of the message to be responded, and taking the voice message and/or the character message as the message to be responded.
After the voice incoming call is cut off (hung up), the mobile terminal may send a voice message acquisition instruction and/or a character message acquisition instruction to the sender of the message to be responded, so as to acquire a voice message and/or a character message from the sender. These messages may be taken by the mobile terminal as the newly received message to be responded, and the subsequent steps of the intelligent response method continue to process this newly received message. The embodiment of the present application takes an acquired voice message as an example and continues the description.
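Taken together, steps S205 to S207 amount to the small orchestration sketched below; `hang_up`, `send_acquisition_request`, and `wait_for_feedback` are hypothetical placeholders for whatever telephony and messaging interfaces the terminal actually exposes, which the patent does not specify.

```python
def handle_unanswered_voice_call(call, hang_up, send_acquisition_request, wait_for_feedback):
    """Hypothetical sketch of S205-S207: cut off the call, ask the sender for a
    voice/character message, and treat the feedback as the new message to be responded."""
    hang_up(call)                                            # S205: cut off the voice incoming call
    send_acquisition_request(call.sender,
                             kinds=("voice", "character"))   # S206: send acquisition instruction
    feedback = wait_for_feedback(call.sender)                # S207: voice and/or character message
    return feedback                                          # becomes the new message to be responded
```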
And S208, determining that the message to be responded corresponds to the preset event.
The received message to be responded corresponds to an event, and the mobile terminal may determine whether that event is a preset event on the mobile terminal; if so, the method may proceed to the next step S209. Key information may be extracted from the message to be responded and matched against the key information of the preset event; if the matching succeeds, the message to be responded is determined to correspond to the preset event. For both the message to be responded and the preset event, the key information may be field (text) information or audio information, and in some examples both at once. For example, if the received message to be responded is the character message "How about eating together this evening?" and the preset response content corresponding to the preset event (meal invitation) set by the user is the voice message "I am working overtime tonight, please decline all meal invitations for me", the key information may be determined to be "meal" or "eat"; the key information in the message to be responded matches the key information of the preset event (meal invitation), so the message to be responded is determined to correspond to the preset event (meal invitation).
S209, determining the urgency of the message to be responded.
In the embodiment of the present application, after receiving the voice message based on the voice acquisition instruction, the mobile terminal needs to convert the voice message into a character message, calculate the urgency of the voice message and of the corresponding character message, and obtain the urgency of the message to be responded (the voice message in this embodiment) by performing a preset operation on the two urgency values. For example, if the urgency score of the tone and emotion of the voice message is P1, and the urgency score of the character message (text content) converted from the voice message is P2, the overall urgency of the message to be responded (voice message) is P = P1 + P2.
In some other examples, if a character message is received in step S105, step S106 only needs to calculate the urgency of the character message and directly use it as the urgency of the message to be responded; for example, the overall urgency of the character message is P = P2. In other examples, if the message to be responded in step S101 is already a character message and/or a voice message, the urgency of the message to be responded (character message and/or voice message) may be calculated directly without performing steps S103 to S105.
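The preset operation above reduces to the sum P = P1 + P2 for a voice message and P = P2 for a plain character message; a sketch is given below, with stub scoring functions standing in for the sentiment-analysis and speech-emotion components discussed next.

```python
from typing import Optional

def urgency_of_text(character_message: str) -> float:
    # Stub standing in for text sentiment analysis; would return P2.
    return 0.0

def urgency_of_voice_emotion(voice_audio: bytes) -> float:
    # Stub standing in for speech emotion recognition; would return P1.
    return 0.0

def urgency_of_message(voice_audio: Optional[bytes] = None,
                       character_message: Optional[str] = None) -> float:
    if voice_audio is not None and character_message is not None:
        # Voice message: speech is converted to text first, then P = P1 + P2.
        return urgency_of_voice_emotion(voice_audio) + urgency_of_text(character_message)
    if character_message is not None:
        # Plain character message: P = P2 is used directly.
        return urgency_of_text(character_message)
    raise ValueError("no message content supplied")
```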
It can be understood that messages to be responded received by the mobile terminal vary in actual urgency: some are urgent and require the user to respond immediately, such as a voice call from a takeaway courier, while others are not urgent, such as an express delivery notification short message. The method of determining the urgency differs for different kinds of messages to be responded, as introduced below:
For character messages, the urgency may be determined based on text sentiment analysis (Sentiment Analysis), which refers to the process of analyzing, processing, and extracting subjective text with emotional coloring by using natural language processing and text mining technologies. The basic process of text sentiment analysis includes crawling the original text, text preprocessing, construction of corpora and sentiment lexicons, and generation of sentiment analysis results.
For a voice message, the urgency may be obtained from the urgency of the character message and the urgency of the voice emotion according to a preset operation; in some examples it may be the sum of the two. For character messages, the urgency may be determined by text sentiment analysis; for voice emotion, the urgency may be determined by speech emotion recognition, which extracts the emotional information contained in speech and recognizes its category. There are currently two main approaches to describing emotion. The first is based on discrete emotion classification, dividing emotions into basic categories widely used in daily life, such as anger, joy, excitement, sadness, and disgust; the other is based on continuous dimensional emotion classification, mainly distinguishing different emotions by different valence and activation (arousal) levels.
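Neither the sentiment pipeline nor the speech-emotion recognizer is specified in detail, so the toy scorer below only illustrates the idea of mapping lexicon hits and recognized emotion categories onto an urgency value; the word lists, category weights, and scores are all assumptions.

```python
# Assumed urgency lexicon for character messages.
URGENT_WORDS = {"urgent": 2.0, "immediately": 2.0, "asap": 2.0, "emergency": 3.0, "now": 1.0}

# Assumed mapping from discrete emotion categories to urgency contributions.
EMOTION_WEIGHTS = {"anger": 2.0, "excitement": 1.5, "sadness": 1.0,
                   "joy": 0.5, "disgust": 1.0, "neutral": 0.0}

def text_urgency(character_message: str) -> float:
    words = character_message.lower().split()
    return sum(URGENT_WORDS.get(w.strip("!,.?"), 0.0) for w in words)

def emotion_urgency(emotion_category: str, activation: float = 0.5) -> float:
    # Combines a discrete category with a continuous activation (arousal) level in [0, 1].
    return EMOTION_WEIGHTS.get(emotion_category, 0.0) * (0.5 + activation)

print(text_urgency("Please call back immediately, it is urgent!"))  # 4.0
print(emotion_urgency("anger", activation=0.8))                     # 2.6
```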
S210, judging whether the urgency degree of the message to be responded exceeds a preset threshold value.
And S211, if the urgency degree of the message to be responded exceeds a preset threshold, starting an emergency prompt for the message to be responded.
The emergency prompt of the mobile terminal may include sounding an emergency alert tone, turning on a breathing light (constantly lit or flashing), vibrating, and/or turning on a flash (constantly lit and/or flashing).
S212, if the urgency of the message to be responded does not exceed the preset threshold, replying the message to be responded according to preset response content, wherein the preset response content comprises character response content and/or voice response content.
The preset response content includes character response content and/or voice response content, and the response content may be a user-defined setting or a factory setting on the mobile terminal.
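Steps S210 to S212 reduce to a single comparison against the preset threshold, as in the sketch below; the threshold value and the prompt/reply callables are illustrative assumptions, since the patent does not fix either.

```python
PRESET_THRESHOLD = 3.0  # assumed value; the preset threshold is left unspecified

def respond(urgency: float, preset_response: str, start_emergency_prompt, send_reply):
    if urgency > PRESET_THRESHOLD:
        # S211: urgent message -> alert the user (alert tone, breathing light, vibration, flash).
        start_emergency_prompt()
    else:
        # S212: not urgent -> reply automatically with the preset character/voice response.
        send_reply(preset_response)
```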
Several examples are given here of scenarios that can be handled using the intelligent response method provided by the embodiments of the present application:
If the user has enabled the intelligent response mode on the mobile terminal in advance and has set a preset event on it, for example by telling the voice assistant "when my takeaway arrives, just leave it at the door", the mobile terminal may set a takeaway event, and the preset response content corresponding to the takeaway event may be a character message (the takeaway call dialed by the courier is answered with a short message) or a voice message (the takeaway call dialed by the courier is answered with a voice message). After setting the takeaway event and before the takeaway is received, the user can safely do other things, for example play a game; if the mobile terminal receives a takeaway call during the game, it may hang up the call and automatically send a short message (preset response content of the character message type) to the courier asking that the takeaway be left at the door.
Optionally, when the user goes to take a bath, the intelligent response mode may be enabled on the mobile terminal, a preset event "bathing" may be set, and the preset response content of the preset event may be "Busy now, will reply later". When the mobile terminal receives a voice incoming call, a character message, or a voice message, it may reply according to the preset response content "Busy now, will reply later". Optionally, the mobile terminal may also summarize (record) the received messages to be responded even after performing the intelligent response, so that the user can later see the content of the messages to be responded.
For another example, many harassing calls and promotional calls in daily life are very annoying. For this situation, the user may preset the mobile terminal, for example by telling the voice assistant "refuse all insurance promotion calls"; then, as long as the mobile terminal recognizes a message to be responded with a promotional intention in the intelligent response mode, it may process the message directly without pushing it to the user.
According to the intelligent response method provided by the embodiment of the present application, the mobile terminal can respond to the received message to be responded even when the user is not operating the mobile terminal, thereby avoiding the poor experience brought to the user and the message sender when the user misses important messages.
The present application also provides a mobile terminal. The mobile terminal includes a memory and a processor; the memory stores an intelligent response program, and the intelligent response program, when executed by the processor, implements the steps of the intelligent response method in any of the above embodiments.
The present application further provides a computer-readable storage medium, on which an intelligent response program is stored, and when being executed by a processor, the intelligent response program implements the steps of the intelligent response method in any of the above embodiments.
In the embodiments of the mobile terminal and the computer-readable storage medium provided in the present application, all technical features of the embodiments of the intelligent response method are included, and the expanding and explaining contents of the specification are basically the same as those of the embodiments of the method, and are not described herein again.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions will be generally described only in detail at the first occurrence, and when the description is repeated later, the detailed description will not be repeated in general for brevity, and when understanding the technical solutions and the like of the present application, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solution of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, the scope of the present application should be considered as being described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (13)

1. An intelligent response method, the method comprising:
s11, receiving a message to be responded;
s12, determining the urgency of the message to be responded;
and S13, if the urgency of the message to be responded meets the preset response condition, responding the message to be responded according to the preset response rule.
2. The method of claim 1, comprising at least one of:
the message to be responded comprises at least one of a voice incoming call, a character message and a voice message;
if the message to be responded received in step S11 is a voice incoming call, step S12 further includes: intercepting the voice incoming call, sending a voice message acquisition instruction and/or a character message acquisition instruction to a sender of the message to be responded, acquiring a voice message and/or a character message fed back by the sender of the message to be responded, and taking the voice message and/or the character message as the message to be responded;
the step S12 is preceded by: and determining that the message to be responded is not responded within a preset time length, and/or determining that a preset plan exists at the current time point.
3. The method of claim 2, wherein the step S12 includes at least one of:
if the message to be responded is a voice message, converting the voice message into a character message, and obtaining the urgency of the message to be responded based on the urgency of the voice message and/or the urgency of the character message;
and if the message to be responded is a character message, taking the urgency degree of the character message as the urgency degree of the message to be responded.
4. The method of claim 3, wherein the step S13 includes:
judging whether the urgency degree of the message to be responded exceeds a preset threshold value, and if so, starting an emergency prompt for the message to be responded; and/or,
if not, replying the message to be responded according to preset response content.
5. The method of claim 2, wherein the predetermined plan comprises at least one of:
schedule plan, meeting plan in mailbox, and preset event added before step S11.
6. An intelligent response method, the method comprising:
s21, determining preset response content to the preset event;
s22, receiving a message to be responded;
s23, determining that the message to be responded corresponds to the preset event;
and S24, responding the message to be responded according to the preset response content of the preset event and the preset response rule.
7. The method of claim 6, comprising at least one of:
the message to be responded comprises at least one of a voice incoming call, a character message and a voice message;
if the message to be responded in step S22 is an incoming voice call, step S23 further includes: intercepting the voice incoming call, sending a voice message acquisition instruction and/or a character message acquisition instruction to a sender of the message to be responded, acquiring a voice message and/or a character message fed back by the sender of the message to be responded, and taking the voice message and/or the character message as the message to be responded;
the step S21 includes: receiving input characters, using the input characters as the preset response content, and/or recording voice messages, and using the recorded voice messages as the preset response content;
the step S23 is preceded by: and determining that the message to be responded is not responded within a preset time length.
8. The method of claim 6 or 7, comprising at least one of:
the step S23 includes: extracting key information from the message to be responded, matching the key information with the key information of the preset event, and determining that the message to be responded corresponds to the preset event if the matching is successful;
the step S24 includes: and determining the urgency degree of the message to be responded, and if the urgency degree of the message to be responded meets a preset response condition, sending the preset response content to a sender of the message to be responded.
9. The method of claim 8, wherein the determining the urgency of the message to respond comprises at least one of:
if the message to be responded is a voice message, converting the voice message into a character message, and obtaining the urgency of the message to be responded based on the urgency of the voice message and/or the urgency of the character message;
and if the message to be responded is a character message, taking the urgency degree of the character message as the urgency degree of the message to be responded.
10. The method according to claim 8 or 9, wherein the sending the preset response content to the sender of the message to be responded if the urgency of the message to be responded meets a preset response condition comprises:
judging whether the urgency degree of the message to be responded exceeds a preset threshold value, and if so, starting an emergency prompt for the message to be responded; and/or,
if not, replying the message to be responded according to preset response content.
11. The method of claim 7, wherein the predetermined plan comprises at least one of:
schedule plan, meeting plan in mailbox.
12. A mobile terminal, characterized in that the mobile terminal comprises: a memory and a processor, wherein the memory has stored thereon an intelligent response program which, when executed by the processor, implements the steps of the intelligent response method according to any one of claims 1 to 11.
13. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the intelligent response method according to any one of claims 1 to 11.
CN202011445925.9A 2020-12-08 2020-12-08 Intelligent response method, mobile terminal and storage medium Pending CN112770005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011445925.9A CN112770005A (en) 2020-12-08 2020-12-08 Intelligent response method, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011445925.9A CN112770005A (en) 2020-12-08 2020-12-08 Intelligent response method, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112770005A true CN112770005A (en) 2021-05-07

Family

ID=75693638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011445925.9A Pending CN112770005A (en) 2020-12-08 2020-12-08 Intelligent response method, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112770005A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710377A (en) * 2022-03-31 2022-07-05 北京贝壳时代网络科技有限公司 Notification method, notification device, storage medium and computer program product


Similar Documents

Publication Publication Date Title
CN111343081B (en) Information display method and electronic equipment
KR101920019B1 (en) Apparatus and method for processing a call service of mobile terminal
CN108183853A (en) Message prompt method, mobile terminal and readable storage medium storing program for executing
CN106161749B (en) Malicious telephone identification method and device
CN107807767B (en) Communication service processing method, terminal and computer readable storage medium
CN107621915B (en) Message reminding method, device and computer storage medium
CN109587319B (en) Incoming call processing method, terminal and computer readable storage medium
CN106664336B (en) Method and terminal for processing communication event
CN108494943A (en) Message sink sends processing method, terminal and computer readable storage medium
CN109729210B (en) Information display method and terminal equipment
CN115022457A (en) Reminding method, intelligent terminal and storage medium
CN108053184B (en) Item prompting method, mobile terminal and computer readable storage medium
CN107347114B (en) Voice information receiving and sending control method and terminal
CN113485899A (en) Information processing method, terminal device and storage medium
CN108769384A (en) Call processing method, terminal and computer readable storage medium
CN112770005A (en) Intelligent response method, mobile terminal and storage medium
CN108566476B (en) Information processing method, terminal and computer readable storage medium
CN109889646A (en) A kind of call processing method, mobile terminal and storage medium
CN107528770B (en) Display method, terminal and computer storage medium
CN109558503B (en) Expression pack display method, mobile terminal and computer readable storage medium
CN108196926B (en) Platform content identification method, terminal and computer readable storage medium
CN108650403B (en) Message sending method, mobile terminal and server
CN112468650A (en) Information reply method, terminal device and storage medium
CN112565517A (en) Notification message processing method, mobile terminal and storage medium
CN109348038B (en) Incoming call processing method, mobile terminal and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination