CN112927699A - Voice communication method, system, equipment and storage medium


Info

Publication number: CN112927699A
Application number: CN202110140501.XA
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: information, voice, text information, speaker, text
Inventors: 吕刚, 张珉
Current Assignee: Shanghai Shimao Internet of Things Technology Co., Ltd.
Original Assignee: Shanghai Shimao Internet of Things Technology Co., Ltd.
Application filed by Shanghai Shimao Internet of Things Technology Co., Ltd.
Priority to CN202110140501.XA
Publication of CN112927699A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272: Voice signal separating
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M3/00: Automatic or semi-automatic exchanges
    • H04M3/42: Systems providing special services or facilities to subscribers
    • H04M3/50: Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M3/51: Centralised call answering arrangements requiring operator intervention, e.g. call or contact centres for telemarketing

Abstract

The system comprises an intelligent telephone unit, a business service unit, a role separation unit, a voice recognition unit and a telephone traffic quality inspection unit. The system can separate and recognize, in real time, the conversation between the user and the customer service, and feed precise questions and the related dialogue back to the customer service, so that the customer service can judge the customer's intention in time and obtain effective customer information; by means of real-time separation and voice recognition, the customer profile can be drawn directly online, which makes it convenient for the customer service to adjust the conversation script.

Description

Voice communication method, system, equipment and storage medium
Technical Field
The present application relates to the field of smart phone technologies, and in particular, to a voice communication method, system, device, and storage medium.
Background
In the related art, telephones are aimed mainly at ordinary users; inside an enterprise or organization they generally provide only basic functions such as completing calls and recording call information.
However, in order to improve communication efficiency and obtain more effective information, the customer service needs to know the customer's relevant information in advance. This requires the customer service to perform information statistics, customer tracking, business return visits, customer service operations, and the like. Because these tasks are handled manually, efficiency is low: during a conversation the customer's intention cannot be judged in time, and effective information cannot be acquired.
At present, no effective solution has been proposed in the related art for the problems that the customer's intention cannot be judged in time and effective information cannot be obtained.
Disclosure of Invention
The embodiments of the present application provide a voice communication method, system, equipment and storage medium, which at least solve the problems in the related art that the customer's intention cannot be judged in time and effective information cannot be obtained.
In a first aspect, an embodiment of the present application provides a voice communication method, including:
acquiring first voiceprint information of a first speaker and second voiceprint information of a second speaker;
identifying the first voiceprint information, and judging whether the first speaker is a registered user;
under the condition that the first speaker is a registered user, acquiring and displaying related information of the first speaker for reference by the second speaker;
separating real-time call information between the first speaker and the second speaker according to the first voiceprint information and the second voiceprint information to obtain first audio information and second audio information, wherein the first audio information is the voice information of the first speaker, and the second audio information is the voice information of the second speaker;
recognizing the first audio information to obtain first text information, and recognizing the second audio information to obtain second text information;
judging whether the first text information and the second text information comprise key information;
under the condition that the first text information and/or the second text information comprise the key information, generating and displaying question text information, so that the second speaker asks the first speaker a question and acquires answer information corresponding to the question text information;
and updating the related information of the first speaker under the condition that the answer information is acquired.
In some of these embodiments, recognizing the first audio information to obtain the first text information and recognizing the second audio information to obtain the second text information comprises:
generating a first timestamp corresponding to the first text information and a second timestamp corresponding to the second text information;
and assembling the first text information and the second text information to form a first dialogue log according to the relative time sequence of the first time stamp and the second time stamp.
In some embodiments, after updating the related information of the first speaker, the method further comprises:
generating call recording information of the first speaker and the second speaker;
separating the call recording information to obtain third audio information and fourth audio information, wherein the third audio information is the voice information of the first speaker, and the fourth audio information is the voice information of the second speaker;
recognizing the third audio information to obtain third text information, and recognizing the fourth audio information to obtain fourth text information;
searching a database, and judging whether the third text information and the fourth text information comprise sensitive information;
under the condition that the third text information comprises the sensitive information, labeling the third text information and updating the related information of the first speaker; and/or
under the condition that the fourth text information comprises the sensitive information, labeling the fourth text information and generating warning information related to the second speaker.
In some of these embodiments, recognizing the third audio information to obtain the third text information and recognizing the fourth audio information to obtain the fourth text information comprises:
generating a third timestamp corresponding to the third text information and a fourth timestamp corresponding to the fourth text information;
and assembling the third text information and the fourth text information to form a second dialogue log according to the relative time sequence of the third timestamp and the fourth timestamp.
In some embodiments, after identifying the first voiceprint information and determining whether the first speaker is a registered user, the method further comprises:
under the condition that the first speaker is an unregistered user, acquiring a first voiceprint feature of the first voiceprint information;
acquiring related information of the first speaker, and binding the related information with the first voiceprint feature;
and after the related information is bound with the first voiceprint feature, marking the first speaker as a registered user.
In some of these embodiments, after recognizing the first audio information to obtain the first text information and recognizing the second audio information to obtain the second text information, the method further comprises:
judging whether the first text information and the second text information comprise sensitive information or not;
in the case that the first text information includes sensitive information, generating and displaying suggested text information so that the second speaker provides a suggestion to the first speaker; and/or
in the case that the second text information includes sensitive information, cutting off a first call connection between the first speaker and the second speaker, and establishing a second call connection between the first speaker and a third speaker.
In a second aspect, an embodiment of the present application provides a voice communication system, including:
the intelligent telephone unit is used for acquiring first voiceprint information of a first speaker and second voiceprint information of a second speaker;
the business service unit is used for identifying the first voiceprint information, judging whether the first speaker is a registered user, and acquiring, when the first speaker is a registered user, the related information of the first speaker for reference by the second speaker;
the role separation unit is used for separating real-time call information between the first speaker and the second speaker to obtain first audio information and second audio information, wherein the first audio information is the voice information of the first speaker, and the second audio information is the voice information of the second speaker;
the voice recognition unit is used for recognizing the first audio information to obtain first text information and recognizing the second audio information to obtain second text information;
the telephone traffic quality inspection unit is used for judging whether the first text information and the second text information comprise key information or not, and generating question text information under the condition that the first text information and/or the second text information comprise the key information so that the second speaker asks a question to the first speaker and acquires answer information corresponding to the question text information;
the intelligent telephone unit is further configured to display related information of the first speaker and the question text information, and the service unit is further configured to update the related information of the first speaker when the answer information is acquired.
In some embodiments, the business service unit is further configured to generate call recording information of the first speaker and the second speaker;
the role separation unit is further configured to separate the call recording information to obtain third audio information and fourth audio information, where the third audio information is the voice information of the first speaker and the fourth audio information is the voice information of the second speaker;
the voice recognition unit is further configured to recognize the third audio information to obtain third text information and to recognize the fourth audio information to obtain fourth text information;
the telephone traffic quality inspection unit is further configured to retrieve the database and determine whether the third text information and the fourth text information include sensitive information;
the business service unit is further configured to label the third text information and update the related information of the first speaker when the third text information includes the sensitive information; and/or to label the fourth text information and generate warning information related to the second speaker when the fourth text information includes the sensitive information.
In a third aspect, an embodiment of the present application provides a computer device, including:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor, which, when executed by the at least one processor, causes the at least one processor to perform the voice communication method described above.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the voice communication method as described above.
Compared with the related art, the voice communication method, system, equipment and storage medium provided by the embodiments of the present application can separate and recognize the conversation between the user and the customer service in real time, and feed precise questions and the related dialogue back to the customer service, so that the customer service can judge the customer's intention in time and acquire effective customer information; by means of real-time separation and voice recognition, the customer profile can be drawn directly online, which makes it convenient for the customer service to adjust the conversation script.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
Fig. 1 is a block diagram of a terminal according to an embodiment of the present application;
Fig. 2 is a flowchart (one) of a voice communication method according to an embodiment of the present application;
Fig. 3 is a flowchart (two) of a voice communication method according to an embodiment of the present application;
Fig. 4 is a flowchart (three) of a voice communication method according to an embodiment of the present application;
Fig. 5 is a flowchart (four) of a voice communication method according to an embodiment of the present application;
Fig. 6 is a flowchart (five) of a voice communication method according to an embodiment of the present application;
Fig. 7 is a flowchart (six) of a voice communication method according to an embodiment of the present application;
Fig. 8 is a block diagram of a voice communication system according to an embodiment of the present application;
Fig. 9 is a block diagram of a voice communication system and method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number and may refer to the singular or the plural. In the present application, the terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
Fig. 1 is a block diagram of a terminal according to an embodiment of the present application. As shown in fig. 1, the terminal includes: a Radio Frequency (RF) circuit 110, a memory 120, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless fidelity (WiFi) module 170, a processor 180, and a power supply 190. Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, may combine some components, or may arrange the components differently.
The following describes the various components of the terminal in detail with reference to fig. 1:
the RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 180; in addition, the data for designing uplink is transmitted to the base station. In general, RF circuits include, but are not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 120 may be used to store software programs and modules, and the processor 180 executes the various functional applications and data processing of the terminal by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal. Further, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also referred to as a touch screen, may collect touch operations of a user on or near it (e.g., operations performed on or near the touch panel 131 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch panel 131 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 131, the input unit 130 may include other input devices 132. In particular, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by a user or information provided to the user and various menus of the terminal. The Display unit 140 may include a Display panel 141, and optionally, the Display panel 141 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 131 can cover the display panel 141, and when the touch panel 131 detects a touch operation on or near the touch panel 131, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although the touch panel 131 and the display panel 141 are shown in fig. 1 as two separate components to implement the input and output functions of the terminal, in some embodiments, the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the mobile terminal.
The terminal may also include at least one sensor 150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 141 and/or a backlight when the terminal is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer, tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
A speaker 161 and a microphone 162 in the audio circuit 160 may provide an audio interface between the user and the terminal. On one hand, the audio circuit 160 may convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; the audio data are then output to the processor 180 for processing and sent via the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and the terminal can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the WiFi module 170, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 170, it is understood that it does not belong to the essential constitution of the terminal, and it can be omitted or replaced with other short-range wireless transmission modules, such as Zigbee module, or WAPI module, etc., as necessary within the scope not changing the essence of the invention.
The processor 180 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the terminal. Alternatively, processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The terminal also includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 180 via a power management system to manage charging, discharging, and power consumption via the power management system.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which will not be described herein.
Fig. 2 is a flowchart (one) of a voice communication method according to an embodiment of the present application. As shown in fig. 2, a voice communication method includes:
step S202, acquiring first voiceprint information of a first speaker and second voiceprint information of a second speaker;
step S204, identifying first voiceprint information and judging whether the first speaker is a registered user or not;
step S206, under the condition that the first speaker is a registered user, acquiring and displaying the related information of the first speaker for reference of the second speaker;
step S208, separating real-time call information between the first speaker and the second speaker according to the first voiceprint information and the second voiceprint information to obtain first audio information and second audio information, wherein the first audio information is the voice information of the first speaker, and the second audio information is the voice information of the second speaker;
step S210, identifying first audio information to obtain first text information, and identifying second audio information to obtain second text information;
step S212, judging whether the first text information and the second text information comprise key information;
step S214, under the condition that the first text information and/or the second text information comprise key information, generating and displaying question text information so that the second speaker asks a question to the first speaker and obtains answer information corresponding to the question text information;
and step S216, updating the related information of the first speaker under the condition that the answer information is acquired.
In step S202, the first speaker is a customer, and the second speaker is a customer service agent.
In step S202, since each speaker's voiceprint information is unique, a speaker's identity can be determined from the voiceprint information. The purpose of obtaining the voiceprint information is to facilitate subsequent feature judgment of the first speaker and the second speaker, so that relevant information matching them can be quickly retrieved from an existing database; this makes it easier to locate the first speaker and the second speaker later, to update the first speaker's information, and to evaluate the second speaker.
In addition, the purpose of obtaining the second voiceprint information is to identify the customer service agent, so that the agent's calls are recorded for evaluation.
In steps S204 to S206, it is determined whether the first speaker is a registered user for the purpose of quickly acquiring the related information of the first speaker, and when the first speaker is a registered user, the second speaker is helped to quickly know the related information of the first speaker, so as to perform a conversation more accurately.
The related information of the first speaker includes identity information, call records, past question and answer information and the like.
In step S208, the conversation between the first speaker and the second speaker can be separated in real time according to the first voiceprint information and the second voiceprint information, so as to obtain first audio information corresponding to the first speaker and second audio information corresponding to the second speaker. Because the features of the first voiceprint information and the second voiceprint information differ, and the call environment of the two speakers is simple, the real-time call information can be separated accurately: the first audio information contains no voice of the second speaker, and the second audio information contains no voice of the first speaker.
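As a concrete illustration of this separation step, the sketch below assigns each audio segment to whichever reference voiceprint its embedding is closer to, by cosine similarity. This nearest-neighbour scheme and all names in it are assumptions made for exposition; the application does not prescribe a particular separation algorithm.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def separate_by_voiceprint(segments, vp_first, vp_second):
    # segments: list of (embedding, audio) pairs; with two speakers who
    # do not overlap, each segment goes to the closer voiceprint.
    first_audio, second_audio = [], []
    for emb, audio in segments:
        if cosine(emb, vp_first) >= cosine(emb, vp_second):
            first_audio.append(audio)
        else:
            second_audio.append(audio)
    return first_audio, second_audio

# 2-dimensional "embeddings" keep the example self-contained.
segs = [((0.9, 0.1), "segment-a"), ((0.1, 0.9), "segment-b")]
print(separate_by_voiceprint(segs, vp_first=(1.0, 0.0), vp_second=(0.0, 1.0)))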
In step S210, the audio information is recognized into text information in order to convert audio into text, which reduces the amount of stored data, improves storage efficiency, and facilitates later review.
In step S212, the key information includes, but is not limited to, verbal expressions, knowledge base question-answer pairs, dirty words, sensitive words, etc.
In step S214, through recognition and extraction of the key information, the question text information can be generated quickly, helping the second speaker ask questions promptly and guide the first speaker to provide relevant information; this improves conversation efficiency and avoids problems such as low communication efficiency caused by the second speaker's mistakes or difficulty in formulating questions.
In step S216, the related information of the first speaker can be updated according to the answer information provided by the first speaker, yielding a more accurate user profile of the first speaker and enabling the second speaker to ask questions or make recommendations more accurately.
Compared with voice communication methods in the related art, the above method can quickly acquire a customer's historical information and extract key information, which improves conversation efficiency, and it further profiles the first speaker to obtain more accurate information.
Fig. 3 is a flowchart (two) of a voice communication method according to an embodiment of the present application. As shown in fig. 3, in the voice communication method, recognizing the first audio information to obtain the first text information and recognizing the second audio information to obtain the second text information includes:
step S302, generating a first time stamp corresponding to the first text information and a second time stamp corresponding to the second text information;
and step S304, assembling the first text information and the second text information according to the relative time sequence of the first timestamp and the second timestamp to form a first dialogue log.
In this embodiment, in the process of recognition, the time information of the real-time call information is extracted, and several audio segments are obtained. The first audio information comprises several first audio segments, and the second audio information comprises several second audio segments. Within the whole real-time call, the start time and end time of each audio segment differ; that is, the audio segments stand in a determined time relationship.
Correspondingly, the first text information comprises several first text segments corresponding to the first audio segments, and the second text information comprises several second text segments corresponding to the second audio segments. Each text segment has a corresponding timestamp, including a start time and an end time, so the relative time order of the text segments can be judged by comparing the timestamps of different segments, and a first dialogue log corresponding to the real-time call information and labeling the first speaker and the second speaker is generated.
In this embodiment, determining the relative time order of the text information by the timestamps and forming the first dialogue log helps the second speaker quickly trace the key points of the conversation while it is in progress.
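A minimal sketch of this assembly step: given two lists of timestamped text segments, sorting by start time recovers the conversation order. The data layout below is an assumption made for illustration only.

def assemble_dialog_log(first_segments, second_segments):
    # Each segment is (start_time, end_time, text); because the speakers
    # talk in turn, sorting by start time yields the dialogue order (S304).
    labeled = ([("first speaker", s) for s in first_segments]
               + [("second speaker", s) for s in second_segments])
    labeled.sort(key=lambda item: item[1][0])
    return ["[%.1f-%.1f] %s: %s" % (s, e, who, text)
            for who, (s, e, text) in labeled]

first = [(0.0, 2.1, "Hello, I have a question about my bill."),
         (5.3, 6.0, "Yes, last month.")]
second = [(2.4, 5.0, "Of course. Is this about last month's bill?")]
for line in assemble_dialog_log(first, second):
    print(line)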
Fig. 4 is a flowchart (three) of a voice communication method according to an embodiment of the present application. As shown in fig. 4, after updating the related information of the first speaker, the voice communication method further includes:
step S402, generating call recording information of the first speaker and the second speaker;
step S404, separating the call recording information to obtain third audio information and fourth audio information, wherein the third audio information is the voice information of the first speaker, and the fourth audio information is the voice information of the second speaker;
step S406, identifying third audio information to obtain third text information, and identifying fourth audio information to obtain fourth text information;
step S408, searching a database, and judging whether the third text information and the fourth text information comprise sensitive information;
step S410, under the condition that the third text information comprises sensitive information, labeling the third text information and updating the related information of the first speaker; and/or
in the event that the fourth text information includes sensitive information, labeling the fourth text information and generating warning information associated with the second speaker.
In this embodiment, after the call ends, the call recording information is generated; the same separation and recognition operations as for the real-time voice information are then performed on it, so that the first speaker and the second speaker can be analyzed and reviewed afterwards.
In steps S408 to S410, the sensitive information includes, but is not limited to, dirty words and sensitive words (e.g., words related to illegal activity).
If the third text information of the first speaker includes a sensitive word, the first speaker is labeled, for example by updating the first speaker's character, temperament, and so on, or even labeling the first speaker as a dangerous customer and reporting to the relevant authority for examination in time.
If the fourth text information of the second speaker includes a sensitive word, the second speaker is labeled and warned, in order to correct the second speaker's language habits and prevent the sensitive word from being spoken again in conversations with customers.
In this embodiment, the call recording information is separated and recognized as speech, which makes it convenient for a supervisor to perform quality inspection on the call information and reduces business risk.
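The following sketch shows the branching of steps S408 to S410 on a finished recording: sensitive content on the customer side updates the customer's labels, while sensitive content on the agent side produces a warning. The word list and data structures are illustrative assumptions.

SENSITIVE = {"scam", "threat"}     # stand-in for the sensitive-word database

def quality_check(third_text, fourth_text):
    labels, warnings = [], []
    for sentence in third_text:    # first speaker (customer) side
        if any(w in sentence for w in SENSITIVE):
            labels.append(sentence)    # S410: label, update customer info
    for sentence in fourth_text:   # second speaker (agent) side
        if any(w in sentence for w in SENSITIVE):
            warnings.append("agent used sensitive wording: %r" % sentence)
    return labels, warnings

print(quality_check(["I think this is a scam"], ["there is no threat here"]))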
Fig. 5 is a flowchart (four) of a voice communication method according to an embodiment of the present application. As shown in fig. 5, recognizing the third audio information to obtain the third text information and recognizing the fourth audio information to obtain the fourth text information includes:
step S502, generating a third time stamp corresponding to the third text information and a fourth time stamp corresponding to the fourth text information;
and step S504, assembling the third text information and the fourth text information according to the relative time sequence of the third timestamp and the fourth timestamp to form a second dialogue log.
In this embodiment, in the process of recognition, the time information of the call recording information is extracted, and several audio segments are obtained. The third audio information comprises several third audio segments, and the fourth audio information comprises several fourth audio segments. Within the whole recording, the start time and end time of each audio segment differ; that is, the audio segments stand in a determined time relationship.
Correspondingly, the third text information comprises several third text segments corresponding to the third audio segments, and the fourth text information comprises several fourth text segments corresponding to the fourth audio segments. Each text segment has a corresponding timestamp, including a start time and an end time, so the relative time order of the text segments can be judged by comparing the timestamps of different segments, and a second dialogue log corresponding to the call recording information and labeling the first speaker and the second speaker is generated.
In this embodiment, determining the relative time order of the text information by the timestamps and forming the second dialogue log helps the second speaker's supervisor perform quality inspection on the dialogue information.
Fig. 6 is a flowchart (five) of a voice communication method according to an embodiment of the present application. As shown in fig. 6, after identifying the first voiceprint information and determining whether the first speaker is a registered user, the voice communication method further includes:
step S602, under the condition that the first speaker is an unregistered user, acquiring a first voiceprint feature of first voiceprint information;
step S604, acquiring related information of the first voice person, and binding the related information with the first voiceprint feature;
step S606, after the relevant information is bound with the first voiceprint feature, the first speaker is marked as a registered user.
In this embodiment, in the registration step for an unregistered user, the first voiceprint feature of the unregistered user is extracted and bound to the related information of the first speaker acquired in the subsequent call, and the user is registered, so that in the next call the registered user can be quickly retrieved from the database for the second speaker to browse.
In this embodiment, the automatic extraction and registration process reduces the second speaker's workload and improves the accuracy of the information, so that an accurate user profile is obtained.
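A minimal sketch of the registration flow of steps S602 to S606, assuming the registry is a simple mapping from a hashable voiceprint feature to a profile; a real system would match voiceprint embeddings by similarity rather than by exact key.

def register_first_speaker(registry, voiceprint_feature, related_info):
    profile = dict(related_info)   # S604: bind related info to the feature
    profile["registered"] = True   # S606: mark the speaker as registered
    registry[voiceprint_feature] = profile
    return profile

registry = {}
register_first_speaker(registry, ("vp", 0.42), {"name": "new caller"})
print(registry)    # next call: look the profile up by voiceprint feature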
Fig. 7 is a flowchart (six) of a voice communication method according to an embodiment of the present application. As shown in fig. 7, after recognizing the first audio information to obtain the first text information and recognizing the second audio information to obtain the second text information, the voice communication method further includes:
step S702, judging whether the first text information and the second text information comprise sensitive information;
step S704, under the condition that the first text information comprises sensitive information, generating and displaying suggested text information so that the second speaker provides a suggestion to the first speaker; and/or
under the condition that the second text information comprises sensitive information, cutting off the first call connection between the first speaker and the second speaker, and establishing a second call connection between the first speaker and a third speaker.
The third speaker is another customer service agent.
In this embodiment, if a sensitive word appears in the first speaker's voice, the second speaker provides a suggestion in a tactful manner to keep the first speaker from continuing to use the sensitive word. If a sensitive word appears in the second speaker's voice, the call is handed over so that the third speaker communicates with the first speaker, in order to reduce business risk.
Specifically, if the second speaker speaks a dirty word, then, to prevent the subsequent conversation from turning into a quarrel or abuse and to soothe the first speaker's mood, the second speaker needs to be replaced and the third speaker continues the call; the third speaker can obtain the first dialogue log of the first and second speakers to learn the content of the conversation, the cause of the related problem, and so on, and thereby complete the conversation with the first speaker.
Correspondingly, the third speaker has a third voiceprint feature.
When the real-time call information of the third speaker and the first speaker is separated, the first audio information and the fifth audio information can be obtained, wherein the first audio information corresponds to the first speaker, and the fifth audio information corresponds to the third speaker.
When performing speech recognition on the first audio information, the second audio information, and the fifth audio information, the first text information (including the first timestamp), the second text information (the second timestamp), and the fifth text information (the fifth timestamp) may be obtained, and the first text information, the second text information, and the fifth text information may be assembled into the third dialog log.
When the call recording information of the second speaker and the first speaker and the call recording information of the third speaker and the first speaker are separated, third audio information, fourth audio information and sixth audio information can be obtained, wherein the third audio information corresponds to the first speaker, the fourth audio information corresponds to the second speaker, and the sixth audio information corresponds to the third speaker.
When performing voice recognition on the third audio information, the fourth audio information, and the sixth audio information, the third text information (including the third timestamp), the fourth text information (the fourth timestamp), and the sixth text information (the sixth timestamp) may be obtained, and the third text information, the fourth text information, and the sixth text information may be assembled into a fourth dialog log.
In this embodiment, the dialogue can be monitored in real time, preventing problems that increase business risk, such as quarrels and abuse.
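The escalation logic of steps S702 to S704 might look like the sketch below, where the call state is reduced to a dictionary; in a real deployment the transfer would drive telephony signalling instead. All names here are assumptions.

def monitor_turn(speaker, text, sensitive, call):
    if not any(w in text for w in sensitive):
        return
    if speaker == "first":
        # Customer used a sensitive word: prompt the agent with a tactful
        # suggestion rather than interrupting the call (S704, first branch).
        call["suggestion"] = "Please steer the customer back politely."
    else:
        # Agent used a sensitive word: cut the first connection and connect
        # the customer to a third speaker (S704, second branch), handing
        # over the dialogue log so the new agent has the context.
        call["connection"] = "second (third speaker)"
        call["handover_log"] = list(call.get("dialog_log", []))

call = {"connection": "first", "dialog_log": ["..."]}
monitor_turn("second", "that sounds like a scam", {"scam"}, call)
print(call)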
Fig. 8 is a block diagram of a voice communication system according to an embodiment of the present application. As shown in fig. 8, the voice communication system includes:
the intelligent phone unit 810 is configured to obtain first voiceprint information of a first speaker and second voiceprint information of a second speaker;
the business service unit 820, communicatively connected to the smart phone unit 810, is configured to identify the first voiceprint information, determine whether the first speaker is a registered user, and obtain the related information of the first speaker for reference by the second speaker when the first speaker is a registered user;
the role separation unit 830, communicatively connected to the business service unit 820, is configured to separate real-time call information between the first speaker and the second speaker to obtain first audio information and second audio information, where the first audio information is the voice information of the first speaker and the second audio information is the voice information of the second speaker;
the voice recognition unit 840, communicatively connected to the business service unit 820, is configured to recognize the first audio information to obtain the first text information and to recognize the second audio information to obtain the second text information;
the telephone traffic quality inspection unit 850, communicatively connected to the business service unit 820, is configured to determine whether the first text information and the second text information include key information and, when the first text information and/or the second text information include the key information, to generate question text information so that the second speaker asks the first speaker a question and acquires answer information corresponding to the question text information;
the smart phone unit 810 is further configured to display the related information of the first speaker and the question text information, and the business service unit 820 is further configured to update the related information of the first speaker when the answer information is acquired.
In some of these embodiments, the voice recognition unit 840 is also communicatively coupled to the role separation unit 830.
In some embodiments, traffic quality inspection unit 850 is also communicatively coupled to voice recognition unit 840.
In some of these embodiments, the voice communication system further includes a storage unit communicatively coupled to at least the business service unit 820.
In some embodiments, the storage unit is further communicatively coupled to the role separation unit 830, the voice recognition unit 840, and the traffic quality inspection unit 850.
In some embodiments, the business service unit 820 is further configured to generate call recording information of the first speaker and the second speaker;
the role separation unit 830 is further configured to separate the call recording information to obtain third audio information and fourth audio information, where the third audio information is the voice information of the first speaker and the fourth audio information is the voice information of the second speaker;
the voice recognition unit 840 is further configured to recognize the third audio information to obtain the third text information, and to recognize the fourth audio information to obtain the fourth text information;
the telephone traffic quality inspection unit 850 is further configured to retrieve the database and determine whether the third text information and the fourth text information include sensitive information;
the business service unit 820 is further configured to label the third text information and update the related information of the first speaker when the third text information includes the sensitive information; and/or to label the fourth text information and generate warning information related to the second speaker when the fourth text information includes the sensitive information.
In some of these embodiments, the voice communication system further comprises a business report unit and a data statistics unit.
The business report unit is communicatively connected to the business service unit and the storage unit, respectively, and is used to generate business reports; the data statistics unit is communicatively connected to the business service unit and the storage unit, respectively, and is used to perform data statistics.
The voice communication system corresponds to the voice communication method in the above embodiments, and the technical effects thereof are as described above and are not described herein again.
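To make the data flow between the five units of fig. 8 concrete, here is a sketch in which each unit is reduced to a callable supplied by the caller; the application fixes only the flow of information between units, not their implementations, and all names below are illustrative.

class VoiceCommSystem:
    def __init__(self, phone, business, separator, recognizer, inspector):
        self.phone = phone            # intelligent telephone unit 810
        self.business = business      # business service unit 820
        self.separator = separator    # role separation unit 830
        self.recognizer = recognizer  # voice recognition unit 840
        self.inspector = inspector    # traffic quality inspection unit 850

    def process(self, call):
        vp1, vp2 = self.phone(call)                     # voiceprints
        profile = self.business(vp1)                    # registered-user lookup
        audio1, audio2 = self.separator(call, vp1, vp2)
        text1, text2 = self.recognizer(audio1), self.recognizer(audio2)
        question = self.inspector(text1, text2)         # key-info check
        return profile, question

system = VoiceCommSystem(
    phone=lambda call: ("vp1", "vp2"),
    business=lambda vp: {"name": "demo customer"},
    separator=lambda call, a, b: (["hello"], ["hi"]),
    recognizer=lambda audio: " ".join(audio),
    inspector=lambda t1, t2: "suggested question" if "hello" in t1 else None,
)
print(system.process(call="raw-audio"))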
In addition, the voice communication method of the embodiment of the present application may be implemented by a computer device. Components of the computer device may include, but are not limited to, a processor and a memory storing computer program instructions.
In some embodiments, the processor may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
In some embodiments, the memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is Non-Volatile memory. In particular embodiments, the memory includes Read-Only Memory (ROM) and Random Access Memory (RAM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or FLASH memory, or a combination of two or more of these. The RAM may be Static Random-Access Memory (SRAM) or Dynamic Random-Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPM DRAM), Extended Data Out DRAM (EDO DRAM), Synchronous DRAM (SDRAM), and the like.
The memory may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by the processor.
The processor implements any of the voice communication methods in the above embodiments by reading and executing computer program instructions stored in the memory.
In some of these embodiments, the computer device may also include a communication interface and a bus. The processor, the memory and the communication interface are connected through a bus and complete mutual communication.
The communication interface is used to implement communication among the modules, devices, units and/or equipment in the embodiments of the present application. The communication interface can also carry out data communication with other components such as external devices, image/data acquisition equipment, databases, external storage, and image/data processing workstations.
A bus comprises hardware, software, or both that couple the components of a computer device to one another. The bus includes, but is not limited to, at least one of the following: a Data Bus, an Address Bus, a Control Bus, an Expansion Bus, and a Local Bus. By way of example, and not limitation, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. A bus may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
In addition, in combination with the voice communication method in the foregoing embodiments, the embodiments of the present application may provide a computer-readable storage medium to implement. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the voice communication methods in the above embodiments.
Fig. 9 is a block diagram of a voice communication system and method according to an embodiment of the present application. As shown in fig. 9, the voice communication system includes a smart phone (corresponding to the smart phone unit 810), a business service (corresponding to the business service unit 820), role separation (corresponding to the role separation unit 830), voice recognition (corresponding to the voice recognition unit 840), and traffic quality inspection (corresponding to the traffic quality inspection unit 850).
In this embodiment, quality control is performed only on call record information.
In this embodiment, the functions of the smart phone are: providing call-making and call-answering functions; providing editing and updating of customer name cards; providing call-record keeping and call-recording functions; providing task management for incoming and outgoing calls on the business side; and providing terminal equipment state management and message processing.
In this embodiment, the functions of the business service are: managing and synchronizing organization, project, and customer information; providing management and monitoring of terminal phones, including enabling/disabling of phones, attribution allocation, equipment state monitoring, and the like; managing and updating customer information and call records; managing the flow of role separation and voice recognition; and providing other business functions such as script templates, a knowledge base, sensitive words, and the like.
In this embodiment, the function of role separation is to separate a recording of the speakers' voices by role; after separation, the audio of each single speaker is obtained. The role separation service automatically cuts speech into segments according to speaker identity; it mainly solves the problem of "who spoke when", i.e., determining the positions in the voice stream at which each speaker is talking. This scheme mainly targets the situation in which multiple speakers do not speak at the same time, i.e., the recording is separated by role when there is little or no overlapping speech.
In this embodiment, the function of speech recognition is to send the separated audio files to a speech recognition engine for recognition and transcription into recognized text. Each recognition result also carries a relative timestamp, whose magnitude reflects the relative order of the recognized texts. By comparing and sorting the timestamps of the recognized texts, the texts can be assembled into a dialogue log.
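A minimal sketch of this assembly step, assuming each recognition result is a (timestamp, text) pair (the exact data layout is not specified in the patent): merging the two speakers' results and sorting by the relative timestamps yields the dialogue log.

```python
# Minimal sketch: merge two recognized transcripts into one dialogue log
# by relative timestamp. The (timestamp, text) tuple layout and the
# sample utterances are assumed for illustration.

first_speaker = [(0.0, "Hi, I'm calling about my order."),
                 (9.2, "It still hasn't arrived.")]
second_speaker = [(4.1, "Of course, may I have your order number?"),
                  (14.8, "Let me check the shipping status.")]

merged = sorted(
    [("customer", ts, text) for ts, text in first_speaker] +
    [("agent", ts, text) for ts, text in second_speaker],
    key=lambda item: item[1],  # smaller timestamp = earlier utterance
)

dialogue_log = "\n".join(f"[{ts:6.1f}s] {role}: {text}"
                         for role, ts, text in merged)
print(dialogue_log)
```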
In this embodiment, the function of the telephone traffic quality inspection is to perform sensitive-word detection on the text after speech recognition, covering profanity, sensitive words, speech-script terminology, knowledge base answers, and the like. Speech-script quality inspection is mainly realized by NLP (Natural Language Processing) technology, which analyzes semantic information from natural language and organizes it into a representation form that a machine can use, namely semantic intent. Sensitive words are configured in the background and loaded into a cache, and NLP technology is then used to perform semantic understanding and detection on the files after speech recognition.
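A hedged sketch of the literal sensitive-word pass described here: words configured in the background are loaded once into an in-memory cache, and each recognized utterance is scanned against them. The categories and word lists are invented for illustration, and a production system would layer NLP-based semantic understanding on top of this lookup.

```python
# Sketch of the literal sensitive-word check; categories and word lists
# below are invented for illustration, not taken from the patent.

SENSITIVE_CACHE = {
    "profanity": {"damn"},
    "compliance": {"guaranteed returns", "risk-free"},
}

def detect_sensitive(text: str) -> list[tuple[str, str]]:
    """Return (category, word) hits found in a recognized utterance."""
    lowered = text.lower()
    return [(category, word)
            for category, words in SENSITIVE_CACHE.items()
            for word in words
            if word in lowered]

hits = detect_sensitive("This investment offers guaranteed returns.")
print(hits)  # [('compliance', 'guaranteed returns')]
```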
In addition, the voice communication system also includes business report and data statistics functions.
The functions of the business report are as follows: data generated each day are recorded in the database in real time; database scripts are written according to business requirements; and through operations such as data cleaning and screening, various business reports on the system data can be provided, with visual data reports viewable online and in real time in the management system. Multi-dimensional specialized statistics are performed on the business data and rendered for display at the front end, providing data support for statistical analysis.
The functions of data statistics are as follows: for the system's basic data storage records, data statistics scripts are written according to business monitoring requirements; the data are cleaned and screened; and statistics on the basic data are computed by a timed task, with the statistical results refreshed periodically, providing the client with overall statistics on the system's operating condition.
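As an illustration of such a timed task (the patent does not specify a scheduler), the Python standard-library sched module can rerun a statistics refresh at a fixed interval; the interval and the refresh body are assumptions.

```python
# Sketch of the periodic statistics refresh; the scheduler choice and
# the refresh body are assumptions, not taken from the patent.
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)
REFRESH_INTERVAL = 3600  # refresh hourly (illustrative value)

def refresh_statistics():
    # Placeholder for cleaning, screening, and aggregating base data.
    print("statistics refreshed at", time.strftime("%H:%M:%S"))
    scheduler.enter(REFRESH_INTERVAL, 1, refresh_statistics)  # reschedule

scheduler.enter(REFRESH_INTERVAL, 1, refresh_statistics)
scheduler.run()  # blocks; a real service would run this in a worker
```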
The present application has the following advantages:
during the conversation, customer information is collected and updated in real time, and more valuable information is extracted in a timely manner;
the collected customer information is processed directly online to depict the user portrait;
through the statistical analysis of the online platform, customer tracking and service return visits are supported and can be effectively applied to subsequent business scenarios in time;
and with real-time online data reports, data statistics are faster and more accurate.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of voice communication, comprising:
acquiring first voiceprint information of a first speaker and second voiceprint information of a second speaker;
identifying the first voiceprint information, and judging whether the first speaker is a registered user;
under the condition that the first speaker is a registered user, acquiring and displaying related information of the first speaker for reference by the second speaker;
separating real-time call information between the first speaker and the second speaker according to the first voiceprint information and the second voiceprint information to obtain first audio information and second audio information, wherein the first audio information is the voice information of the first speaker, and the second audio information is the voice information of the second speaker;
identifying the first audio information to obtain first text information, identifying the second audio information to obtain second text information;
judging whether the first text information and the second text information comprise key information or not;
under the condition that the first text information and/or the second text information comprises the key information, generating and displaying question text information, so that the second speaker asks the first speaker a question and acquires answer information corresponding to the question text information;
and updating the related information of the first speaker under the condition that the answer information is acquired.
2. The method of claim 1, wherein identifying the first audio information to obtain first text information and identifying the second audio information to obtain second text information comprises:
generating a first timestamp corresponding to the first text information and a second timestamp corresponding to the second text information;
and assembling the first text information and the second text information to form a first dialogue log according to the relative time sequence of the first timestamp and the second timestamp.
3. The voice communication method according to claim 1, wherein after updating the related information of the first speaker, the method further comprises:
generating call recording information of the first speaker and the second speaker;
separating the call recording information to obtain third audio information and fourth audio information, wherein the third audio information is the voice information of the first speaker, and the fourth audio information is the voice information of the second speaker;
identifying the third audio information to obtain third text information, and identifying the fourth audio information to obtain fourth text information;
searching a database, and judging whether the third text information and the fourth text information comprise sensitive information;
under the condition that the third text information comprises the sensitive information, labeling the third text information and updating the related information of the first speaker; and/or
under the condition that the fourth text information comprises the sensitive information, labeling the fourth text information and generating warning information related to the second speaker.
4. The method of claim 3, wherein identifying the third audio information to obtain third text information and identifying the fourth audio information to obtain fourth text information comprises:
generating a third timestamp corresponding to the third text information and a fourth timestamp corresponding to the fourth text information;
and assembling the third text information and the fourth text information to form a second dialogue log according to the relative time sequence of the third timestamp and the fourth timestamp.
5. The voice communication method according to claim 1, wherein after recognizing the first voiceprint information and determining whether the first speaker is a registered user, the method further comprises:
under the condition that the first speaker is an unregistered user, acquiring a first voiceprint feature of the first voiceprint information;
acquiring related information of the first speaker, and binding the related information with the first voiceprint feature;
and after the related information is bound with the first voiceprint feature, marking the first speaker as a registered user.
6. The method of claim 1, wherein after identifying the first audio information to obtain first text information and identifying the second audio information to obtain second text information, the method further comprises:
judging whether the first text information and the second text information comprise sensitive information or not;
under the condition that the first text information comprises sensitive information, generating and displaying suggestion text information so that the second speaker provides suggestions to the first speaker; and/or
under the condition that the second text information comprises sensitive information, cutting off a first call connection between the first speaker and the second speaker, and establishing a second call connection between the first speaker and a third speaker.
7. A voice communication system, comprising:
the intelligent telephone unit is used for acquiring first voiceprint information of a first speaker and second voiceprint information of a second speaker;
the business service unit is configured to identify the first voiceprint information, determine whether the first speaker is a registered user, and obtain, when the first speaker is a registered user, related information of the first speaker for reference by the second speaker;
the role separation unit is used for separating real-time call information between the first speaker and the second speaker to obtain first audio information and second audio information, wherein the first audio information is the voice information of the first speaker, and the second audio information is the voice information of the second speaker;
the voice recognition unit is used for recognizing the first audio information to obtain first text information and recognizing the second audio information to obtain second text information;
the telephone traffic quality inspection unit is used for judging whether the first text information and the second text information comprise key information, and generating question text information under the condition that the first text information and/or the second text information comprises the key information, so that the second speaker asks the first speaker a question and acquires answer information corresponding to the question text information;
the intelligent telephone unit is further configured to display the related information of the first speaker and the question text information, and the business service unit is further configured to update the related information of the first speaker when the answer information is acquired.
8. The voice communication system according to claim 7, wherein the business service unit is further configured to generate call recording information of the first speaker and the second speaker;
the role separation unit is further configured to separate the call recording information to obtain third audio information and fourth audio information, wherein the third audio information is the voice information of the first speaker, and the fourth audio information is the voice information of the second speaker;
the voice recognition unit is further used for recognizing the third audio information to obtain third text information and recognizing the fourth audio information to obtain fourth text information;
the telephone traffic quality inspection unit is further configured to search a database and judge whether the third text information and the fourth text information comprise sensitive information;
the business service unit is further configured to, in the case that the third text information comprises the sensitive information, label the third text information and update the related information of the first speaker; and/or, in the case that the fourth text information comprises the sensitive information, label the fourth text information and generate warning information related to the second speaker.
9. A computer device, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor to cause the at least one processor to perform the voice communication method of any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the voice communication method according to any one of claims 1 to 6.
CN202110140501.XA 2021-02-02 2021-02-02 Voice communication method, system, equipment and storage medium Pending CN112927699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110140501.XA CN112927699A (en) 2021-02-02 2021-02-02 Voice communication method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112927699A (en)

Family

ID=76169432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110140501.XA Pending CN112927699A (en) 2021-02-02 2021-02-02 Voice communication method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112927699A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103701999A (en) * 2012-09-27 2014-04-02 中国电信股份有限公司 Method and system for monitoring voice communication of call center
CN109658923A (en) * 2018-10-19 2019-04-19 平安科技(深圳)有限公司 Voice quality detecting method, equipment, storage medium and device based on artificial intelligence
CN109842712A (en) * 2019-03-12 2019-06-04 贵州财富之舟科技有限公司 Method, apparatus, computer equipment and the storage medium that message registration generates
CN110809095A (en) * 2019-10-25 2020-02-18 大唐网络有限公司 Method and device for voice call-out
CN111314566A (en) * 2020-01-20 2020-06-19 北京神州泰岳智能数据技术有限公司 Voice quality inspection method, device and system
CN111933151A (en) * 2020-08-16 2020-11-13 云知声智能科技股份有限公司 Method, device and equipment for processing call data and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination