CN114003699A - Method and device for matching dialect, electronic equipment and storage medium - Google Patents

Method and device for matching dialect, electronic equipment and storage medium Download PDF

Info

Publication number
CN114003699A
CN114003699A (application CN202111106211.XA)
Authority
CN
China
Prior art keywords
information
recommended
dialect
historical
keywords
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111106211.XA
Other languages
Chinese (zh)
Inventor
蒋广珍
向林
白金蓬
黎清顾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202111106211.XA priority Critical patent/CN114003699A/en
Publication of CN114003699A publication Critical patent/CN114003699A/en
Pending legal-status Critical Current

Classifications

    • G06F16/3329 Natural language query formulation or dialogue systems (G06F16/00 Information retrieval → G06F16/30 Unstructured textual data → G06F16/33 Querying → G06F16/332 Query formulation)
    • G06F16/3343 Query execution using phonetics (G06F16/3331 Query processing → G06F16/334 Query execution)
    • G06F16/3344 Query execution using natural language analysis
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking (G06F40/00 Handling natural language data → G06F40/20 Natural language analysis → G06F40/279 Recognition of textual entities)
    • G10L15/26 Speech-to-text systems (G10L15/00 Speech recognition)
    • G10L25/51 Speech or voice analysis specially adapted for comparison or discrimination (G10L25/00 → G10L25/48 adapted for particular use)
    • G10L25/63 Speech or voice analysis for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The method comprises: obtaining the historical chat records exchanged with a target chat object within a first preset time period, and screening out the historical information sent by the target chat object; identifying the historical information and extracting keywords from it; determining the interval duration from the sending time of the last record in the historical chat records to the current time; and selecting, from a pre-established script library, a recommended conversational script that matches the keywords and the interval duration, and pushing the recommended script. The method can accurately match a suitable script according to the keywords mentioned by the chat object and the time elapsed since the last chat, avoid sending topics the chat object finds sensitive or uninteresting, remove the communication barrier between the two parties, and head off an emotional crisis in advance.

Description

Method and device for matching conversational scripts, electronic equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for matching conversational scripts (recommended chat phrasings), an electronic device, and a storage medium.
Background
Currently, with the rise of social networks, people's emotional communication is increasingly virtualized. People often express their emotions through chat software and social networks rather than face-to-face conversation. During a chat, the chat object inevitably experiences negative emotions at times; if the user cannot accurately perceive the chat object's emotional state and carelessly uses sensitive words, the negative emotion deepens, communication breaks down, and the relationship between the two parties deteriorates.
In addition, the two parties may be out of contact for a long time, lack chat topics, or raise topics the other party is not interested in, so the relationship grows distant, communication barriers appear, and problems such as an emotional crisis follow.
Disclosure of Invention
To solve the above problems, the present application provides a script matching method and apparatus, an electronic device, and a storage medium, addressing the prior-art technical problem that communication on social platforms breaks down because of inappropriate wording or a lack of suitable chat topics.
In a first aspect, the present application provides a script matching method, including:
acquiring the historical chat records exchanged with a target chat object within a first preset time period, and screening out the historical information sent by the target chat object;
identifying the historical information, and extracting keywords from the historical information;
determining the interval duration from the sending time of the last record in the historical chat records to the current time;
and selecting, from a pre-established script library, a recommended script matching the keywords and the interval duration, and pushing the recommended script.
According to an embodiment of the application, optionally, in the above script matching method, the historical information includes at least one of text information, voice information, and link information.
According to an embodiment of the present application, optionally, in the above script matching method, the historical information at least includes voice information;
identifying the historical information and extracting its keywords includes the following steps:
processing the voice information in the historical information to convert it into corresponding text information;
and identifying the text information converted from the voice information to extract the keywords in the voice information.
According to an embodiment of the present application, optionally, in the above script matching method, before the step of processing the voice information in the historical information to convert it into corresponding text information, the method further includes:
recognizing the voice information to extract the voice features in the voice information;
and determining a first emotional state of the target chat object according to the voice features in the voice information.
According to an embodiment of the application, optionally, in the above script matching method, the voice features include at least one of tone, speech speed, and intonation.
According to an embodiment of the present application, optionally, in the above script matching method, selecting a recommended script matching the keywords and the interval duration from a pre-established script library includes the following steps:
determining a second emotional state and interest characteristics of the target chat object according to the keywords;
and selecting, from the pre-established script library, a recommended script matching the first emotional state, the second emotional state, the interest characteristics, and the interval duration of the target chat object.
According to an embodiment of the present application, optionally, in the above script matching method, processing the voice information in the historical information to convert it into corresponding text information includes the following steps:
performing noise reduction on the voice information;
performing endpoint detection on the noise-reduced voice information to obtain the effective voice segments in it;
framing the effective voice segments to obtain a plurality of audio frames corresponding to the voice information;
sequentially inputting each audio frame into a pre-trained recognition model to obtain the text corresponding to each audio frame;
wherein the texts corresponding to the plurality of audio frames together form the text information corresponding to the voice information.
According to an embodiment of the present application, optionally, in the above script matching method, after pushing the recommended script, the method further includes:
when a sending instruction corresponding to the recommended script is received, sending the recommended script to the target chat object;
and when a modification instruction corresponding to the recommended script is received, modifying the recommended script according to the modification instruction to obtain a modified recommended script.
According to an embodiment of the application, optionally, in the above script matching method, after the step of modifying the recommended script according to the modification instruction to obtain the modified recommended script, the method further includes:
acquiring the emotion information of the recommended script before modification and after modification, and comparing the two;
and when the emotion information of the recommended script before modification is inconsistent with that after modification, refusing to execute a subsequently received sending instruction for the modified recommended script.
According to an embodiment of the present application, optionally, in the above script matching method, the script library is created by:
creating an initial script library;
acquiring related information of the target chat object, where the related information includes background information of the target chat object and the dynamics the target chat object published on a target social platform within a second preset time period;
acquiring hot news within a third preset time period;
and updating the initial script library according to the related information and the hot news to obtain an optimized script library.
In a second aspect, the present application provides a script matching apparatus, comprising:
an acquisition module, configured to acquire the historical chat records exchanged with a target chat object within a first preset time period and screen out the historical information sent by the target chat object;
an extraction module, configured to identify the historical information and extract the keywords in it;
a confirmation module, configured to determine the interval duration from the sending time of the last record in the historical chat records to the current time;
and a matching module, configured to select, from a pre-established script library, a recommended script matching the keywords and the interval duration, and push the recommended script.
In a third aspect, the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the script matching method of any one of the first aspect.
In a fourth aspect, the present application provides a storage medium storing a computer program which, when executed by one or more processors, implements the script matching method of any one of the first aspect.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
the method comprises the steps of obtaining a historical chat record between a target chat object and a first preset time period, and screening out historical information sent by the target chat object; identifying the historical information, and extracting keywords in the historical information; confirming the interval duration from the sending time of the last record in the historical chat records to the current time; and selecting recommended dialogs matched with the keywords and the interval duration from a preset dialogs library according to the keywords and the interval duration, and pushing the recommended dialogs. The method can accurately match proper dialect according to the keywords and the chatting time mentioned by the chatting object, can sooth the emotion of the chatting object, open the topics interested by the chatting object, avoid the sensitive and uninteresting topics of the chatting object, eliminate the communication barrier between two parties of chatting conversation, avoid the emotional crisis in advance, and is beneficial to the establishment of the harmonious relationship between the two parties.
Drawings
The present application will be described in more detail hereinafter on the basis of embodiments and with reference to the accompanying drawings:
fig. 1 is a schematic flowchart of a script matching method according to an embodiment of the present application;
fig. 2 is another schematic flowchart of a script matching method according to an embodiment of the present application;
fig. 3 is another schematic flowchart of a script matching method according to an embodiment of the present application;
fig. 4 is a connection block diagram of a script matching apparatus according to an embodiment of the present application;
in the drawings, like parts are designated with like reference numerals, and the drawings are not drawn to scale.
Detailed Description
The following detailed description is provided with reference to the accompanying drawings and embodiments so that it can be fully understood how the technical means of the application are applied to solve the technical problems and achieve the corresponding technical effects. Provided there is no conflict, the embodiments of the present application and the individual features within them can be combined with one another, and the resulting technical solutions all fall within the scope of protection of the present application.
Example one
Referring to fig. 1, the present embodiment provides a script matching method, including:
step S110: and acquiring historical chat records between the target chat objects and the target chat objects in a first preset time period, and screening out historical information sent by the target chat objects.
The historical chat records include the historical messages sent by the user and the historical messages sent by the target chat object.
The historical chat records are stored in preset chat software installed on the smart terminal the user is currently using.
The first preset time period may be a recent time period, or a time period counted backwards from the last record in the historical chat records, and can be set as required. That is, this embodiment may be applied to a chat that the user and the target chat object are currently conducting (the first preset time period being within half a day, within 1 hour, or the like), or to a scene in which the user has not communicated with the target chat object for a long time and wants to open a topic, in which case the chat records of some past time period between the user and the target chat object are collected.
By screening out the historical information sent by the target chat object, the keywords mentioned by the target chat object can be analyzed in the subsequent steps.
The historical information includes at least one of text information, voice information, and link information.
The link information is a web page link, a music link, or the like that the target chat object shared with the user.
In some cases, the historical information also includes chat content such as pictures, videos, and files.
Step S120: identifying the historical information, and extracting the keywords in the historical information.
When the historical information includes text information, the keywords are extracted from the text information directly.
When the historical information includes link information, the keywords are extracted from the title content of the link information directly.
In this embodiment, the keywords are extracted with the TextRank keyword extraction algorithm, whose steps are as follows (a minimal implementation sketch follows this list):
(1) Split the text information and/or the title content of the link information into sentences, separated by punctuation marks, spaces, or other delimiters.
(2) Segment the sentence-split content into words, remove stop words, and perform part-of-speech tagging; keep only words of the designated parts of speech, such as nouns and verbs, discard words of other, irrelevant parts of speech, and retain the rest as candidate keywords.
(3) Build a weighted graph G = (V, E) from the candidate keywords, where V is the node set (the set of candidate keywords) and E ⊆ V × V is the edge set; an edge is constructed between two nodes/words if and only if the two words co-occur within a window of length m, with the window sliding continuously from the beginning of the text to the end.
(4) Initialize the weight/rank value of each node/word (for example to 1/N, where N is the number of nodes/words), and iterate the PageRank update formula over all nodes until convergence.
(5) Sort the final weights/rank values of all nodes/words in descending order, and select the top-ranked words (Top N) as keywords.
(6) Mark the N most important words obtained in step (5) in the original text; if some of them are adjacent, combine them into a multi-word keyword and add it to the keyword sequence.
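The following is a minimal, self-contained sketch of steps (1) to (5) in Python; the tokenizer, the stop-word list, and the window length are simplified placeholder assumptions, not the implementation used in this application.

    import re
    from collections import defaultdict

    def textrank_keywords(text, window=4, top_n=5, damping=0.85, iterations=50):
        """Minimal TextRank sketch: co-occurrence graph plus PageRank-style iteration."""
        # Steps (1)-(2): crude tokenization and stop-word removal
        # (a real system would also apply part-of-speech filtering here).
        stop_words = {"the", "a", "an", "is", "to", "and", "of", "in", "it"}
        words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop_words]

        # Step (3): build an undirected co-occurrence graph over a sliding window.
        neighbors = defaultdict(set)
        for i, w in enumerate(words):
            for v in words[i + 1:i + window]:
                if v != w:
                    neighbors[w].add(v)
                    neighbors[v].add(w)

        # Step (4): initialize every rank to 1/N, then iterate the PageRank update
        # rank(w) = (1 - d)/N + d * sum over neighbors v of rank(v) / degree(v).
        n = len(neighbors) or 1
        rank = {w: 1.0 / n for w in neighbors}
        for _ in range(iterations):
            rank = {w: (1 - damping) / n
                       + damping * sum(rank[v] / len(neighbors[v]) for v in neighbors[w])
                    for w in neighbors}

        # Step (5): descending sort, keep the Top N words as keywords.
        return sorted(rank, key=rank.get, reverse=True)[:top_n]

    print(textrank_keywords("keeping healthy habits helps health and health helps growth"))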
For example, if the target chat object likes to send health-preservation content to the user, the words related to health preservation carry high weights, so the algorithm yields keywords such as "health preservation" and "wellness". This indicates that the target chat object is interested in health preservation, and a targeted script recommendation can then be made in the subsequent recommendation process.
When the historical information at least includes voice information, step S120 includes the following steps:
S123: processing the voice information in the historical information to convert it into corresponding text information;
S125: identifying the text information converted from the voice information to extract the keywords in the voice information.
The method of extracting keywords from the converted text may be the same as the method used for text information and link information, and is not repeated here.
In step S123, processing the voice information in the historical information to convert it into corresponding text information includes the following steps:
(1) performing noise reduction on the voice information;
(2) performing endpoint detection on the noise-reduced voice information to obtain the effective voice segments in it;
(3) framing the effective voice segments to obtain a plurality of audio frames corresponding to the voice information;
(4) sequentially inputting each audio frame into a pre-trained recognition model to obtain the text corresponding to each audio frame;
wherein the texts corresponding to the plurality of audio frames together form the text information corresponding to the voice information.
Sequentially inputting each audio frame into the pre-trained recognition model to obtain the text corresponding to each audio frame includes the following steps (a pipeline sketch follows this list):
(a) extracting the features of each audio frame;
(b) sequentially feeding the features of each audio frame into the pre-trained recognition model to obtain the text corresponding to each audio frame.
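A compact sketch of this front end is shown below. The mean-subtraction "noise reduction", the energy threshold, the frame sizes, and the extract_features and recognize_frame callbacks are illustrative assumptions standing in for whatever denoiser and acoustic model a real system would use.

    import numpy as np

    FRAME_LEN = 400   # 25 ms frames at an assumed 16 kHz sample rate
    FRAME_HOP = 160   # 10 ms hop between frames

    def denoise(signal):
        # (1) Placeholder noise reduction: remove the DC offset.
        return signal - signal.mean()

    def endpoint_detect(signal, threshold=0.01):
        # (2) Crude energy-based endpoint detection: trim leading/trailing silence.
        mask = np.abs(signal) > threshold
        if not mask.any():
            return signal[:0]
        start = int(np.argmax(mask))
        end = len(mask) - int(np.argmax(mask[::-1]))
        return signal[start:end]

    def frame_signal(signal):
        # (3) Split the effective speech into overlapping fixed-length frames.
        if len(signal) < FRAME_LEN:
            return []
        return [signal[i:i + FRAME_LEN]
                for i in range(0, len(signal) - FRAME_LEN + 1, FRAME_HOP)]

    def speech_to_text(signal, extract_features, recognize_frame):
        # (4)/(a)/(b) Extract features per frame, feed them to the pre-trained
        # recognizer, and join the per-frame texts into the final text information.
        frames = frame_signal(endpoint_detect(denoise(signal)))
        return "".join(recognize_frame(extract_features(f)) for f in frames)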
In addition to extracting the keywords in the voice information, this embodiment can also identify the voice features carried in the voice information; that is, before step S123, the method further includes the following steps:
S121: recognizing the voice information to extract the voice features in it;
S122: determining a first emotional state of the target chat object according to the voice features in the voice information.
The voice features include at least one of tone, speech speed, and intonation.
That is, the first emotional state of the target chat object can be inferred from the voice features, such as the tone, speech speed, and intonation, carried in the voice information.
The first emotional state may be one of 3 categories: positive, neutral, and negative. Positive emotions are subdivided into 3 types: love, pleasure, and gratitude; negative emotions are subdivided into 5 types: complaint, anger, disgust, fear, and sadness.
Specifically, a recognition model may be trained to recognize the first emotional state of the target chat object from the voice features in the voice information.
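As a sketch of how such a classifier could be assembled (the three hand-picked features, the scikit-learn model, and the random placeholder training data are assumptions for illustration, not a trained emotion model):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def voice_features(signal, sample_rate=16000):
        # Toy proxies for tone, speech speed, and intonation.
        energy = float(np.mean(signal ** 2))                     # loudness, a proxy for tone
        zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))))   # rough pitch/intonation proxy
        duration = len(signal) / sample_rate                     # proxy related to speech speed
        return [energy, zcr, duration]

    # Labels: 0 = negative, 1 = neutral, 2 = positive (placeholder training set).
    X_train = np.random.rand(30, 3)
    y_train = np.random.randint(0, 3, 30)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    first_emotional_state = model.predict([voice_features(np.random.randn(16000))])[0]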
Step S130: determining the interval duration from the sending time of the last record in the historical chat records to the current time.
To avoid problems such as an emotional crisis caused by the two parties not messaging each other for a long time, having no chat topics, or raising topics the other party is not interested in, this embodiment also obtains the interval duration from the sending time of the last record in the historical chat records to the current time, that is, how long the user has gone without receiving or sending a message.
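Determining the interval duration is a simple timestamp subtraction; a sketch follows, where the sent_at field name is an assumption:

    from datetime import datetime

    def interval_duration(chat_records):
        # Seconds between the newest record's send time and the current time;
        # assumes a non-empty list of records, each a dict with a `sent_at` datetime.
        last_sent = max(record["sent_at"] for record in chat_records)
        return (datetime.now() - last_sent).total_seconds()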
Step S140: selecting, from a pre-established script library, a recommended script matching the keywords and the interval duration, and pushing the recommended script.
The recommended script is pushed by sending a push message on the smart terminal.
When the historical information includes voice information, selecting a recommended script matching the keywords and the interval duration in step S140 includes the following steps:
S142: determining a second emotional state and interest characteristics of the target chat object according to the keywords;
S144: selecting, from the pre-established script library, a recommended script matching the first emotional state, the second emotional state, the interest characteristics, and the interval duration of the target chat object.
The keywords here come from the voice information, the text information, and the link information, and the second emotional state and the interest characteristics of the target chat object can be determined from them.
That is, an emotional-tendency analysis can be performed on the target chat object based on both the keywords and the voice features.
When the historical information does not include voice information, selecting a recommended script matching the keywords and the interval duration in step S140 includes the following steps:
S146: determining a second emotional state and interest characteristics of the target chat object according to the keywords;
S148: selecting, from the pre-established script library, a recommended script matching the second emotional state, the interest characteristics, and the interval duration of the target chat object.
Illustratively, if the keywords "like" and "fishing" are extracted from the historical information, the target chat object likes fishing. When the interval duration is one day, the matched recommended script may be "Did you go fishing today?"; when the interval duration is one week, it may be "Did you go fishing this week?"; and when the interval duration is one month, it may be "Did you go fishing this month?".
In this case the keywords come from the text information and the link information, from which the second emotional state and the interest characteristics of the target chat object can likewise be determined.
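A minimal sketch of this keyword-plus-interval lookup follows; the library layout and the interval buckets are assumptions for illustration:

    from datetime import timedelta

    # Assumed pre-established script library: interest -> interval bucket -> script.
    SCRIPT_LIBRARY = {
        "fishing": {
            "day": "Did you go fishing today?",
            "week": "Did you go fishing this week?",
            "month": "Did you go fishing this month?",
        },
    }

    def match_script(keywords, interval):
        # Map the interval duration to a coarse bucket (thresholds assumed).
        if interval < timedelta(days=2):
            bucket = "day"
        elif interval < timedelta(days=8):
            bucket = "week"
        else:
            bucket = "month"
        for keyword in keywords:
            if keyword in SCRIPT_LIBRARY:
                return SCRIPT_LIBRARY[keyword][bucket]
        return None  # caller falls back to a generic greeting script

    print(match_script({"like", "fishing"}, timedelta(days=7)))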
In this embodiment, the script library is established by the following steps (an update-flow sketch follows this passage):
(1) creating an initial script library;
(2) acquiring related information of the target chat object, where the related information includes background information of the target chat object and the dynamics the target chat object published on a target social platform within a second preset time period;
(3) acquiring hot news within a third preset time period;
(4) updating the initial script library according to the related information and the hot news to obtain an optimized script library.
The initial script library includes scripts that can soothe the target chat object's negative emotions, such as complaint, anger, disgust, fear, and sadness, as well as conventional scripts for greetings.
The background information of the target chat object includes personal and social attributes such as age, gender, location, education level, marital status, and hobbies. Based on this information, new topics can be created and the initial script library updated. The background information is obtained through user input, so the target chat object's privacy is not violated.
The second preset time period may be the day, the week, or the year containing the current time. Topics of interest can be created, and the initial script library updated, according to the dynamics the target chat object published on the target social platform. For example, if the keyword of the dynamics published by the target chat object this week is "food", some food-related topics can be created.
In the script matching process, corresponding food topics can then be matched according to the interval duration; for example, the user can share scripts about food recently eaten or hometown specialities.
The third preset time period may be the day, the week, or the two weeks containing the current time. A hot topic for discussing trending news can be created from online hot news to update the initial script library; however, because news goes stale quickly, such topics remain effective only briefly and should not be used for more than 2 weeks.
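The sketch below illustrates this update flow; the data sources, record fields, and script templates are assumptions, not the application's wording:

    def build_script_library(initial_scripts, background_topics,
                             recent_dynamics, hot_news_titles):
        # Start from the initial library of soothing and greeting scripts,
        # then add object-specific and topical scripts derived from the
        # related information and the hot news.
        library = list(initial_scripts)
        for topic in background_topics + recent_dynamics:
            library.append(f"I noticed you have been into {topic} lately, how is it going?")
        for title in hot_news_titles:  # effective for at most ~2 weeks
            library.append(f"Did you see the news about {title}? What do you think?")
        return library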
It should be noted that, as shown in fig. 2, when the method in this embodiment is applied to a chat the user and the target chat object are currently conducting (that is, the first preset time period is within half a day, within 1 hour, or the like), the corresponding recommended script is matched mainly according to the keywords in the currently received voice information or text information, and its main purpose is to soothe the target chat object's emotions.
When the method in this embodiment is applied to a scene in which the user has not chatted or communicated with the target chat object for a long time (exceeding a preset duration) and wants to open a topic, the corresponding recommended script is matched mainly according to the interval duration from the sending time of the last record in the historical chat records to the current time (that is, the duration of not chatting), and its main purpose is to open a topic, which helps the user start a suitable conversation on their own initiative, avoid awkwardness, and re-establish contact.
As shown in fig. 3, after step S140, the method further includes the following steps:
step S150: and when a sending instruction corresponding to the recommended dialogues is received, sending the recommended dialogues to the target chat object.
After the user sees the pushed recommended dialogs, the user can select a click or confirmation control to send the recommended dialogs to the target chat object.
Step S160: and when a modification instruction corresponding to the recommended dialect is received, modifying the recommended dialect according to the modification instruction to obtain the modified recommended dialect.
It can be understood that if the user is not satisfied with the automatically pushed recommendation, the modification can be performed, and the modification is performed by clicking a modification control on the intelligent terminal, so that the recommendation after the modification is obtained.
Correspondingly, the following steps may further be included after step S160:
S162: acquiring the emotion information of the recommended script before modification and after modification, and comparing the two;
S164: when the emotion information of the recommended script before modification is inconsistent with that after modification, refusing to execute a subsequently received sending instruction for the modified recommended script.
For the purpose of soothing emotions and improving the relationship between the two parties, the emotion conveyed by the recommended script must be the same before and after modification; optimistic, positive emotions such as pleasure, gratitude, and love are acceptable. If the emotions are inconsistent, the operation of sending the modified recommended script to the target chat object is blocked.
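A sketch of this guard follows; the word-list sentiment scorer is a stand-in assumption for the emotion model described above:

    import re

    def emotion_label(text):
        # Stand-in sentiment scorer; a real system would use the trained emotion model.
        positive = {"happy", "thanks", "love", "great"}
        negative = {"angry", "hate", "sad", "annoyed"}
        words = set(re.findall(r"[a-z']+", text.lower()))
        if words & negative:
            return "negative"
        return "positive" if words & positive else "neutral"

    def may_send(original_script, modified_script):
        # S162/S164: refuse the sending instruction if the emotion changed.
        return emotion_label(original_script) == emotion_label(modified_script)

    assert may_send("Great to hear from you!", "So happy you wrote!")
    assert not may_send("Great to hear from you!", "Why no reply? I am annoyed.")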
In addition, the following step is included after S162:
S166: when the emotion information of the recommended script before modification is consistent with that after modification, sending the modified recommended script to the target chat object upon receiving the corresponding sending instruction, and adding the modified recommended script to the script library to optimize it.
In this way, the method can keep improving matching accuracy for specific scenes and continuously enrich the script library.
It should be noted that the method provided in this embodiment can be applied to young children, adolescents, the elderly, and peer user groups. The age and gender of the current user are recognized intelligently from voice, scripts suited to the current user are matched with the above method, prompts against inappropriate wording are given, and a topic scheme (script) matching the current user is provided.
This embodiment provides a script matching method that obtains the historical chat records exchanged with a target chat object within a first preset time period and screens out the historical information sent by the target chat object; identifies the historical information and extracts its keywords; determines the interval duration from the sending time of the last record in the historical chat records to the current time; and selects, from a pre-established script library, a recommended script matching the keywords and the interval duration, and pushes it. The method can accurately match suitable scripts according to the keywords mentioned by the chat object and the elapsed chat time, soothe the chat object's emotions, open topics the chat object is interested in, avoid sending topics the chat object finds sensitive or uninteresting, remove the communication barrier between the two parties, head off an emotional crisis in advance, and help the two parties build a harmonious relationship.
Example two
Referring to fig. 4, the present embodiment provides a script matching apparatus, including: an acquisition module 110, an extraction module 120, a confirmation module 130, and a matching module 140.
The acquisition module 110 is configured to acquire the historical chat records exchanged with a target chat object within a first preset time period, and screen out the historical information sent by the target chat object from them;
the extraction module 120 is configured to identify the historical information and extract the keywords in it;
the confirmation module 130 is configured to determine the interval duration from the sending time of the last record in the historical chat records to the current time;
and the matching module 140 is configured to select, from a pre-established script library, a recommended script matching the keywords and the interval duration, and push the recommended script.
In operation, the acquisition module 110 acquires the historical chat records and screens out the historical information sent by the target chat object; the extraction module 120 identifies the historical information and extracts its keywords; the confirmation module 130 determines the interval duration from the sending time of the last record to the current time; and the matching module 140 selects a matching recommended script from the pre-established script library and pushes it.
For the specific implementation of the above steps, refer to Example one; the details are not repeated here.
EXAMPLE III
This embodiment provides an electronic device, which may be a mobile phone, a computer, a tablet computer, or the like, including a memory and a processor, where the memory stores a computer program that, when executed by the processor, implements the script matching method of Example one. It can be appreciated that the electronic device can also include input/output (I/O) interfaces and communication components.
The processor is configured to execute all or part of the steps of the script matching method in Example one. The memory is configured to store various types of data, which may include, for example, instructions for any application or method in the electronic device, as well as application-related data.
The processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic component, and is configured to perform the script matching method of Example one.
The memory may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
Example four
This embodiment provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, or an app store, storing a computer program that, when executed by a processor, implements the following method steps:
Step S110: acquiring the historical chat records exchanged with a target chat object within a first preset time period, and screening out the historical information sent by the target chat object;
Step S120: identifying the historical information, and extracting the keywords in the historical information;
Step S130: determining the interval duration from the sending time of the last record in the historical chat records to the current time;
Step S140: selecting, from a pre-established script library, a recommended script matching the keywords and the interval duration, and pushing the recommended script.
For the specific implementation of the above method steps, refer to Example one; the details are not repeated here.
In summary, the script matching method and apparatus, electronic device, and storage medium provided by the application obtain the historical chat records exchanged with a target chat object within a first preset time period and screen out the historical information sent by the target chat object; identify the historical information and extract its keywords; determine the interval duration from the sending time of the last record in the historical chat records to the current time; and select, from a pre-established script library, a recommended script matching the keywords and the interval duration, and push it. The method can accurately match suitable scripts according to the keywords mentioned by the chat object and the elapsed chat time, soothe the chat object's emotions, open topics the chat object is interested in, avoid sending topics the chat object finds sensitive or uninteresting, remove the communication barrier between the two parties, head off an emotional crisis in advance, and help the two parties build a harmonious relationship.
In the embodiments provided in the present application, it should be understood that the disclosed method can be implemented in other ways. The above-described method embodiments are merely illustrative.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Although the embodiments disclosed in the present application are described above, the descriptions are only for the convenience of understanding the present application, and are not intended to limit the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.

Claims (13)

1. A script matching method, comprising:
acquiring the historical chat records exchanged with a target chat object within a first preset time period, and screening out the historical information sent by the target chat object;
identifying the historical information, and extracting keywords from the historical information;
determining the interval duration from the sending time of the last record in the historical chat records to the current time;
and selecting, from a pre-established script library, a recommended script matching the keywords and the interval duration, and pushing the recommended script.
2. The method of claim 1, wherein the historical information comprises at least one of text information, voice information, and link information.
3. The method of claim 2, wherein the historical information at least includes voice information;
and identifying the historical information and extracting its keywords comprises the following steps:
processing the voice information in the historical information to convert it into corresponding text information;
and identifying the text information converted from the voice information to extract the keywords in the voice information.
4. The method of claim 3, wherein before the step of processing the voice information in the historical information to convert it into corresponding text information, the method further comprises:
recognizing the voice information to extract the voice features in the voice information;
and determining a first emotional state of the target chat object according to the voice features in the voice information.
5. The method of claim 4, wherein the voice features include at least one of tone, speech speed, and intonation.
6. The method of claim 4, wherein selecting a recommended script matching the keywords and the interval duration from a pre-established script library comprises the following steps:
determining a second emotional state and interest characteristics of the target chat object according to the keywords;
and selecting, from the pre-established script library, a recommended script matching the first emotional state, the second emotional state, the interest characteristics, and the interval duration of the target chat object.
7. The method of claim 3, wherein processing the voice information in the historical information to convert it into corresponding text information comprises:
performing noise reduction on the voice information;
performing endpoint detection on the noise-reduced voice information to obtain the effective voice segments in it;
framing the effective voice segments to obtain a plurality of audio frames corresponding to the voice information;
sequentially inputting each audio frame into a pre-trained recognition model to obtain the text corresponding to each audio frame;
wherein the texts corresponding to the plurality of audio frames form the text information corresponding to the voice information.
8. The method of claim 1, wherein after pushing the recommended script, the method further comprises:
when a sending instruction corresponding to the recommended script is received, sending the recommended script to the target chat object;
and when a modification instruction corresponding to the recommended script is received, modifying the recommended script according to the modification instruction to obtain a modified recommended script.
9. The method of claim 8, wherein after the step of modifying the recommended script according to the modification instruction to obtain the modified recommended script, the method further comprises:
acquiring the emotion information of the recommended script before modification and after modification, and comparing the two;
and when the emotion information of the recommended script before modification is inconsistent with that after modification, refusing to execute a subsequently received sending instruction for the modified recommended script.
10. The method of claim 1, wherein the script library is created by:
creating an initial script library;
acquiring related information of the target chat object, wherein the related information comprises background information of the target chat object and the dynamics the target chat object published on a target social platform within a second preset time period;
acquiring hot news within a third preset time period;
and updating the initial script library according to the related information and the hot news to obtain an optimized script library.
11. A script matching apparatus, comprising:
an acquisition module, configured to acquire the historical chat records exchanged with a target chat object within a first preset time period and screen out the historical information sent by the target chat object;
an extraction module, configured to identify the historical information and extract the keywords in the historical information;
a confirmation module, configured to determine the interval duration from the sending time of the last record in the historical chat records to the current time;
and a matching module, configured to select, from a pre-established script library, a recommended script matching the keywords and the interval duration, and push the recommended script.
12. An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the script matching method of any one of claims 1 to 10.
13. A storage medium storing a computer program which, when executed by one or more processors, implements the script matching method of any one of claims 1 to 10.
CN202111106211.XA 2021-09-22 2021-09-22 Method and device for matching dialect, electronic equipment and storage medium Pending CN114003699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111106211.XA CN114003699A (en) 2021-09-22 2021-09-22 Method and device for matching dialect, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111106211.XA CN114003699A (en) 2021-09-22 2021-09-22 Method and device for matching dialect, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114003699A 2022-02-01

Family

ID=79921705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111106211.XA Pending CN114003699A (en) 2021-09-22 2021-09-22 Method and device for matching dialect, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114003699A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578691A (en) * 2023-07-13 2023-08-11 江西合一云数据科技股份有限公司 Intelligent pension robot dialogue method and dialogue system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination