CN111310460A - Statement adjusting method and device - Google Patents
- Publication number: CN111310460A (application number CN201811515760.0A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- neural network
- sentence
- input
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Machine Translation (AREA)
Abstract
The invention is applicable to the technical field of artificial intelligence, and provides a sentence adjusting method and a sentence adjusting apparatus. The method includes the following steps: obtaining a chat record of a target contact within a first time period, and analyzing the emotion category corresponding to the chat record; if the emotion category corresponding to the chat record belongs to a preset emotion category set, receiving a sentence input by the user, and analyzing the emotion category corresponding to that sentence; if the emotion category corresponding to the sentence input by the user belongs to the preset emotion category set, converting the sentence into an adjusted sentence; and displaying the adjusted sentence or sending it to the target contact. In this way, sentences entered during a chat are adjusted more intelligently, improving the target contact's satisfaction with the conversation.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a sentence adjusting method and device.
Background
With the rapid development of the Internet, face-to-face communication is becoming less common, and people increasingly choose online chat tools such as WeChat to talk with others. However, when communicating in text through a chat tool, a user's poor choice of words, or failure to sense the other party's mood in time, can easily cause offense and bring a conversation to an abrupt end.
Existing chat software, however, has a low degree of intelligence: it transmits whatever the user types directly to the other party, lacks assistance and reminder functions, and cannot prevent misunderstandings between the two parties during a text chat.
Disclosure of Invention
In view of this, embodiments of the present invention provide a sentence adjusting method and apparatus to solve the prior-art problems of low intelligence and poor flexibility in transmitting sentences input by a user.
A first aspect of an embodiment of the present invention provides a statement adjusting method, including: the method comprises the steps of obtaining a chat record of a target contact in a first time period, and analyzing an emotion type corresponding to the chat record; if the emotion type corresponding to the chat record belongs to a preset emotion type set, receiving a statement input by a user, and analyzing the emotion type corresponding to the statement input by the user; if the emotion type corresponding to the sentence input by the user belongs to a preset emotion type set, converting the sentence input by the user into an adjusting sentence; and displaying the adjusting statement or sending the adjusting statement to the target contact.
A second aspect of the embodiments of the present invention provides a sentence adjusting apparatus, including: the obtaining module is used for obtaining the chat records of the target contact in a first time period and analyzing the emotion types corresponding to the chat records; the first analysis module is used for receiving the sentences input by the user and analyzing the emotion types corresponding to the sentences input by the user if the emotion types corresponding to the chat records belong to a preset emotion type set; the second analysis module is used for converting the sentence input by the user into an adjusting sentence if the emotion category corresponding to the sentence input by the user belongs to a preset emotion category set; and the execution module is used for displaying the adjusting statement or sending the adjusting statement to the target contact person.
A third aspect of the embodiments of the present invention provides a sentence adjusting apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the steps of the method provided by the first aspect of the embodiments of the present invention are implemented when the processor executes the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method provided by the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: a chat record of a target contact within a first time period is obtained, and the emotion category corresponding to the chat record is analyzed; if that emotion category belongs to a preset emotion category set, a sentence input by the user is received, and the emotion category corresponding to it is analyzed; if the emotion category corresponding to the user's sentence belongs to the preset emotion category set, the sentence is converted into an adjusted sentence; and the adjusted sentence is displayed or sent to the target contact. Sentences entered during a chat are thus adjusted more intelligently, improving the target contact's satisfaction with the conversation.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of an implementation of a statement adjustment method according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific implementation of the statement adjustment method S101 according to the embodiment of the present invention;
fig. 3 is a flowchart illustrating a specific implementation of the statement adjustment method S106 according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a specific implementation of the statement adjustment method S103 according to an embodiment of the present invention;
FIG. 5 is a block diagram of a sentence adjusting apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a sentence adjustment apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 shows an implementation flow of a statement adjustment method provided in an embodiment of the present invention, which is detailed as follows:
in S101, a chat record of a user and a target contact in a first time period is obtained, and an emotion type corresponding to the chat record is analyzed.
In the embodiment of the invention, before or during a chat with the target contact, the chat record of the target contact within a first time period is collected and stored in a language database, the previously stored data is deleted to keep the database up to date, and the data in the database is analyzed in the subsequent steps. It will be appreciated that a chat record typically contains a plurality of sentences; for example, if the target contact has input too many sentences within the first time period, a preset number of the most recently input sentences is extracted as the data for further analysis.
Optionally, the sentences in the chat record are converted into a first matrix, and the first matrix is input into a preset first neural network to obtain the emotion category corresponding to the chat record. Specifically, word segmentation is first performed on each sentence in the chat record, a word vector is then computed for each word, and the word vectors are concatenated to generate the first matrix.
Optionally, embodiments of the invention compute a word vector for each word using the Word2Vec tool.
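As a concrete illustration of this preprocessing step, the sketch below segments a sentence, looks up a vector for each word, and stacks the vectors into a matrix. The tiny embedding table merely stands in for a trained Word2Vec model; all names and values are illustrative assumptions, not part of the patent.

```python
import numpy as np

# Toy embedding table standing in for a trained Word2Vec model (illustrative only).
EMBEDDINGS = {
    "hello": np.array([0.1, 0.2, 0.3]),
    "there": np.array([0.4, 0.5, 0.6]),
    "friend": np.array([0.7, 0.8, 0.9]),
}
DIM = 3  # embedding dimension of the toy table

def sentence_to_matrix(sentence: str) -> np.ndarray:
    """Segment a sentence into words and stack their word vectors row-wise."""
    tokens = sentence.lower().split()  # stand-in for a real word-segmentation step
    rows = [EMBEDDINGS.get(t, np.zeros(DIM)) for t in tokens]
    return np.vstack(rows)

m = sentence_to_matrix("hello there friend")
print(m.shape)  # one row per word, one column per embedding dimension
```

A real implementation would replace `split()` with a proper Chinese word segmenter and the dictionary with a trained Word2Vec model.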
In the embodiment of the invention, the emotion category corresponding to the chat record is determined through the trained first neural network. It can be understood that the preset first neural network is trained on a plurality of training samples, each composed of a chat record and its emotion category, so that after a chat record to be recognized is input, the corresponding emotion category is output.
Illustratively, in an embodiment of the present invention, the emotion categories include pleasure, satisfaction, contentment, being moved, anger, sadness, panic, suspicion, and the like, and the different emotion categories may be divided into positive and negative emotion categories. For example, pleasure, satisfaction, contentment, and being moved belong to the positive emotion categories, while anger, sadness, panic, and suspicion belong to the negative emotion categories.
In S102, it is determined whether the emotion category corresponding to the chat record belongs to a preset emotion category set.
Since the embodiments of the present invention need to determine whether an emotion category is positive or negative, all positive emotion categories that may be identified in the previous step are combined into the preset emotion category set. After the emotion category corresponding to the chat record is analyzed, it is compared with the elements of this set: if it belongs to the set, the emotion category corresponding to the chat record is judged to be positive; otherwise it is judged to be negative.
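The membership test described above reduces to a set lookup; a minimal sketch, with illustrative category labels:

```python
# Positive emotion categories form the preset set; membership decides the branch.
POSITIVE_EMOTIONS = {"pleasure", "satisfaction"}  # illustrative labels only

def is_positive(emotion: str) -> bool:
    """True if the analyzed emotion category belongs to the preset set."""
    return emotion in POSITIVE_EMOTIONS

print(is_positive("pleasure"))  # positive: proceed to receive the user's input
print(is_positive("anger"))     # negative: generate a reply policy instead
```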
In S103, if the emotion category corresponding to the chat record does not belong to the preset emotion category set, a reply policy is generated according to the sentence input by the target contact in the second time period, and the reply policy is displayed.
As described above, if the emotion category corresponding to the chat record does not belong to the preset emotion category set, the emotion of the target contact reflected by the chat record is negative. In this case, a reply policy is generated from the sentences input by the target contact in the second time period and displayed, reminding the user what kind of wording is currently suitable or unsuitable. For example, when the target contact is determined to be in an angry state, a first reply policy may be displayed, which includes: reminding the user to shorten the sentences to be sent, to explain and apologize, and not to start another topic.
Optionally, respectively converting a plurality of statements input by the target contact in the second time period into a matrix, and generating a matrix set; and inputting the matrix in the matrix set into a preset third neural network to obtain a reply strategy corresponding to the statement input by the target contact in a second preset time period.
In the embodiment of the invention, the reply policy corresponding to the sentences input by the target contact in the second time period is determined through the trained third neural network. It is understood that the third neural network is trained on data composed of a plurality of sentence sets and their reply strategies, so that after the sentences input by the target contact in the second time period are fed in, the corresponding reply strategy is output.
In S104, if the emotion type corresponding to the chat record belongs to a preset emotion type set, receiving a sentence input by a user, and analyzing the emotion type corresponding to the sentence input by the user.
As described above, if the emotion category corresponding to the chat record belongs to the preset emotion category set, the emotion of the target contact reflected by the chat record is positive. In this case no reply policy is needed: the user can directly type what he or she wants to say into the local device. The local device does not send the received sentence to the target contact immediately; instead, it analyzes the sentence to obtain the corresponding emotion category.
Optionally, the sentence input by the user is converted into a second matrix, and the second matrix is input into the first neural network to obtain the emotion category corresponding to the sentence. It is understood that the neural network used here to identify the emotion category of the user's sentence may be the same as the one used above for the chat record, i.e. both are the first neural network, because the chat record and the received user sentence are both composed of sentences, and both scenarios involve identifying an emotion category.
S105, judging whether the emotion type corresponding to the sentence input by the user belongs to a preset emotion type set.
And S106, if the emotion type corresponding to the sentence input by the user does not belong to the preset emotion type set, stopping processing the sentence input by the user, and reminding the user to input the sentence again.
S107, if the emotion type corresponding to the sentence input by the user belongs to a preset emotion type set, converting the sentence input by the user into an adjusting sentence; and displaying the adjusting statement or sending the adjusting statement to the target contact.
Optionally, the second matrix is input into a preset second neural network, and an adjusting statement corresponding to the statement input by the user is obtained.
It will be appreciated from the above description that the second matrix may be used to characterize a sentence entered by a user.
In the embodiment of the present invention, the second neural network is trained on data composed of a plurality of original sentences and their adjusted sentences, so that after a sentence input by the user is fed in, the corresponding adjusted sentence is output. The adjusted sentence is worded more appropriately than the sentence input by the user.
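The branching of S101 to S107 can be summarized as plain control flow, with the neural networks replaced by placeholder callables; everything below is an illustrative sketch rather than the patented implementation:

```python
# Control flow of S101-S107, with the neural networks replaced by callables.
def handle_turn(chat_emotion, user_sentence, classify, adjust, positive_set):
    """Return the action taken for one chat turn.

    chat_emotion: emotion category analyzed from the contact's chat record (S101)
    classify:     stand-in for the first neural network (sentence -> emotion)
    adjust:       stand-in for the second neural network (sentence -> adjusted sentence)
    """
    if chat_emotion not in positive_set:            # S102/S103: contact is negative
        return ("reply_policy", None)
    emotion = classify(user_sentence)               # S104: analyze the user's sentence
    if emotion not in positive_set:                 # S105/S106: remind user to re-input
        return ("ask_reinput", None)
    return ("send", adjust(user_sentence))          # S107: convert and send/display

action = handle_turn(
    "pleasure", "that is a terrible idea",
    classify=lambda s: "anger" if "terrible" in s else "pleasure",
    adjust=lambda s: s.replace("terrible", "bold"),
    positive_set={"pleasure", "satisfaction"},
)
print(action)  # the angry wording triggers the re-input branch
```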
It can be understood that in the embodiment of the invention, a chat record of the target contact within a first time period is obtained and the emotion category corresponding to it is analyzed; if that emotion category belongs to the preset emotion category set, a sentence input by the user is received and its emotion category is analyzed; if the emotion category corresponding to the user's sentence belongs to the preset emotion category set, the sentence is converted into an adjusted sentence; and the adjusted sentence is displayed or sent to the target contact. In this way, sentences entered during a chat are adjusted more intelligently, improving the target contact's satisfaction with the conversation.
As an embodiment of the present invention, in S101 of the above embodiment, the first matrix is input into a preset first neural network to obtain the emotion category corresponding to the chat record. As shown in fig. 2, S101 includes:
in S1011, performing convolution operation on the first matrix through the convolution layer of the first sub-neural network to generate a first feature matrix, and performing pooling operation on the first feature matrix through the pooling layer of the first sub-neural network to generate a second feature matrix; converting, by an attention layer of the first sub-neural network, the second feature matrix into a third feature matrix based on a preset attention mechanism.
Notably, in an embodiment of the present invention, the first neural network includes a first sub-neural network and a second sub-neural network, wherein the first sub-neural network is configured to select a sentence from the chat records that best characterizes the chat records, and the second sub-neural network is configured to determine an emotion category that best characterizes the sentence of the chat records, wherein the first sub-neural network may be an emotion convolutional neural network, and the second sub-neural network may be a text convolutional neural network.
Optionally, in an embodiment of the present invention, the convolution kernels of the convolution layer of the first sub-neural network have a size of H × H, where H takes the values 3, 4, and 5, and to obtain more features the number of channels of the convolution kernel is set to 3.
Optionally, in this embodiment of the present invention, the pooling layer of the first sub-neural network performs a maximum pooling operation on the first feature matrix, and generates the second feature matrix.
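For illustration, the valid convolution and non-overlapping max-pooling operations described in S1011 can be sketched in NumPy as follows (single channel, H = 3; the input matrix is arbitrary toy data, not a real sentence matrix):

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid 2-D convolution (no padding, stride 1) of matrix x with kernel k."""
    h, w = k.shape
    out = np.empty((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    rows, cols = x.shape[0] // size, x.shape[1] // size
    return x[:rows * size, :cols * size].reshape(rows, size, cols, size).max(axis=(1, 3))

first = np.arange(36.0).reshape(6, 6)   # stand-in for the first matrix
kernel = np.ones((3, 3))                # H = 3, one of the sizes in {3, 4, 5}
feat = conv2d_valid(first, kernel)      # first feature matrix (4 x 4)
pooled = max_pool(feat)                 # second feature matrix (2 x 2)
print(pooled.shape)
```

A framework implementation would of course use learned kernel weights and multiple channels; the loops here only make the two operations explicit.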
In S1012, the third feature matrix is calculated through the connection layer in the first sub-neural network, and probability values of the statements in the chat record are generated.
Specifically, the third feature matrix output by the attention layer is processed by a sigmoid function, and the probability value of each sentence in the chat record is generated through a weighted-summation algorithm.
In S1013, the sentence with the highest probability value in the chat record is converted into a representative matrix, and the representative matrix is calculated by using the convolutional layer and the pooling layer in the second sub-neural network, so as to generate a fourth feature matrix.
In S1014, determining a category of the fourth feature matrix according to a classifier preset in the second sub-neural network, as an emotion category corresponding to the chat record.
Alternatively, the category of the fourth feature matrix may be calculated by a softmax classifier.
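A softmax classifier of the kind mentioned here simply normalizes the final-layer scores into probabilities and selects the largest; a minimal sketch with illustrative scores and labels:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of class scores."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Scores for each emotion category produced by the final layer (illustrative values).
scores = np.array([2.0, 1.0, 0.1])
labels = ["pleasure", "anger", "panic"]
probs = softmax(scores)
print(labels[int(np.argmax(probs))])  # the highest-probability category is chosen
```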
In the embodiment of the invention, two neural networks are stacked: after the sentence that best represents the chat record is determined, that sentence is analyzed to determine the emotion category corresponding to the chat record. Since techniques for training neural networks are common knowledge in the art, they are not described here; the embodiments of the present invention mainly describe the internal layers of the first neural network and the function of each layer.
As an embodiment of the present invention, in S106 of the above embodiment, the second matrix is input into a preset second neural network to obtain the adjusted sentence corresponding to the sentence input by the user. As shown in fig. 3, S106 includes:
In S1061, the second matrix is encoded by an encoder layer of the second neural network, and an encoding matrix is generated.
In the embodiment of the invention, the second neural network is a preset sequence-to-sequence model. During training, the matrix corresponding to a sentence input by the user serves as the input of the second neural network and the corresponding adjusted sentence as its output, until the loss function converges; the trained second neural network is then saved and called directly in the subsequent process.
Specifically, assuming that a matrix corresponding to a sentence input by a user is a second matrix, an encoding layer of the second neural network encodes the second matrix through the recurrent neural network to generate an encoding matrix.
In S1062, the encoding matrix is converted into a first attention matrix by a local attention mechanism of an attention layer of the second neural network.
In the embodiment of the invention, the attention layer of the second neural network uses a local attention mechanism: it first calculates a score for each hidden layer of the recurrent neural network in the encoder with a softmax classifier, then computes the attention center point from the encoding matrix, selects a window of preset radius around that point and takes the hidden-layer scores inside the window, and finally multiplies each score by the corresponding hidden-layer state and sums the products to generate the first attention matrix.
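The local attention computation described above, selecting a window of preset radius around a center point, scoring the hidden states inside it, and forming their weighted sum, can be sketched as follows; the hidden states and scores are toy values, and the exact scoring function in the patent may differ:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def local_attention(hidden_states, scores, center, radius):
    """Weighted sum of encoder hidden states restricted to a window
    of the given radius around the attention center point."""
    lo, hi = max(0, center - radius), min(len(scores), center + radius + 1)
    w = softmax(scores[lo:hi])                       # renormalize inside the window
    return (w[:, None] * hidden_states[lo:hi]).sum(axis=0)

H = np.eye(5)                                        # five toy hidden states
s = np.array([0.1, 2.0, 0.3, 0.2, 0.05])             # toy hidden-layer scores
ctx = local_attention(H, s, center=1, radius=1)      # window covers states 0..2
print(ctx.round(3))
```

States outside the window (indices 3 and 4 here) contribute nothing to the result, which is the point of the local mechanism.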
In S1063, the first attention matrix and the encoding matrix are input into a decoder layer of the second neural network, and the adjusted sentence corresponding to the sentence input by the user is output.
In the embodiment of the present invention, the decoder layer of the second neural network is implemented as a multi-layer recurrent neural network. To obtain more information about the sequence, a feedback mechanism is added to the decoder layer: the state at the previous time step is fed into the current time step, which increases the complexity of the model and improves its generalization.
In the embodiment of the invention, the sentence input by the user is processed by the pre-trained second neural network, so that the sentence input by the user can be quickly and accurately converted into a more appropriate adjusting sentence.
As an embodiment of the present invention, in the above embodiment S103, a plurality of statements input by the target contact in the second time period are respectively converted into matrices, so as to generate a matrix set; inputting the matrix in the matrix set into a preset third neural network to obtain a reply strategy corresponding to a statement input by the target contact in a second preset time period, where as shown in fig. 4, the step S103 includes:
in S1031, each matrix in the set of matrices is converted into a feature vector by an encoder layer of the third neural network.
In the embodiment of the invention, the third neural network adopts a sequence-to-sequence model. Its encoder layer uses a sentence-level model to convert each matrix in the matrix set into a feature vector, and at the same time generates, based on an inter-sentence model, a relation vector representing the relations among the matrices in the matrix set.
In S1032, a score of each of the feature vectors is calculated by the attention layer of the third neural network, and a second attention matrix is generated based on the score of each of the feature vectors and a state of each of the hidden layers of the third neural network.
In the embodiment of the invention, the attention layer of the third neural network calculates scores of each feature vector and each relation vector based on the softmax classifier, multiplies the scores of each feature vector and each relation vector by the hidden layer state of the corresponding coding layer, and then performs splicing to generate the second attention matrix.
In S1033, the connection layer of the third neural network converts the feature vector into a full connection matrix based on the second attention matrix.
In S1034, the fully-connected matrix is input into the decoder layer of the third neural network to obtain the semantic probabilities of the sentences input by the target contact, and the reply policy corresponding to the semantic with the highest probability is output based on a preset correspondence between semantics and reply policies.
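The final lookup in S1034, choosing the reply policy mapped to the semantic with the highest probability, amounts to an argmax over the decoder's semantic probabilities followed by a table lookup. A minimal sketch; all semantic names and policy texts are illustrative:

```python
# Preset correspondence between semantics and reply policies (illustrative labels).
REPLY_POLICIES = {
    "complaint": "keep replies short; explain and apologize; avoid new topics",
    "question":  "answer directly before adding anything else",
    "greeting":  "respond warmly and mirror the greeting",
}

def pick_policy(semantic_probs: dict) -> str:
    """Return the reply policy for the highest-probability semantic."""
    best = max(semantic_probs, key=semantic_probs.get)
    return REPLY_POLICIES[best]

probs = {"complaint": 0.7, "question": 0.2, "greeting": 0.1}  # toy decoder output
print(pick_policy(probs))
```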
In the embodiment of the invention, the decoder is structured as a multilayer recurrent neural network. To obtain more information about the sequence, a feedback mechanism is added to the decoder: the state at the previous time step is fed into the current time step, which increases the complexity of the model and improves its generalization.
In the embodiment of the invention, the sentence input by the target contact in the second time period is processed through the pre-trained third neural network, and the reply strategy can be quickly output.
Corresponding to the sentence adjusting method described in the foregoing embodiments, fig. 5 shows a structural block diagram of the sentence adjusting apparatus provided in the embodiment of the present invention; for convenience of description, only the parts related to the embodiment of the present invention are shown.
Referring to fig. 5, the apparatus includes:
the obtaining module 501 is configured to obtain a chat record of a user and a target contact in a first time period, and analyze an emotion category corresponding to the chat record;
a first analysis module 502, configured to receive a sentence input by a user and analyze an emotion category corresponding to the sentence input by the user if the emotion category corresponding to the chat record belongs to a preset emotion category set;
a second analysis module 503, configured to convert the sentence input by the user into an adjustment sentence if the emotion category corresponding to the sentence input by the user belongs to a preset emotion category set;
and the execution module 504 is configured to display the adjustment statement, or send the adjustment statement to the target contact.
Optionally, the apparatus further comprises:
and the strategy generation module is used for generating a reply strategy according to the statement input by the target contact in the second time period and displaying the reply strategy if the emotion type corresponding to the chat record does not belong to the preset emotion type set.
Optionally, the analyzing the emotion category corresponding to the chat record includes: converting the statements in the chat records into a first matrix, and inputting the first matrix into a preset first neural network to obtain the emotion types corresponding to the chat records; the analyzing the emotion category corresponding to the sentence input by the user comprises: converting the statement input by the user into a second matrix, and inputting the second matrix into the first neural network to obtain an emotion category corresponding to the statement input by the user; the converting the sentence input by the user into an adjusting sentence comprises: and inputting the second matrix into a preset second neural network to obtain an adjusting statement corresponding to the statement input by the user.
Optionally, the generating a reply policy according to the statement input by the target contact in the second time period includes: respectively converting a plurality of sentences input by the target contact person in the second time period into matrixes, and generating a matrix set; and inputting the matrix in the matrix set into a preset third neural network to obtain a reply strategy corresponding to the statement input by the target contact in a second preset time period.
Optionally, the first neural network comprises a first sub-neural network and a second sub-neural network.
The acquisition module comprises:
the first calculation submodule is used for performing convolution operation on the first matrix through the convolution layer of the first sub-neural network to generate a first characteristic matrix, and performing pooling operation on the first characteristic matrix through the pooling layer of the first sub-neural network to generate a second characteristic matrix; converting, by an attention layer of the first sub-neural network, the second feature matrix into a third feature matrix based on a preset attention mechanism.
And the second calculating submodule is used for calculating the third feature matrix through a connecting layer in the first sub-neural network and generating the probability value of each statement in the chat record.
The third computation submodule is used for converting the statement with the highest probability value in the chat records into a representative matrix, and calculating the representative matrix through a convolution layer and a pooling layer in the second sub-neural network to generate a fourth feature matrix;
and the fourth calculating submodule is used for determining the category of the fourth feature matrix according to a preset classifier in the second sub-neural network, and the category is used as the emotion category corresponding to the chat record.
Optionally, the second analysis module comprises:
the first coding submodule is used for coding the second matrix through a coder layer of the second neural network to generate a coding matrix;
a first attention submodule for converting the coding matrix into a first attention matrix by a local attention mechanism of an attention layer of the second neural network;
and the first decoding submodule is used for inputting the first attention matrix and the coding matrix into a decoder layer of the second neural network and outputting an adjusting statement corresponding to the statement input by the user.
Optionally, the policy generation module includes:
a second encoding submodule, configured to convert each matrix in the matrix set into a feature vector through an encoder layer of the third neural network;
a second attention submodule, configured to calculate a score of each feature vector through an attention layer of the third neural network, and generate a second attention matrix based on the score of each feature vector and a state of each hidden layer of the third neural network;
a connection sub-module for a connection layer of the third neural network to convert the eigenvector into a fully-connected matrix based on the second attention matrix;
and the second decoding submodule is used for inputting the full-connection matrix into a decoder layer of the third neural network to obtain the semantic probability of the sentence input by the target contact person, and outputting the reply strategy corresponding to the semantic with the highest semantic probability based on the corresponding relation between the preset semantic and the reply strategy.
In the embodiment of the invention, the chat records of the target contact in the first time period are obtained, and the emotion types corresponding to the chat records are analyzed; if the emotion type corresponding to the chat record belongs to the preset emotion type set, receiving a sentence input by a user, and analyzing the emotion type corresponding to the sentence input by the user; if the emotion type corresponding to the sentence input by the user belongs to the preset emotion type set, converting the sentence input by the user into an adjusting sentence; and displaying an adjusting statement or sending the adjusting statement to the target contact person so as to adjust the statement input during the chat more intelligently and improve the satisfaction degree of the target contact person on the chat.
Fig. 6 is a schematic diagram of a sentence adjustment apparatus according to an embodiment of the present invention. As shown in fig. 6, the sentence adjustment apparatus of this embodiment includes: a processor 60, a memory 61, and a computer program 62, such as a sentence adjustment program, stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps in the above embodiments of the sentence adjustment method, such as steps S101 to S107 shown in fig. 1. Alternatively, when executing the computer program 62, the processor 60 implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 501 to 504 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 62 in the statement adjustment device 6.
The sentence adjustment apparatus 6 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The apparatus may include, but is not limited to, the processor 60 and the memory 61. It will be understood by those skilled in the art that fig. 6 is only an example of the sentence adjustment apparatus 6 and does not constitute a limitation of it; the apparatus may comprise more or fewer components than shown, combine certain components, or use different components. For example, the apparatus may further comprise an input/output device, a network access device, a bus, etc.
The processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the sentence adjustment apparatus 6, such as a hard disk or internal memory of the apparatus. The memory 61 may also be an external storage device of the sentence adjustment apparatus 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the apparatus. Further, the memory 61 may comprise both an internal storage unit and an external storage device of the sentence adjustment apparatus 6. The memory 61 is used for storing the computer program as well as other programs and data required by the apparatus, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application.
The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. A sentence adjustment method, comprising:
the method comprises the steps of obtaining chat records of a user and a target contact in a first time period, and analyzing emotion types corresponding to the chat records;
if the emotion type corresponding to the chat record belongs to a preset emotion type set, receiving the statement input by the user, and analyzing the emotion type corresponding to the statement input by the user;
and if the emotion type corresponding to the sentence input by the user belongs to a preset emotion type set, converting the sentence input by the user into an adjusting sentence.
2. The sentence adjustment method according to claim 1, further comprising:
and if the emotion type corresponding to the chat record does not belong to the preset emotion type set, generating a reply strategy according to the sentence input by the target contact in the second time period, and displaying the reply strategy.
3. The sentence adjustment method according to claim 1, wherein the analyzing of the emotion category corresponding to the chat record comprises: converting the sentences in the chat record into a first matrix, and inputting the first matrix into a preset first neural network to obtain the emotion category corresponding to the chat record;
the analyzing the emotion category corresponding to the sentence input by the user comprises: converting the statement input by the user into a second matrix, and inputting the second matrix into the first neural network to obtain an emotion category corresponding to the statement input by the user;
the converting the sentence input by the user into an adjusting sentence comprises: and inputting the second matrix into a preset second neural network to obtain an adjusting statement corresponding to the statement input by the user.
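Claim 3 repeatedly relies on "converting a sentence into a matrix" before feeding it to a neural network. One common realization, sketched here under an assumed toy vocabulary and embedding size (the patent does not specify the embedding scheme), stacks one word-embedding row per token and zero-pads to a fixed length:

```python
import numpy as np

# Illustrative sentence-to-matrix conversion. EMBED_DIM, VOCAB, and the random
# embedding table are assumptions for demonstration only.
EMBED_DIM = 4
VOCAB = {"i": 0, "am": 1, "fine": 2, "not": 3}
rng = np.random.default_rng(0)
EMBEDDINGS = rng.standard_normal((len(VOCAB), EMBED_DIM))

def sentence_to_matrix(sentence: str, max_len: int = 6) -> np.ndarray:
    """Return a (max_len, EMBED_DIM) matrix: one embedding row per known token,
    zero-padded (or truncated) to a fixed length."""
    tokens = sentence.lower().split()[:max_len]
    matrix = np.zeros((max_len, EMBED_DIM))
    for i, tok in enumerate(tokens):
        if tok in VOCAB:
            matrix[i] = EMBEDDINGS[VOCAB[tok]]
    return matrix

m = sentence_to_matrix("I am not fine")
print(m.shape)  # (6, 4)
```

The fixed shape is what lets the same network accept both the first matrix (chat record) and the second matrix (user input) in the claim.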
4. The method for adjusting sentences according to claim 2, wherein the generating a reply policy according to the sentences input by the target contact in the second time period comprises:
respectively converting a plurality of sentences input by the target contact person in the second time period into matrixes, and generating a matrix set;
and inputting the matrix in the matrix set into a preset third neural network to obtain a reply strategy corresponding to the statement input by the target contact in a second preset time period.
5. The adjustment method of the sentence according to claim 3, wherein the first neural network includes a first sub-neural network and a second sub-neural network;
the inputting the first matrix into a preset first neural network to obtain the emotion category corresponding to the chat record includes:
performing a convolution operation on the first matrix through a convolutional layer of the first sub-neural network to generate a first feature matrix, and performing a pooling operation on the first feature matrix through a pooling layer of the first sub-neural network to generate a second feature matrix; converting, by an attention layer of the first sub-neural network, the second feature matrix into a third feature matrix based on a preset attention mechanism;
calculating the third feature matrix through a connecting layer in the first sub-neural network to generate probability values of all sentences in the chat records;
converting the sentence with the highest probability value in the chat records into a representative matrix, and calculating the representative matrix through a convolutional layer and a pooling layer in the second sub-neural network to generate a fourth feature matrix;
and determining the category of the fourth feature matrix according to a classifier preset in the second sub-neural network, wherein the category is used as the emotion category corresponding to the chat record.
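The convolution, pooling, and probability-producing connection-layer steps of claim 5 can be illustrated with a minimal numpy sketch. The kernel size, pooling width, and softmax output are assumptions chosen for brevity; the attention layer and the second sub-neural network of the claim are omitted.

```python
import numpy as np

def conv1d(matrix: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a (k, embed_dim) kernel over a (seq_len, embed_dim) sentence
    matrix, producing one value per window (the 'first feature matrix')."""
    k = kernel.shape[0]
    return np.array([
        np.sum(matrix[i:i + k] * kernel)
        for i in range(matrix.shape[0] - k + 1)
    ])

def max_pool(features: np.ndarray, size: int = 2) -> np.ndarray:
    """Max-pooling over non-overlapping windows (the 'second feature matrix')."""
    n = len(features) // size * size
    return features[:n].reshape(-1, size).max(axis=1)

def softmax(x: np.ndarray) -> np.ndarray:
    """Stand-in for the connection layer producing per-sentence probabilities."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
sentence = rng.standard_normal((6, 4))   # stand-in sentence matrix
kernel = rng.standard_normal((2, 4))
features = conv1d(sentence, kernel)      # 5 windows over 6 positions
pooled = max_pool(features)              # 2 pooled values
probs = softmax(pooled)                  # sums to 1
```

In the claim, the sentence with the highest such probability is then re-encoded and classified by the second sub-neural network to yield the emotion category.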
6. The sentence adjustment method of claim 3, wherein the inputting the second matrix into a preset second neural network to obtain the adjusted sentence corresponding to the sentence input by the user comprises:
encoding the second matrix through an encoder layer of the second neural network to generate an encoding matrix;
converting the encoding matrix into a first attention matrix through a local attention mechanism of an attention layer of the second neural network;
and inputting the first attention matrix and the encoding matrix into a decoder layer of the second neural network, and outputting the adjusting statement corresponding to the statement input by the user.
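The local attention step of claim 6 can be sketched as follows. This follows the general shape of local attention (scoring only a window of encoder positions around a centre), but the dot-product scoring, fixed window width, and externally supplied centre position are simplifying assumptions; the claim does not spell out these details, and published local-attention variants also learn the window centre.

```python
import numpy as np

def local_attention_context(enc_outputs: np.ndarray,
                            dec_state: np.ndarray,
                            centre: int,
                            window: int = 1) -> np.ndarray:
    """Given encoder outputs (the 'encoding matrix') and one decoder hidden
    state, attend only within a local window and return a context vector."""
    lo = max(0, centre - window)
    hi = min(len(enc_outputs), centre + window + 1)
    local = enc_outputs[lo:hi]              # window of encoder positions
    scores = local @ dec_state              # dot-product alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax -> attention weights
    return weights @ local                  # weighted sum = context vector

enc = np.eye(4)   # 4 toy encoder positions with 4-dim outputs
ctx = local_attention_context(enc, dec_state=np.ones(4), centre=1)
print(ctx.shape)  # (4,)
```

The decoder would consume this context vector together with the encoder outputs at each step when generating the adjusting statement.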
7. The sentence adjustment method of claim 4, wherein the inputting the matrix in the matrix set into a preset third neural network to obtain a reply policy corresponding to the sentence input by the target contact within a second preset time period includes:
converting, by an encoder layer of the third neural network, each matrix in the set of matrices into a feature vector;
calculating scores of the feature vectors through an attention layer of the third neural network, and generating a second attention matrix based on the scores of the feature vectors and states of hidden layers of the third neural network;
converting, by a connection layer of the third neural network, the feature vector into a fully-connected matrix based on the second attention matrix;
and inputting the fully-connected matrix into a decoder layer of the third neural network to obtain the semantic probabilities of the sentence input by the target contact, and outputting the reply strategy corresponding to the semantic with the highest semantic probability based on the preset correspondence between semantics and reply strategies.
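The final step of claim 7, choosing a reply strategy from the "preset correspondence between semantics and reply strategies," reduces to an argmax over the semantic probabilities plus a table lookup. The semantic labels and strategy texts below are invented placeholders, not values from the patent:

```python
# Hypothetical semantic-to-strategy table; the patent only says such a
# preset correspondence exists, not what it contains.
SEMANTIC_TO_STRATEGY = {
    "complaint": "apologise and offer a concrete remedy",
    "question":  "answer directly, then confirm understanding",
    "greeting":  "greet back warmly",
}

def pick_reply_strategy(semantic_probs: dict[str, float]) -> str:
    """Select the highest-probability semantic and look up its strategy."""
    best = max(semantic_probs, key=semantic_probs.get)
    return SEMANTIC_TO_STRATEGY[best]

strategy = pick_reply_strategy({"complaint": 0.7, "question": 0.2, "greeting": 0.1})
print(strategy)  # apologise and offer a concrete remedy
```

In the claimed system the probabilities come from the decoder layer of the third neural network rather than being supplied directly.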
8. An apparatus for adjusting a sentence, comprising:
the obtaining module is used for obtaining the chat records of the user and the target contact in a first time period and analyzing the emotion types corresponding to the chat records;
the first analysis module is used for receiving the sentences input by the user and analyzing the emotion types corresponding to the sentences input by the user if the emotion types corresponding to the chat records belong to a preset emotion type set;
the second analysis module is used for converting the sentence input by the user into an adjusting sentence if the emotion category corresponding to the sentence input by the user belongs to a preset emotion category set;
and the execution module is used for displaying the adjusting statement or sending the adjusting statement to the target contact person.
9. An apparatus for adjusting a sentence, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811515760.0A CN111310460B (en) | 2018-12-12 | 2018-12-12 | Statement adjusting method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111310460A true CN111310460A (en) | 2020-06-19 |
CN111310460B CN111310460B (en) | 2022-03-01 |
Family
ID=71161394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811515760.0A Active CN111310460B (en) | 2018-12-12 | 2018-12-12 | Statement adjusting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111310460B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112017758A (en) * | 2020-09-15 | 2020-12-01 | 龙马智芯(珠海横琴)科技有限公司 | Emotion recognition method and device, emotion recognition system and analysis decision terminal |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102566768A (en) * | 2010-12-13 | 2012-07-11 | 腾讯科技(深圳)有限公司 | Method and system for automatic character judgment and correction |
US20140052794A1 (en) * | 2012-08-15 | 2014-02-20 | Imvu, Inc. | System and method for increasing clarity and expressiveness in network communications |
US9043196B1 (en) * | 2014-07-07 | 2015-05-26 | Machine Zone, Inc. | Systems and methods for identifying and suggesting emoticons |
CN106599998A (en) * | 2016-12-01 | 2017-04-26 | 竹间智能科技(上海)有限公司 | Method and system for adjusting response of robot based on emotion feature |
CN107122346A (en) * | 2016-12-28 | 2017-09-01 | 平安科技(深圳)有限公司 | The error correction method and device of a kind of read statement |
CN107147557A (en) * | 2016-10-25 | 2017-09-08 | 北京小米移动软件有限公司 | Change the method and device of session information |
CN108364650A (en) * | 2018-04-18 | 2018-08-03 | 北京声智科技有限公司 | The adjusting apparatus and method of voice recognition result |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province. Applicant after: TCL Technology Group Co.,Ltd. Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District. Applicant before: TCL Corp. |
GR01 | Patent grant | ||