CN111913591A - Reply phrase generation method, pinyin input method and intelligent terminal - Google Patents


Info

Publication number
CN111913591A
Authority
CN
China
Prior art keywords
vector
words
character
reply
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010578135.1A
Other languages
Chinese (zh)
Other versions
CN111913591B (en)
Inventor
申兴发
姚健
徐胜
赵庆彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010578135.1A priority Critical patent/CN111913591B/en
Publication of CN111913591A publication Critical patent/CN111913591A/en
Application granted granted Critical
Publication of CN111913591B publication Critical patent/CN111913591B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 Character input methods
    • G06F 3/0237 Character input methods using prediction or retrieval techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a reply phrase generation method, a pinyin input method and an intelligent terminal. The method comprises the following steps: S1, acquiring the above information X input by the chat object and the character editing information P input by the user; and S2, generating a reply phrase Y according to the above information X and the character editing information P. The invention acquires not only the character editing information input by the user but also the above information input by the chat object, and generates the reply phrase from both together, so that the generated phrase better fits the current context and the matching accuracy is improved.

Description

Reply phrase generation method, pinyin input method and intelligent terminal
Technical Field
The invention relates to the field of computer input methods, in particular to a reply phrase generation method, a pinyin input method and an intelligent terminal.
Background
The computer input method is an important mode of human-computer interaction: a user enters text into a computer through an input method. To improve input efficiency, existing input methods match the characters currently typed by the user against phrases containing those characters and offer predicted phrases for selection, so that the user can enter a phrase by typing only a few characters. However, this matching considers only the currently typed characters and ignores the content of the current dialog, i.e. the above information of the conversation, so some of the offered phrases are unrelated to the current dialog and the matching accuracy is low.
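The conventional matching described above can be sketched in a few lines; the pinyin table and phrases below are hypothetical stand-ins, not data from the invention:

```python
# Hypothetical pinyin-to-phrase table; a real input method would hold
# a much larger dictionary.
candidates = {
    "chuanbo": ["传播", "船舶"],
    "chuan":   ["传", "船", "川"],
}

def match_by_prefix(typed: str) -> list[str]:
    """Return every stored phrase whose pinyin key starts with `typed`.

    This is the context-free matching the background describes: only the
    characters typed so far are considered, never the conversation.
    """
    hits = []
    for pinyin, phrases in candidates.items():
        if pinyin.startswith(typed):
            hits.extend(phrases)
    return hits

hits = match_by_prefix("chuan")   # matches every entry sharing the prefix
```

Because the match is purely by prefix, phrases unrelated to the current dialog are returned alongside relevant ones, which is exactly the limitation the invention addresses.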
Disclosure of Invention
The present invention provides a reply phrase generating method, a pinyin input method, and an intelligent terminal, aiming at the above-mentioned defects in the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows: a reply phrase generation method is constructed, and comprises the following steps:
S1, acquiring the above information X input by the chat object and the character editing information P input by the user;
and S2, generating a reply phrase Y according to the above information X and the character editing information P.
Further, in the reply phrase generating method of the present invention, the step S2 includes:
S21, converting the above information X into a hidden vector M;
and S22, generating a character vector H by the hidden vector M and the character editing information P, and converting the character vector H into the reply phrase Y.
Further, in the reply phrase generating method of the present invention, the step S21 includes:
s211, splitting the above information X into word combinations (X)1、x2、···、xn) Arranging the split words according to the original text sequence, wherein n is a positive integer;
s212, combining the words x1Calculated as a vector m1Then using the function mi=f(mi-1,xi) Sequentially carrying out vector conversion on the split words to finally obtain words xnCorresponding vector mnThe vector mnAnd taking the hidden vector M as a target vector, wherein the function f is a nonlinear conversion function in the recurrent neural network, i is a positive integer and is more than 1 and less than or equal to n.
Further, in the reply phrase generating method of the present invention, the step S22 includes:
s221, calculating a vector m1、m1、···、mnObtaining a global vector c by the weighted average vector; from the functional relation st=f([yt-1;pt],st-1C) calculating the state stWherein the function f is a nonlinear conversion function in the recurrent neural network, t is a natural number, and yt-1To generate words, p, in the reply phrase YtEditing words of the message P for the characters;
s222, calculating a state StOutput probability distribution of (1)tIn the reply phrase Y, the word YtCorresponding output probability distribution:
Ot=P(yt|y1,y2,···,yt-1,c,pt)=softmax(w0st);
wherein w0Is a preset matrix;
s223, calculating each y1、y2,···,ytAnd then, sequentially arranging the reply phrases Y.
Further, in the reply phrase generating method of the present invention, if the character editing information P contains fewer words than the reply phrase Y to be generated, a preset character is used in place of the word p_t of the character editing information P during the calculation.
If an end marker character is encountered during the calculation of step S22, the calculation ends.
Further, in the reply phrase generating method of the present invention, the character editing information P is pinyin;
the reply phrase Y is composed of Chinese characters.
Further, in the reply phrase generating method of the present invention, the character editing information P is processed by a reading gate before use, and the reading gate is defined as follows:
e_r(p_{t+1}) = ReLU(W_r · e(p_{t+1}))
where W_r is a weight matrix, ReLU is the linear rectification function, and e_r denotes the vector after passing through the reading gate.
In addition, the invention also provides a pinyin input method, which is applied to an intelligent terminal and comprises:
an input unit for inputting pinyin;
a first display unit for displaying Chinese characters corresponding to input pinyin;
a second display unit for displaying the reply phrase Y corresponding to the input pinyin; wherein the generation process of the reply phrase Y is as follows:
T1, acquiring the above information X input by the chat object and the character editing information P input by the user;
T2, sending the above information X and the character editing information P to a server;
and T3, the server generates a reply phrase Y according to the above information X and the character editing information P, and sends the reply phrase Y to the intelligent terminal.
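As a rough illustration of steps T1 to T3, the sketch below serializes X and P on the terminal and lets a stub server return Y. The JSON field names and the canned server logic are assumptions for the example; the patent does not specify a wire format:

```python
import json

def build_request(above_info_x: str, editing_info_p: str) -> str:
    """Terminal side (step T2): package X and P for transmission."""
    return json.dumps({"X": above_info_x, "P": editing_info_p},
                      ensure_ascii=False)

def handle_request(payload: str) -> str:
    """Server side (step T3): decode X and P and return the reply phrase Y."""
    data = json.loads(payload)
    # A real server would run the encoder/decoder of steps T31-T32 here;
    # this placeholder echoes the document's worked example: pinyin "chuanbo"
    # yields "传播正能量" ("propagating positive energy", illustrative).
    if data["P"] == "chuanbo":
        return "传播正能量"
    return ""

req = build_request(
    "believes that someone else is good and can harvest the good reason",
    "chuanbo")
reply_y = handle_request(req)   # sent back to the intelligent terminal
```

The design point illustrated is only that the terminal forwards both inputs and displays whatever Y the server returns; the generation itself lives server-side.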
Further, in the pinyin input method of the present invention, the first display unit further includes a third display unit for displaying the English corresponding to each Chinese character.
Further, in the pinyin input method of the present invention, the first display unit further includes a fourth display unit for displaying the pinyin corresponding to each Chinese character.
Further, in the pinyin input method of the present invention, the step T3 includes:
T31, converting the above information X into a hidden vector M;
and T32, generating a character vector H by the hidden vector M and the character editing information P, and converting the character vector H into the reply phrase Y.
Further, in the pinyin input method of the present invention, the step T31 includes:
T311, splitting the above information X into a word combination (x_1, x_2, …, x_n), arranging the split words in the original text order, where n is a positive integer;
t312, will word x1Calculated as a vector m1Then using the function mi=f(mi-1,xi) Sequentially carrying out vector conversion on the split words to finally obtain words xnCorresponding vector mnThe vector mnAnd taking the hidden vector M as a target vector, wherein the function f is a nonlinear conversion function in the recurrent neural network, i is a positive integer and is more than 1 and less than or equal to n.
Further, in the pinyin input method of the present invention, the step T32 includes:
T321, computing the weighted average of the vectors m_1, m_2, …, m_n to obtain a global vector c; computing the state s_t from the functional relation s_t = f([y_{t-1}; p_t], s_{t-1}, c), where the function f is a nonlinear transformation function in a recurrent neural network, t is a natural number, y_{t-1} is a word already generated in the reply phrase Y, and p_t is a word of the character editing information P;
T322, computing the output probability distribution O_t of the state s_t, i.e. the output probability distribution corresponding to the word y_t in the reply phrase Y:
O_t = P(y_t | y_1, y_2, …, y_{t-1}, c, p_t) = softmax(w_0 · s_t);
where w_0 is a preset matrix;
T323, after each of y_1, y_2, …, y_t has been computed, arranging them in sequence to form the reply phrase Y.
Further, in the pinyin input method of the present invention, if the character editing information P contains fewer words than the reply phrase Y to be generated, preset characters are used in place of the word p_t of the character editing information P during the calculation.
If the end marker character is encountered during the calculation of step T32, the calculation ends.
Further, in the pinyin input method of the present invention, the character editing information P is processed by a reading gate before use, and the reading gate is defined as follows:
e_r(p_{t+1}) = ReLU(W_r · e(p_{t+1}))
where W_r is a weight matrix, ReLU is the linear rectification function, and e_r denotes the vector after passing through the reading gate.
In addition, the invention also provides an intelligent terminal, and the intelligent terminal uses the pinyin input method.
The reply phrase generating method, the pinyin input method and the intelligent terminal of the present invention have the following beneficial effects: the invention acquires not only the character editing information input by the user but also the above information input by the chat object, and generates the reply phrase from both together, so that the generated phrase better fits the current context and the matching accuracy is improved.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flowchart of a reply phrase generation method provided in embodiment 1;
FIG. 2 is a schematic structural diagram of an input interface of the pinyin input method provided in embodiment 2;
FIG. 3 is a flowchart of the pinyin input method provided in embodiment 2.
Detailed Description
For a more clear understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, the reply phrase generating method of the present embodiment is applied to a chat process, such as a chat process of an instant messaging tool, where the chat includes two-person chat and multi-person chat, and chat content of each participant is displayed in a chat interface. Specifically, the reply phrase generating method includes the following steps:
and S1, acquiring the above information X input by the chat object and the character editing information P input by the user.
Specifically, acquiring the above information X in this embodiment means acquiring the chat text information already displayed in the chat interface; it may be one or more pieces of text information, in particular the text of the most recent chat content. Alternatively, if non-text information such as voice, images, video or emoticons appears in the chat interface, conversion to text is attempted; if it can be converted into text information, the converted text can also be used as chat text information, and if not, the corresponding voice, image, video or emoticon is not used as chat text information.
The character editing information P input by the user in this embodiment refers to the text being edited by the user with a keyboard, by handwriting, or the like. Alternatively, various languages may be used for chatting in this embodiment, such as Chinese, English, Japanese, Spanish, etc.; preferably, the character editing information P is pinyin and the reply phrase Y is composed of Chinese characters.
And S2, generating a reply phrase Y according to the above information X and the character editing information P.
Specifically, after the text information X and the text editing information P are acquired, vectorization processing is performed on the text information X and the text editing information P, and the processing procedure includes the following steps:
s21, converting the above information X into a hidden vector M, wherein the conversion process is as follows:
s211, splitting the above information X into word combinations (X)1、x2、···、xn) The split words are arranged according to the original text sequence, wherein n is a positive integer, and the splitting principle of the words can be carried out according to the language ruleTo ensure the complete meaning of the words. For example, if the above message X "believes that someone else is good and can harvest the good reason", then the above message X "believes that someone else is good and can harvest the good reason" corresponding word combination can be (believing, someone else, good, talent, harvest, good reason).
S212, converting the word x_1 into a vector m_1, then using the function m_i = f(m_{i-1}, x_i) to convert the split words into vectors in sequence; "in sequence" means that x_2 and the vector m_1 generate the vector m_2 through the function f, x_3 and the vector m_2 generate the vector m_3, x_4 and the vector m_3 generate the vector m_4, and so on, until the last word x_n generates the corresponding vector m_n. The vector m_n is the hidden vector M, where the function f is a nonlinear transformation function in a recurrent neural network, i is a positive integer, and 1 < i ≤ n. As can be seen from this calculation process, the hidden vector M contains all the information of the above information X.
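The recurrence of step S212 can be sketched as follows, with toy dimensions and random weights; a tanh cell stands in for the nonlinear RNN transform f, whose exact form the patent does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_hid = 8, 16                      # hypothetical embedding/state sizes
W_x = rng.normal(scale=0.1, size=(d_hid, d_emb))
W_m = rng.normal(scale=0.1, size=(d_hid, d_hid))

def f(m_prev, x_i):
    """One recurrence step m_i = f(m_{i-1}, x_i): nonlinear mix of the
    previous state and the current word vector."""
    return np.tanh(W_m @ m_prev + W_x @ x_i)

def encode(word_vectors):
    """Fold (x_1, ..., x_n) into the running state; the final state m_n
    serves as the hidden vector M."""
    m = np.zeros(d_hid)
    states = []
    for x_i in word_vectors:
        m = f(m, x_i)
        states.append(m)
    return states, m                      # m is the hidden vector M

words = [rng.normal(size=d_emb) for _ in range(5)]   # stand-in for (x_1..x_5)
states, M = encode(words)
```

Because every state feeds the next, M depends on all the words, which is why the text can say M contains all the information of X.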
And S22, generating a character vector H by the hidden vector M and the character editing information P, and converting the character vector H into a reply phrase Y.
Specifically, the generating of the text vector H from the hidden vector M and the text editing information P includes the following steps:
s221, calculating a vector m1、m1、···、mnObtaining a global vector c by the weighted average vector; from the functional relation st=f([yt-1;pt],st-1C) calculating the state stWherein the function f is a nonlinear conversion function in the recurrent neural network, t is a natural number, and yt-1To generate words in the reply phrase Y, ptThe words of the text editing information P. Alternatively, if the words of the text editing information P are less than the words generated by the reply phrase Y, the preset characters are used to replace the words P of the text editing information P in the calculation processt. If the end marker character is encountered during the calculation of step S22Then the calculation is ended. Because of the dialog generation, it is not necessary to input all the pinyins of all the words of the target sentence as auxiliary information. The number of the pinyin as the auxiliary information is not fixed.
S222, computing the output probability distribution O_t of the state s_t, i.e. the output probability distribution corresponding to the word y_t in the reply phrase Y:
O_t = P(y_t | y_1, y_2, …, y_{t-1}, c, p_t) = softmax(w_0 · s_t);
where w_0 is a preset matrix.
S223, after each of y_1, y_2, …, y_t has been computed, arranging them in sequence to form the reply phrase Y. For example, given the above information X "believes that someone else is good and can harvest the good reason", if the user inputs the pinyin "chuanbo" using the keyboard, then y_1, y_2, …, y_t are generated according to the above method and the reply phrase "propagating positive energy" is produced.
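Steps S221 to S223 can be sketched in one decoder step; the dimensions, random weights, and uniform averaging weights are illustrative assumptions, and a tanh cell again stands in for the nonlinear transform f:

```python
import numpy as np

rng = np.random.default_rng(1)
d_hid, d_word, vocab = 16, 8, 50          # hypothetical sizes

# Encoder states m_1..m_5 (stand-ins); c is their weighted average (S221).
encoder_states = [rng.normal(size=d_hid) for _ in range(5)]
weights = np.ones(5) / 5                  # uniform weights for the sketch
c = sum(w * m for w, m in zip(weights, encoder_states))

W_in = rng.normal(scale=0.1, size=(d_hid, 2 * d_word))
W_s  = rng.normal(scale=0.1, size=(d_hid, d_hid))
W_c  = rng.normal(scale=0.1, size=(d_hid, d_hid))
w_0  = rng.normal(scale=0.1, size=(vocab, d_hid))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_step(y_prev, p_t, s_prev):
    """State update s_t = f([y_{t-1}; p_t], s_{t-1}, c), then the output
    distribution O_t = softmax(w_0 s_t) over candidate words (S222)."""
    s_t = np.tanh(W_in @ np.concatenate([y_prev, p_t])
                  + W_s @ s_prev + W_c @ c)
    O_t = softmax(w_0 @ s_t)
    return s_t, O_t

s, O = decode_step(rng.normal(size=d_word),   # embedding of y_{t-1}
                   rng.normal(size=d_word),   # embedding of pinyin p_t
                   np.zeros(d_hid))
```

Picking the highest-probability word from each O_t and concatenating the picks corresponds to step S223.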
Alternatively, this embodiment uses a fixed symbol "END" to represent pinyin that carries no information. For example, when only two valid pinyin syllables are available, the remaining pinyin positions are represented by "END" until the end marker "EOS" is generated. To keep the input format consistent, the special "END" mark must still be input even when no pinyin is needed as auxiliary information; this may have additional, unknown effects on sentence generation. When pinyin is used as auxiliary information, it must pass through a reading gate, which controls the effectiveness of the pinyin; for the "END" pinyin, for example, the model need not consider its influence on the generation of candidate words. In the reply phrase generating method of this embodiment, the character editing information P is therefore processed by a reading gate before use, defined as follows:
e_r(p_{t+1}) = ReLU(W_r · e(p_{t+1}))
where W_r is a weight matrix, ReLU is the linear rectification function, and e_r denotes the vector after passing through the reading gate.
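A minimal sketch of the reading gate, assuming toy dimensions and an all-zero embedding for the "END" filler pinyin (the patent specifies only the gate itself, not how "END" is embedded):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                     # hypothetical embedding size
W_r = rng.normal(scale=0.1, size=(d, d))  # the gate's weight matrix

def relu(z):
    return np.maximum(z, 0.0)

def read_gate(e_p):
    """Gate a pinyin embedding: e_r(p) = ReLU(W_r @ e(p))."""
    return relu(W_r @ e_p)

e_valid = rng.normal(size=d)              # embedding of a real pinyin
e_end = np.zeros(d)                       # stand-in embedding for "END"

gated_valid = read_gate(e_valid)
gated_end = read_gate(e_end)              # remains all-zero: no influence
```

Under the all-zero-embedding assumption, "END" contributes nothing to the decoder state, which matches the stated purpose of letting the model ignore uninformative pinyin.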
In this embodiment, not only the character editing information input by the user but also the above information input by the chat object is acquired, and the reply phrase is generated from both together, so that the generated phrase better fits the current context and the matching accuracy is improved.
Example 2
Referring to fig. 2 and 3, the pinyin input method of the embodiment is applied to an intelligent terminal, where the intelligent terminal includes but is not limited to a smart phone, a tablet computer, a notebook computer, a desktop computer, a vehicle-mounted computer, an intelligent appliance, and the like, and the pinyin input method includes:
an input unit 500 for inputting pinyin;
a first display unit 100 for displaying Chinese characters corresponding to an input pinyin;
a second display unit 200 for displaying the reply phrase Y corresponding to the input pinyin; wherein the generation process of the reply phrase Y is as follows:
T1, acquiring the above information X input by the chat object and the character editing information P input by the user.
Specifically, acquiring the above information X in this embodiment means acquiring the chat text information already displayed in the chat interface; it may be one or more pieces of text information, in particular the text of the most recent chat content. Alternatively, if non-text information such as voice, images, video or emoticons appears in the chat interface, conversion to text is attempted; if it can be converted into text information, the converted text can also be used as chat text information, and if not, the corresponding voice, image, video or emoticon is not used as chat text information.
The character editing information P input by the user in this embodiment refers to the text being edited by the user with a keyboard, by handwriting, or the like. Alternatively, various languages may be used for chatting in this embodiment, such as Chinese, English, Japanese, Spanish, etc.; preferably, the character editing information P is pinyin and the reply phrase Y is composed of Chinese characters.
And T2, sending the above information X and the character editing information P to the server.
And T3, the server generates a reply phrase Y according to the above information X and the character editing information P, and sends the reply phrase Y to the intelligent terminal.
Specifically, after the text information X and the text editing information P are acquired, vectorization processing is performed on the text information X and the text editing information P, and the processing procedure includes the following steps:
T31, converting the above information X into a hidden vector M, wherein the conversion process is as follows:
T311, splitting the above information X into a word combination (x_1, x_2, …, x_n), with the split words arranged in the original text order, where n is a positive integer. The words may be split according to linguistic rules so as to preserve their complete meaning. For example, if the above information X is "believes that someone else is good and can harvest the good reason", the corresponding word combination may be (believing, someone else, good, talent, harvest, good reason).
T312, converting the word x_1 into a vector m_1, then using the function m_i = f(m_{i-1}, x_i) to convert the split words into vectors in sequence; "in sequence" means that x_2 and the vector m_1 generate the vector m_2 through the function f, x_3 and the vector m_2 generate the vector m_3, x_4 and the vector m_3 generate the vector m_4, and so on, until the last word x_n generates the corresponding vector m_n. The vector m_n is the hidden vector M, where the function f is a nonlinear transformation function in a recurrent neural network, i is a positive integer, and 1 < i ≤ n. As can be seen from this calculation process, the hidden vector M contains all the information of the above information X.
And T32, generating a character vector H by the hidden vector M and the character editing information P, and converting the character vector H into a reply phrase Y.
Specifically, the generating of the text vector H from the hidden vector M and the text editing information P includes the following steps:
alternatively, in the pinyin input method of the present embodiment, step T32 includes:
T321, computing the weighted average of the vectors m_1, m_2, …, m_n to obtain a global vector c; computing the state s_t from the functional relation s_t = f([y_{t-1}; p_t], s_{t-1}, c), where the function f is a nonlinear transformation function in a recurrent neural network, t is a natural number, y_{t-1} is a word already generated in the reply phrase Y, and p_t is a word of the character editing information P.
T322, computing the output probability distribution O_t of the state s_t, i.e. the output probability distribution corresponding to the word y_t in the reply phrase Y:
O_t = P(y_t | y_1, y_2, …, y_{t-1}, c, p_t) = softmax(w_0 · s_t);
where w_0 is a preset matrix formed during the training of the input method model.
T323, after each of y_1, y_2, …, y_t has been computed, arranging them in sequence to form the reply phrase Y. For example, given the above information X "believes that someone else is good and can harvest the good reason", if the user inputs the pinyin "chuanbo" using the keyboard, then y_1, y_2, …, y_t are generated according to the above method and the reply phrase "propagating positive energy" is produced.
Alternatively, in the pinyin input method of this embodiment, if the character editing information P contains fewer words than the reply phrase Y to be generated, preset characters are used in place of the word p_t of the character editing information P during the calculation. If the end marker character is encountered during the calculation of step T32, the calculation ends.
Alternatively, the first display unit 100 in the pinyin input method of this embodiment further includes a third display unit 300 for displaying the English corresponding to each Chinese character. It can be understood that the database used by the input method then contains the correspondence between Chinese characters and English.
Alternatively, the first display unit 100 in the pinyin input method of this embodiment further includes a fourth display unit 400 for displaying the pinyin corresponding to each Chinese character. It can be understood that the database used by the input method then contains the correspondence between Chinese characters and pinyin.
Alternatively, this embodiment uses a fixed symbol "END" to represent pinyin that carries no information. For example, when only two valid pinyin syllables are available, the remaining pinyin positions are represented by "END" until the end marker "EOS" is generated. To keep the input format consistent, the special "END" mark must still be input even when no pinyin is needed as auxiliary information; this may have additional, unknown effects on sentence generation. When pinyin is used as auxiliary information, it must pass through a reading gate, which controls the effectiveness of the pinyin; for the "END" pinyin, for example, the model need not consider its influence on the generation of candidate words. In the pinyin input method of this embodiment, the character editing information P is therefore processed by a reading gate before use, defined as follows:
e_r(p_{t+1}) = ReLU(W_r · e(p_{t+1}))
where W_r is a weight matrix, ReLU is the linear rectification function, and e_r denotes the vector after passing through the reading gate.
In this embodiment, not only the character editing information input by the user but also the above information input by the chat object is acquired, and the reply phrase is generated from both together, so that the generated phrase better fits the current context and the matching accuracy is improved.
Example 3
The intelligent terminal of the present embodiment uses the pinyin input method as described above. Alternatively, the smart terminal includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a vehicle-mounted computer, a smart appliance, and the like.
In this embodiment, not only the character editing information input by the user but also the above information input by the chat object is acquired, and the reply phrase is generated from both together, so that the generated phrase better fits the current context and the matching accuracy is improved.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes and modifications made within the scope of the claims of the present invention should be covered by the claims of the present invention.

Claims (16)

1. A method for generating a reply phrase, comprising:
s1, acquiring the above information X input by the chat object and the character editing information P input by the user;
and S2, generating a reply phrase Y according to the above information X and the character editing information P.
2. The reply phrase generation method according to claim 1, wherein the step S2 includes:
s21, converting the above information X into a hidden vector M;
and S22, generating a character vector H by the hidden vector M and the character editing information P, and converting the character vector H into the reply phrase Y.
3. The reply phrase generation method according to claim 2, wherein the step S21 includes:
S211, splitting the above information X into a word sequence (x_1, x_2, ···, x_n), arranging the split words in the original textual order, wherein n is a positive integer;
S212, converting the word x_1 into a vector m_1, then using the function m_i = f(m_{i-1}, x_i) to sequentially convert the split words into vectors, finally obtaining the vector m_n corresponding to the word x_n, and taking the vector m_n as the hidden vector M, wherein the function f is a nonlinear transformation function in the recurrent neural network, i is a positive integer, and 1 < i ≤ n.
4. The reply phrase generation method according to claim 3, wherein the step S22 includes:
S221, computing the weighted average of the vectors m_1, m_2, ···, m_n to obtain a global vector c; computing the state s_t from the relation s_t = f([y_{t-1}; p_t], s_{t-1}, c), wherein the function f is a nonlinear transformation function in the recurrent neural network, t is a natural number, y_{t-1} is a generated word in the reply phrase Y, and p_t is a word of the character editing information P;
S222, computing the output probability distribution O_t of the state s_t, that is, the output probability distribution corresponding to the word y_t in the reply phrase Y:
O_t = P(y_t | y_1, y_2, ···, y_{t-1}, c, p_t) = softmax(w_0 s_t);
wherein w_0 is a preset matrix;
S223, after each of y_1, y_2, ···, y_t has been computed, arranging them in sequence to obtain the reply phrase Y.
5. The reply phrase generation method according to claim 4, wherein if the character editing information P contains fewer words than the reply phrase Y generates, the word p_t of the character editing information P is replaced by a preset character during the calculation;
If an end marker character is encountered in the calculation process of step S22, the calculation is ended.
6. The reply phrase generation method according to claim 5, wherein the character editing information P is pinyin;
the reply phrase Y is a Chinese character.
7. The reply phrase generation method according to claim 6, wherein the character editing information P is processed before use by a reading gate defined as follows:
e_r(p_{t+1}) = ReLU(W_r * e(p_{t+1}))
wherein W_r is a weight matrix, ReLU is the linear rectification function, and e_r denotes the vector after passing through the reading gate.
8. A pinyin input method, applied to an intelligent terminal, characterized by comprising:
an input unit for inputting pinyin;
a first display unit for displaying Chinese characters corresponding to input pinyin;
a second display unit for displaying the reply phrase Y corresponding to the input pinyin; wherein the generation process of the reply phrase Y is as follows:
t1, acquiring the above information X input by the chat object and the character editing information P input by the user;
t2, sending the above information X and the character editing information P to a server;
and T3, the server generates a reply phrase Y according to the above information X and the character editing information P, and sends the reply phrase Y to the intelligent terminal.
9. The pinyin input method according to claim 8, wherein the first display unit further includes a third display unit for displaying each Chinese character in English.
10. The pinyin input method according to claim 8, wherein the first display unit further includes a fourth display unit for displaying the pinyin corresponding to each Chinese character.
11. The pinyin input method of claim 8, wherein the step T3 includes:
t31, converting the above information X into a hidden vector M;
and T32, generating a character vector H by the hidden vector M and the character editing information P, and converting the character vector H into the reply phrase Y.
12. The pinyin input method of claim 11, wherein the step T31 includes:
T311, splitting the above information X into a word sequence (x_1, x_2, ···, x_n), arranging the split words in the original textual order, wherein n is a positive integer;
T312, converting the word x_1 into a vector m_1, then using the function m_i = f(m_{i-1}, x_i) to sequentially convert the split words into vectors, finally obtaining the vector m_n corresponding to the word x_n, and taking the vector m_n as the hidden vector M, wherein the function f is a nonlinear transformation function in the recurrent neural network, i is a positive integer, and 1 < i ≤ n.
13. The pinyin input method of claim 12, wherein the step T32 includes:
T321, computing the weighted average of the vectors m_1, m_2, ···, m_n to obtain a global vector c; computing the state s_t from the relation s_t = f([y_{t-1}; p_t], s_{t-1}, c), wherein the function f is a nonlinear transformation function in the recurrent neural network, t is a natural number, y_{t-1} is a generated word in the reply phrase Y, and p_t is a word of the character editing information P;
T322, computing the output probability distribution O_t of the state s_t, that is, the output probability distribution corresponding to the word y_t in the reply phrase Y:
O_t = P(y_t | y_1, y_2, ···, y_{t-1}, c, p_t) = softmax(w_0 s_t);
wherein w_0 is a preset matrix;
T323, after each of y_1, y_2, ···, y_t has been computed, arranging them in sequence to obtain the reply phrase Y.
14. The pinyin input method according to claim 13, wherein if the character editing information P contains fewer words than the reply phrase Y generates, the word p_t of the character editing information P is replaced by a preset character during the calculation;
If the end marker character is encountered in the calculation process of the step T32, the calculation is ended.
15. The pinyin input method according to claim 14, wherein the character editing information P is processed before use by a reading gate defined as follows:
e_r(p_{t+1}) = ReLU(W_r * e(p_{t+1}))
wherein W_r is a weight matrix, ReLU is the linear rectification function, and e_r denotes the vector after passing through the reading gate.
16. An intelligent terminal, characterized in that the intelligent terminal uses the pinyin input method according to any one of claims 8 to 15.
CN202010578135.1A 2020-06-23 2020-06-23 Reply phrase generation method, pinyin input method and intelligent terminal Active CN111913591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010578135.1A CN111913591B (en) 2020-06-23 2020-06-23 Reply phrase generation method, pinyin input method and intelligent terminal

Publications (2)

Publication Number Publication Date
CN111913591A true CN111913591A (en) 2020-11-10
CN111913591B CN111913591B (en) 2023-10-20

Family

ID=73226457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010578135.1A Active CN111913591B (en) 2020-06-23 2020-06-23 Reply phrase generation method, pinyin input method and intelligent terminal

Country Status (1)

Country Link
CN (1) CN111913591B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101013443A (en) * 2007-02-13 2007-08-08 北京搜狗科技发展有限公司 Intelligent word input method and input method system and updating method thereof
CN109039270A (en) * 2018-09-21 2018-12-18 杭州电子科技大学 A kind of header box circuit of intelligent photovoltaic power station analog meter
CN208971472U (en) * 2018-09-21 2019-06-11 杭州电子科技大学 The Array Control Circuit of intelligent photovoltaic power station analog meter

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG, LILI; ZHAO, DEBIN: "Design and Implementation of an Adaptive Entropy Coder Based on Composite Context", Computer Applications and Software, no. 06 *

Also Published As

Publication number Publication date
CN111913591B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
Patel et al. ES2ISL: an advancement in speech to sign language translation using 3D avatar animator
CN111563390B (en) Text generation method and device and electronic equipment
US20220300718A1 (en) Method, system, electronic device and storage medium for clarification question generation
López-Ludeña et al. Methodology for developing an advanced communications system for the Deaf in a new domain
CN115309877A (en) Dialog generation method, dialog model training method and device
CN112711943B (en) Uygur language identification method, device and storage medium
CN111913591B (en) Reply phrase generation method, pinyin input method and intelligent terminal
CN115688703B (en) Text error correction method, storage medium and device in specific field
Monga et al. Speech to Indian Sign Language Translator
Nanayakkara et al. Context aware back-transliteration from english to sinhala
CN113705251A (en) Training method of machine translation model, language translation method and equipment
Agarwal et al. Generative Chatbot adaptation for Odia language: a critical evaluation
Gehrmann et al. TaTa: A multilingual table-to-text dataset for African languages
Singvongsa et al. Lao-Thai machine translation using statistical model
Zhang et al. Normalization of homophonic words in chinese microblogs
Suzuki et al. Automatic emoticon generation method for web community
JP3961858B2 (en) Transliteration device and program thereof
CN117174240B (en) Medical image report generation method based on large model field migration
CN112818108B (en) Text semantic misinterpretation chat robot based on shape and near words and data processing method thereof
Linn et al. Part of speech tagging for kayah language using hidden markov model
CN117273014B (en) Cross-domain semantic analysis method based on transfer learning
Giri et al. English Kashmiri Machine Translation System related to Tourism Domain
CN116312490A (en) Method and device for predicting pronunciation of polyphones
CN115859994A (en) Short text-based question-answering system-oriented entity linking method and device
Sosamphan SNET: a statistical normalisation method for Twitter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant