CN107609092B - Intelligent response method and device - Google Patents


Info

Publication number
CN107609092B
CN107609092B (application CN201710805691.6A, publication CN107609092A)
Authority
CN
China
Prior art keywords
information
intelligent
answer
user
dialogue
Prior art date
Legal status
Active
Application number
CN201710805691.6A
Other languages
Chinese (zh)
Other versions
CN107609092A (en)
Inventor
董晋杰
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710805691.6A priority Critical patent/CN107609092B/en
Publication of CN107609092A publication Critical patent/CN107609092A/en
Application granted granted Critical
Publication of CN107609092B publication Critical patent/CN107609092B/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an intelligent response method and device. One embodiment of the method comprises: determining the current state of the input method; in response to the current state being a first intelligent response state, parsing the user's current input information; inputting the semantic information obtained by the parsing into an answer candidate sentence generation model to obtain intelligent answer candidate sentence information; and in response to the user selecting one of the at least one piece of intelligent answer candidate sentence information obtained, taking the selected piece as the intelligent answer information. The method and device help improve the communication efficiency between the user and the dialogue party and reduce the time the user spends entering information through the input method, thereby helping reduce the power consumption of the electronic device and prolong its service life.

Description

Intelligent response method and device
Technical Field
The application relates to the technical field of computers, in particular to the technical field of input methods, and particularly relates to an intelligent response method and device.
Background
With the rapid development of the mobile internet and the growth of its user base, social products installed on mobile terminals have become users' most frequently used applications, almost necessities of daily life. Accordingly, the input method on the mobile terminal has gradually become the most frequently used tool-class application on the device. In the mobile social era, providing users with more convenience in social communication has become a major breakthrough point for input methods on mobile terminals.
Unlike the conventional input method on a personal computer (PC), the input method on a mobile terminal is used mostly within mobile social applications. Rather than writing long documents or working manuscripts, it is used mainly to enter the text of conversations with other people. Therefore, beyond meeting the traditional requirements of an input method, such as convenient input, error correction and association, how to make input in social scenarios more efficient and more personalized is a problem that input methods on mobile terminals urgently need to solve.
Disclosure of Invention
The present application aims to provide an improved intelligent response method and apparatus to solve the technical problems mentioned in the background section above.
In a first aspect, the present application provides an intelligent response method, including: determining the current state of the input method; in response to the current state being a first intelligent response state, parsing the user's current input information; inputting the semantic information obtained by the parsing into an answer candidate sentence generation model to obtain intelligent answer candidate sentence information; and in response to the user selecting one of the at least one piece of intelligent answer candidate sentence information obtained, taking the selected piece as the intelligent answer information.
In some embodiments, the answer candidate sentence generation model is trained based on the user historical answer information as a training sample.
In some embodiments, the method further comprises: and responding to the selection operation of the user on the presented dialogue information, and calling the pre-associated search application to obtain a search result corresponding to the selected dialogue information.
In some embodiments, in response to a user selection operation of the presented dialog information, invoking a pre-associated search application to obtain a search result corresponding to the selected dialog information includes: determining semantic feature information from the selected dialog information; and calling a pre-associated search application, and taking the semantic feature information as search input information to obtain a search result corresponding to the selected dialogue information.
In some embodiments, the method further comprises: responding to the second intelligent response state of the current state, and analyzing the dialogue information sent by the dialogue party; and inputting an analysis result obtained by analyzing the dialogue information sent by the dialogue party into a pre-trained automatic answer sentence generation model to obtain an answer sentence.
In some embodiments, the auto-answer sentence generation model is trained as a training sample based on historical dialogue information of the user and the dialogue party.
In some embodiments, inputting a parsing result obtained by parsing the dialogue information sent by the dialogue party into a pre-trained auto-answer sentence generation model to obtain an answer sentence, including: and if the analysis result comprises preset keywords, generating reminding information.
In a second aspect, the present application provides an intelligent answering device, comprising: a determining unit for determining the current state of the input method; a parsing unit for parsing the user's current input information in response to the current state being a first intelligent response state; a first generation unit for inputting the semantic information obtained by the parsing into the answer candidate sentence generation model to obtain intelligent answer candidate sentence information; and an intelligent answer information generation unit for taking the selected piece of intelligent answer candidate sentence information as the intelligent answer information in response to the user selecting one of the at least one piece obtained.
In some embodiments, the answer candidate sentence generation model is trained based on the user historical answer information as a training sample.
In some embodiments, the apparatus further comprises: and the search calling unit is used for responding to the selection operation of the user on the presented dialogue information and calling the pre-associated search application to obtain a search result corresponding to the selected dialogue information.
In some embodiments, the search invocation unit is further to: determining semantic feature information from the selected dialog information; and calling a pre-associated search application, and taking the semantic feature information as search input information to obtain a search result corresponding to the selected dialogue information.
In some embodiments, the parsing unit is further configured to parse, in response to the current state being the second intelligent response state, the dialog information sent by the dialog party; the device also comprises a response sentence generating unit which is used for inputting an analysis result obtained by analyzing the dialogue information sent by the dialogue party into a pre-trained automatic response sentence generating model so as to obtain a response sentence.
In some embodiments, the auto-answer sentence generation model is trained as a training sample based on historical dialogue information of the user and the dialogue party.
In some embodiments, the answer sentence generation unit is further to: and if the analysis result comprises preset keywords, generating reminding information.
In a third aspect, the present application provides an electronic device, comprising: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement the intelligent answering method as described above.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the intelligent answering method as above.
According to the intelligent response method and device, when the current state of the input method is a first intelligent response state (for example, a semi-automatic response state), the user's current input information is parsed to obtain semantic information, the semantic information is input into the answer candidate sentence generation model to obtain matching intelligent answer candidate sentence information, and the intelligent answer information is finally determined from among the candidates based on the user's selection. In this way, the user's current input information can be expanded and associated in a targeted manner, which improves the communication efficiency between the user and the dialogue party and shortens the time the user spends entering information with the input method, thereby reducing the power consumption of the electronic device and prolonging its service life.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an intelligent response method according to the present application;
FIG. 3 is a flow chart of yet another embodiment of an intelligent response method according to the present application;
fig. 4A to 4C are schematic views of application scenarios of the intelligent response method according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of a smart responder according to the present application;
fig. 6 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the smart answering method or the smart answering device of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104, to receive or send messages and the like. Various communication client applications, such as instant messaging tools, web browser applications, search applications and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and an input method application installed, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a server that provides support for input method applications installed on the terminal apparatuses 101, 102, 103. The server may parse information input by the user through the input method application and feed back the parsing result to the terminal devices 101, 102, 103.
It should be noted that the intelligent response method provided in the embodiments of the present application is generally executed by the terminal devices 101, 102 and 103; alternatively, one part of the method may be executed by the terminal devices 101, 102, 103 and another part by the server 105. Accordingly, the intelligent answering device is typically provided in the terminal devices 101, 102, 103, or one part is provided in the terminal devices 101, 102, 103 and another part in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an intelligent answering method according to the present application is shown. The intelligent response method comprises the following steps:
at step 210, the current state of the input method is determined.
In this embodiment, an electronic device (e.g., a terminal device or server shown in fig. 1) on which the intelligent response method operates may receive information input by a user through an input method and present that information in the interface of an application using the input method (e.g., a social application, an instant messaging application, etc.).
In some optional implementations of the embodiment, the operation interface applied to the input method may provide a control operable by the user in addition to characters such as pinyin, letters, numbers, symbols and the like to be input by the user. The user can modify the current state of the input method by operating these controls.
In some application scenarios of these alternative implementations, the current state of the input method may be a state for characterizing an association between an input operation of a user and output information presented on the input method operation interface.
Alternatively, in other application scenarios of these alternative implementations, the current state of the input method may also be used to characterize the state of the association between the information received by the instant messaging application using the input method and the output information presented on the input method operation interface.
In other alternative implementations of this embodiment, the electronic device to which the intelligent response method of this embodiment is applied may also determine the current state of the input method by detecting an operation of the electronic device by the user within a predetermined period of time.
For example, in some application scenarios of these alternative implementations, if the user has not operated the electronic device for a period of time (e.g., 5 minutes), the current state of the input method may be determined to be an "auto-answer" state. If the user operates the electronic device for a period of time, it may be determined that the current state of the input method is a "semi-auto answer" state or a "manual answer" state.
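The idle-time heuristic above can be sketched as follows. This is a minimal illustration: the 5-minute threshold and the state names come from the example in the text, while the function and timestamp convention are assumptions for the sketch.

```python
import time

IDLE_THRESHOLD_SECONDS = 5 * 60  # example threshold from the text: 5 minutes


def determine_input_method_state(last_operation_ts, now=None):
    """Map the time since the user's last operation to an input-method state.

    Returns "auto_answer" when the user has been idle at least as long as
    the threshold, otherwise "semi_auto_answer"; a "manual_answer" state
    could be distinguished with additional signals.
    """
    now = time.time() if now is None else now
    idle = now - last_operation_ts
    return "auto_answer" if idle >= IDLE_THRESHOLD_SECONDS else "semi_auto_answer"
```

In practice the timestamp would be refreshed on every touch or key event, and the state change would also be exposed through the operable controls mentioned earlier.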
Step 220, in response to the current state being the first intelligent response state, analyzing the current input information of the user.
Here, the first intelligent response state may be, for example, the "semi-automatic answer" state described above. When the input method is in the semi-automatic answer state, it can expand and supplement the information input by the user, reducing the user's input workload and improving interaction efficiency when the user communicates with others through an instant messaging tool.
In some alternative implementations, existing and/or yet to be developed natural language processing techniques may be utilized to parse the user's current input information.
In some application scenarios, for example, if the current input information of the user is text information, the current input information of the user may be segmented (for example, the current input information is segmented by using a full segmentation algorithm), and then a keyword of the currently input text information is determined from a segmentation result. Alternatively, a Word vector corresponding to the currently input text information may be generated based on the Word segmentation result (for example, Word2vec algorithm is used to generate the Word vector), so as to complete the parsing of the currently input information.
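A toy sketch of this parsing step is below. Whitespace tokenization stands in for real Chinese word segmentation, and a stop-word filter stands in for keyword scoring; as the text notes, a production system would use a full-segmentation algorithm and Word2vec embeddings instead. The stop-word list is invented for illustration.

```python
STOP_WORDS = {"the", "a", "an", "is", "are", "i", "we", "to"}  # assumed list


def segment(text):
    # Placeholder for a real segmenter (e.g. a full-segmentation
    # algorithm for Chinese text).
    return text.lower().split()


def extract_keywords(text):
    # Keep tokens that are not stop words; a crude stand-in for
    # keyword determination from the segmentation result.
    return [tok for tok in segment(text) if tok not in STOP_WORDS]
```

The keyword list (or, alternatively, a word vector built from the tokens) is the "semantic information" that the next step feeds into the candidate sentence generation model.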
In other application scenarios, if the current input information of the user is image information, the feature of the image information may be extracted through an image feature extraction algorithm, or characters presented by the image information may be identified by using an Optical Character Recognition (OCR) technique, and then the characters may be analyzed by using the above analysis method.
Step 230, inputting semantic information obtained by analyzing the current input information into the candidate sentence generation model for response to obtain the intelligent candidate sentence information for response.
In some alternative implementations, the answer candidate statement generation model may be a model built based on a rule base. Semantic information and intelligent answer candidate sentences can be stored in the rule base in a pre-association mode. The semantic information analyzed in step 220 is input into the rule base, and intelligent answer candidate statement information matched with the semantic information can be correspondingly determined.
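A minimal sketch of such a rule-base lookup follows; the rule keys and candidate sentences are invented for illustration, and a real rule base would be far larger and support fuzzier matching.

```python
# Hypothetical rule base: semantic keys pre-associated with candidate replies.
RULE_BASE = {
    ("dinner",): ["How about dinner at seven?", "Shall we eat together tonight?"],
    ("late",): ["Sorry, I will be a bit late.", "Running late, see you soon."],
}


def generate_candidates(semantic_keywords):
    """Return every candidate answer sentence whose rule key is covered
    by the parsed semantic keywords; an empty list if nothing matches."""
    candidates = []
    keywords = set(semantic_keywords)
    for key, sentences in RULE_BASE.items():
        if set(key) <= keywords:
            candidates.extend(sentences)
    return candidates
```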
Alternatively, in other alternative implementations, the answer candidate sentence generation model may be a pre-trained multi-layer neural network model. Here, the multi-layer Neural Network model may be an existing or future-developed Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Deep Neural Network (DNN), or the like.
The multi-layer neural network model is trained in advance by using the sample data, so that the trained model can simulate the corresponding relation between input and output in the sample data, and corresponding output is obtained based on the input of the model.
In addition, in this step, the answer candidate sentence generation model can produce at least one complete sentence as the answer candidate sentence information based on the semantic information input into the model, which greatly improves the efficiency of the user's communication with the dialogue party when using the input method.
In some optional implementations of the present embodiment, the answer candidate sentence generation model may be provided on an electronic device to which the intelligent answer method of the present embodiment is applied. In these optional implementation manners, after the semantic information is obtained through analysis, the semantic information may be directly input into the candidate sentence generating model for response, so as to obtain corresponding intelligent candidate sentence information for response.
In other alternative implementations of this embodiment, the answer candidate sentence generation model may be provided in a server communicatively connected to the electronic device to which the intelligent response method is applied. In these implementations, after obtaining the semantic information by parsing, the electronic device may upload it to the server; the server inputs it into the answer candidate sentence generation model to obtain the corresponding intelligent answer candidate sentence information, and then sends the result back to the electronic device.
Further, in some alternative implementations, the answer candidate sentence generation model may be updated such that the answer candidate sentence generation model may generate intelligent answer candidate sentences according to current environmental information (e.g., including but not limited to current weather information, traffic information, real-time news information, etc.).
And step 240, in response to the user selecting one of the at least one piece of intelligent answer candidate sentence information obtained, taking the selected piece as the intelligent answer information.
It is understood that, in some application scenarios, there may be more than one piece of intelligent answer candidate sentence information corresponding to the semantic information obtained through step 230. The user can select from among the multiple pieces of intelligent answer candidate sentence information presented on the electronic device, thereby determining the final intelligent answer information.
Alternatively, in other application scenarios, even if only one piece of intelligent answer candidate sentence information corresponding to the semantic information is obtained in step 230, if the user does not select the piece presented on the electronic device, it will not be used as the final intelligent answer information.
In the intelligent response method of this embodiment, when the current state of the input method is the first intelligent response state (for example, a semi-automatic response state), the user's current input information is parsed to obtain semantic information, the semantic information is input into the answer candidate sentence generation model to obtain matching intelligent answer candidate sentence information, and the intelligent answer information is finally determined from among the candidates based on the user's selection. In this way, the user's current input information can be expanded and associated in a targeted manner, which improves the communication efficiency between the user and the dialogue party and shortens the time the user spends entering information with the input method, thereby reducing the power consumption of the electronic device and prolonging its service life.
In some optional implementation manners of this embodiment, if the candidate sentence response generation model is a multilayer neural network model, before the multilayer neural network model is used to generate the intelligent candidate sentence response information, the multilayer neural network model may be trained.
For example, in some application scenarios, the answer candidate sentence generation model is provided in the electronic device. In the process of inputting by a user through an input method, the electronic equipment can collect historical input information input by the user and train the multilayer neural network model by taking the historical input information as a training sample. Specifically, each piece of historical input information can be split into a front part and a rear part according to semantics, the front part of one piece of historical input information is used as an input sample, and the rear part of the same piece of historical input information is used as an output sample to train the multilayer neural network model, so that the multilayer neural network model can learn the language style and habit of the user.
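The front/back split described above can be sketched like this. The text leaves the semantic split point unspecified, so this sketch splits at the token midpoint purely for illustration; a real implementation would split according to semantics.

```python
def make_training_pairs(history):
    """Split each historical input into a front part (input sample) and a
    back part (output sample) for training the candidate sentence model.

    The token-midpoint split here is a crude stand-in for the semantic
    split described in the text.
    """
    pairs = []
    for sentence in history:
        tokens = sentence.split()
        if len(tokens) < 2:
            continue  # too short to split into two parts
        mid = len(tokens) // 2
        pairs.append((" ".join(tokens[:mid]), " ".join(tokens[mid:])))
    return pairs
```

Trained on many such pairs, the model learns to continue the user's own phrasing, which is how it can pick up the user's language style and habits.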
Alternatively, in other application scenarios, the answer candidate sentence generation model is provided in a server communicatively connected to the electronic device. The server can obtain historical input information of a plurality of users who input by using the input method, and train the multilayer neural network model by using the historical input information as training samples. In this way, a larger amount of training sample data can be obtained, so that the training process of the answer candidate sentence generation model can be shortened. In addition, due to the diversity of the training sample data sources, the robustness of the response candidate sentence generation model obtained through training can be stronger.
In some optional implementations, the intelligent response method of this embodiment may further include:
in response to the user's selection operation of the presented dialog information, a pre-associated search application is invoked to obtain a search result corresponding to the selected dialog information, step 250.
For example, suppose a user converses with a dialogue party through an instant messaging application; the information entered by both parties is presented on the screen of the electronic device accordingly. If the user selects a certain piece of dialogue information through a selection operation (for example, a long press), the user can be considered to have a search intention regarding that piece of information. A pre-associated search application can then be invoked to perform a search on that information and obtain the corresponding search result.
In some application scenarios of these alternative implementations, the above step 250 may be implemented as follows:
at step 251, semantic feature information is determined from the selected session information.
Step 252, a search application associated in advance is called, and the semantic feature information is used as search input information to obtain a search result corresponding to the selected dialog information.
In these application scenarios, the semantic feature information determined from the selected dialog information may be obtained, for example, in a manner similar to parsing the current input information of the user in step 220 as described above.
By using the semantic feature information as the search input information, the matching degree of the search result and the selected dialogue information can be higher, and the search result related to the dialogue information can be obtained accurately.
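Steps 251 and 252 can be sketched as follows. The search application is represented by a hypothetical callable, and the feature extraction is a naive keyword filter standing in for the semantic-feature step; all names here are assumptions for the sketch.

```python
STOP_WORDS = {"the", "a", "is", "to", "at"}  # assumed list


def search_selected_dialog(dialog_text, search_app):
    """Derive semantic feature information from the selected dialogue
    message and pass it as the query to a pre-associated search
    application.

    `search_app` is any callable taking a query string and returning
    search results.
    """
    features = [t for t in dialog_text.lower().split() if t not in STOP_WORDS]
    query = " ".join(features)
    return search_app(query)


# Usage with a stub search application:
results = search_selected_dialog(
    "the concert is at eight", lambda q: f"results for: {q}")
```

Passing the extracted features rather than the raw message is what keeps the search result closely matched to the selected dialogue information.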
Referring to fig. 3, a schematic flow chart diagram 300 of another embodiment of the intelligent response method of the present application is shown.
The intelligent response method of the embodiment may include:
at step 310, the current state of the input method is determined.
And step 320, responding to the current state being the first intelligent response state, and analyzing the current input information of the user.
Step 330, inputting semantic information obtained by analyzing the current input information into the candidate answer sentence generation model to obtain the intelligent candidate answer sentence information.
Step 340, in response to the user selecting one of the at least one piece of intelligent answer candidate sentence information obtained, taking the selected piece as the intelligent answer information.
The steps 310 to 340 can be performed in a manner similar to the steps 210 to 240 in the embodiment shown in fig. 2, and are not described herein again.
Further, the intelligent response method of this embodiment may further include:
and step 350, responding to the current state being the second intelligent response state, and analyzing the dialogue information sent by the dialogue party.
Here, the second smart answer state is another current state of the input method that is different from the first smart answer state.
In some alternative implementations, the second intelligent answer state may be, for example, an "auto answer" state. In this state, the input method itself can complete the dialogue exchange with the dialogue party in the instant messaging application without the user's own involvement.
In a manner similar to the manner of determining that the input method is in the first intelligent response state in the embodiment shown in fig. 2, in this step, it may also be determined whether the current state of the input method is in the second intelligent response state based on the user operating a certain preset control. Alternatively, it may be determined whether the current state of the input method is the second smart response state based on whether the user has operated the electronic device within a predetermined period of time.
And step 360, inputting an analysis result obtained by analyzing the dialogue information sent by the dialogue party into a pre-trained automatic answer sentence generation model to obtain an answer sentence.
Similarly to the answer candidate sentence generation model, the auto-answer sentence generation model in this step may also be a multi-layer neural network model trained in advance.
In some optional implementations, the automatic answer sentence generation model may be provided in the electronic device to which the intelligent answer method of this embodiment is applied. In these implementations, the model may be trained using the historical dialogue information between the user of the electronic device and dialogue parties as training samples. Specifically, the input information of a dialogue party in the historical dialogue information may serve as an input sample, and the user's corresponding input information as the output sample, so that the trained automatic answer sentence generation model better simulates the language habits and style of the user of the electronic device.
In other optional implementations of this embodiment, the automatic answer sentence generation model may be provided in a server communicatively connected to the electronic device to which the intelligent answer method of this embodiment is applied. In these implementations, the server may collect the historical dialogue information between the users of a plurality of electronic devices and their dialogue parties as training samples, so that a larger number of training samples is available and the training process of the automatic answer sentence generation model can be shortened. In addition, the diversity of the training data sources makes the trained automatic answer sentence generation model more robust.
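The construction of training samples described above — a dialogue party's message as the input sample and the user's immediately following reply as the output sample — can be sketched as follows. The chronological log format (speaker, text) is an assumption of this sketch, not something the patent specifies.

```python
def build_training_pairs(history):
    """Turn a chronological chat log into (partner message, user reply) pairs.

    `history` is a list of (speaker, text) tuples with speaker in
    {"partner", "user"}; each partner message that is immediately followed
    by a user message yields one input/output training sample.
    """
    pairs = []
    for (speaker_a, text_a), (speaker_b, text_b) in zip(history, history[1:]):
        if speaker_a == "partner" and speaker_b == "user":
            pairs.append((text_a, text_b))
    return pairs
```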
Furthermore, in some alternative implementations, the auto-answer sentence generation model may also be updated so that the auto-answer sentence generation model may generate an auto-answer sentence according to current environmental information (e.g., including but not limited to current weather information, traffic information, real-time news information, etc.).
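One simple way to let the generation model take current environment information into account, as suggested above, is to serialize that information into the model's input alongside the parsed message. The bracketed-token format below is purely an assumption of this sketch.

```python
def make_model_input(parsed_message: str, environment: dict) -> str:
    """Prepend current environment information (e.g. weather, traffic,
    real-time news) to the parsed message so a generation model can
    condition its answer sentence on it."""
    env_tokens = " ".join(f"[{k}={v}]" for k, v in sorted(environment.items()))
    return f"{env_tokens} {parsed_message}".strip()
```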
In some optional implementations of this embodiment, step 360 may further include: generating reminder information if the analysis result includes a preset keyword.
Here, content of particular interest or importance to the user may be set in advance. When the analysis result includes such content, a reminder message can be generated to alert the user, so that important information sent by the dialogue party is not missed.
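The keyword-triggered reminder can be sketched like this; the keyword set and the wording of the reminder message are hypothetical examples.

```python
# Hypothetical preset keywords a user might mark as important.
PRESET_KEYWORDS = {"meeting", "deadline", "flight"}


def maybe_remind(parse_result: str, keywords=PRESET_KEYWORDS):
    """Return a reminder message if any preset keyword appears in the
    analysis result of the dialogue party's message, else None."""
    hits = [kw for kw in keywords if kw in parse_result.lower()]
    if hits:
        return f"Reminder: the conversation mentions {', '.join(sorted(hits))}."
    return None
```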
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the intelligent response method in this embodiment highlights how to automatically obtain the response statement in the second intelligent response state. Therefore, the scheme described in the embodiment can further reduce the workload of user input and improve the interaction efficiency when the user uses the instant messaging tool to interact with other people.
Fig. 4A to 4C are diagrams illustrating several application scenarios of the intelligent response method of the present application, respectively.
First, referring to fig. 4A, an exemplary application scenario is shown in which the input method is in the "semi-automatic answer" state. The user receives the message "good morning" sent by the dialogue party and types the reply "good morning" in the input field 410. Based on an analysis of the "good morning" input by the user, three pieces of intelligent answer candidate sentence information 420 may be obtained. The user can then make a selection according to his or her needs and preferences, for example the candidate sentence "it's windy today, take care". This candidate sentence can be appended as intelligent answer information to the "good morning" that the user typed, and the combined message is then sent to the dialogue party.
Next, referring to fig. 4B, the dialogue party sends dialogue information 430 asking how a book was. The user can select this dialogue information to invoke the pre-associated search application, and the search results are presented in the area indicated by reference numeral 440.
Continuing with fig. 4C, an exemplary application scenario is shown in which the input method is in the "auto answer" state. The dialogue party sends a message 450 along the lines of "I didn't expect the Chinese team to beat the Korean team"; by parsing this message, a corresponding answer 460 such as "Yes, it's the first time beating Korea in a World Cup qualifier" can be generated automatically.
With further reference to fig. 5, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of an intelligent response apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied in various electronic devices.
As shown in fig. 5, the intelligent answering apparatus 500 of this embodiment includes a determining unit 510, a parsing unit 520, a first generating unit 530 and an intelligent answer information generation unit 540.
The determination unit 510 may be used to determine the current state of the input method.
The parsing unit 520 may be configured to parse current input information of the user in response to the current state being the first intelligent answering state.
The first generating unit 530 may be configured to input semantic information, obtained by parsing the current input information, into the answer candidate sentence generation model to obtain intelligent answer candidate sentence information.
The intelligent answer information generation unit 540 may be configured to, in response to a selection of the obtained at least one piece of intelligent answer candidate sentence information by the user, take the selected intelligent answer candidate sentence information as the intelligent answer information.
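Wiring the four units together might look like the following sketch, with the concrete parsing and generation callables injected from outside so the sketch stays model-agnostic. All class, method and state names here are illustrative, not from the patent text.

```python
class IntelligentAnsweringApparatus:
    """Minimal sketch of apparatus 500: four cooperating units."""

    def __init__(self, determine_state, parse, generate_candidates):
        self.determine_state = determine_state          # determining unit 510
        self.parse = parse                              # parsing unit 520
        self.generate_candidates = generate_candidates  # first generating unit 530

    def candidates_for(self, user_input):
        """Parse the user's current input and produce answer candidates,
        but only while in the first ("semi-automatic answer") state."""
        if self.determine_state() != "first_intelligent_answer_state":
            return []
        semantics = self.parse(user_input)
        return self.generate_candidates(semantics)

    @staticmethod
    def select(candidates, index):
        """Intelligent answer information generation unit 540: the user's
        selected candidate becomes the intelligent answer information."""
        return candidates[index]
```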
In some alternative implementations, the answer candidate sentence generation model is trained based on the user historical answer information as a training sample.
In some optional implementations, the intelligent answering apparatus 500 further includes a search invoking unit (not shown in the figure) configured to invoke a pre-associated search application, in response to a user's selection operation on presented dialogue information, to obtain a search result corresponding to the selected dialogue information.
In some optional implementations, the search invoking unit is further configured to: determine semantic feature information from the selected dialogue information; and invoke the pre-associated search application with the semantic feature information as the search input, so as to obtain a search result corresponding to the selected dialogue information.
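A minimal sketch of the two steps performed by the search invoking unit — extracting semantic feature information from the selected dialogue information, then handing it to a search application as the search input — might be as follows. The stop-word list and the URL scheme are assumptions of this sketch.

```python
import urllib.parse

# Toy stop-word list; a real implementation would use proper
# semantic analysis to extract feature information.
STOP_WORDS = {"the", "a", "is", "how", "was", "you"}


def semantic_features(dialog_text: str) -> str:
    """Keep the content-bearing words of the selected dialogue information."""
    words = [w for w in dialog_text.lower().split() if w not in STOP_WORDS]
    return " ".join(words)


def build_search_url(dialog_text: str,
                     engine: str = "https://example.com/search") -> str:
    """Use the semantic feature information as the search input for a
    pre-associated search application (URL scheme is illustrative)."""
    query = urllib.parse.quote_plus(semantic_features(dialog_text))
    return f"{engine}?q={query}"
```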
In some optional implementations, the parsing unit 520 may be further configured to parse, in response to the current state being the second intelligent answer state, the dialogue information sent by the dialogue party. The apparatus may further include an answer sentence generation unit configured to input an analysis result, obtained by parsing the dialogue information sent by the dialogue party, into a pre-trained automatic answer sentence generation model to obtain an answer sentence.
In some alternative implementations, the auto-answer sentence generation model is trained as a training sample based on historical dialogue information of the user and the dialogue party.
In some optional implementations, the answer sentence generation unit may be further configured to generate reminder information if the analysis result includes a preset keyword.
Those skilled in the art will appreciate that the intelligent answering apparatus 500 described above may also include other well-known components, such as a processor and memory, which are not shown in fig. 5 so as not to obscure the embodiments of the present disclosure unnecessarily.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a terminal device or server of an embodiment of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a determination unit, an analysis unit, a first generation unit, and an intelligent response information generation unit. Where the names of these units do not in some cases constitute a limitation of the unit itself, for example, the determination unit may also be described as a "unit that determines the current state of the input method".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus in the above-described embodiments; or it may be a non-volatile computer storage medium that exists separately and is not incorporated into the terminal. The non-transitory computer storage medium stores one or more programs that, when executed by a device, cause the device to: determining the current state of the input method; responding to the current state as a first intelligent response state, and analyzing the current input information of the user; inputting semantic information obtained by analyzing current input information into a candidate answer sentence generation model to obtain intelligent candidate answer sentence information; and responding to the selection of the user on the obtained at least one piece of intelligent answer candidate statement information, and taking the selected intelligent answer candidate statement information as intelligent answer information.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. An intelligent answering method, comprising:
determining the current state of the input method;
responding to the current state as a first intelligent response state, and analyzing the current input information of the user;
inputting semantic information obtained by analyzing the current input information into a candidate answer sentence generation model to obtain intelligent candidate answer sentence information; and
responding to the selection of the user on the obtained at least one piece of intelligent answer candidate statement information, and taking the selected intelligent answer candidate statement information as intelligent answer information;
responding to the current state as a second intelligent response state, and analyzing conversation information sent by a conversation party;
and inputting an analysis result obtained by analyzing the dialogue information sent by the dialogue party into a pre-trained automatic answer sentence generation model to obtain an answer sentence, wherein the answer sentence is obtained by the automatic answer sentence generation model according to the current environment information.
2. The method of claim 1, wherein:
and the response candidate sentence generation model is obtained by training based on the historical response information of the user as a training sample.
3. The method of claim 1, further comprising:
and responding to the selection operation of the user on the presented dialogue information, and calling the pre-associated search application to obtain a search result corresponding to the selected dialogue information.
4. The method of claim 3, wherein invoking a pre-associated search application to obtain search results corresponding to the selected dialog information in response to a user selection operation of the presented dialog information comprises:
determining semantic feature information from the selected dialog information;
and calling the pre-associated search application, and taking the semantic feature information as search input information to obtain a search result corresponding to the selected dialogue information.
5. The method of claim 1, wherein:
the automatic answer sentence generation model is obtained by training based on historical dialogue information of a user and a dialogue party as training samples.
6. The method according to claim 1, wherein the inputting a parsing result obtained by parsing the dialogue information sent by the dialogue party into a pre-trained auto-answer sentence generation model to obtain an answer sentence comprises:
and if the analysis result comprises preset keywords, generating reminding information.
7. An intelligent answering device, comprising:
the determining unit is used for determining the current state of the input method;
the analysis unit is used for responding to the current state as a first intelligent response state, analyzing the current input information of the user, and responding to the current state as a second intelligent response state, and analyzing the conversation information sent by the conversation party;
a first generating unit, configured to input semantic information obtained by analyzing the current input information into a candidate sentence generation model for response, so as to obtain intelligent candidate sentence information for response; and
an intelligent answer information generation unit, configured to respond to a selection of the user for the obtained at least one piece of intelligent answer candidate sentence information, and use the selected intelligent answer candidate sentence information as intelligent answer information;
and the answer sentence generating unit is used for inputting an analysis result obtained by analyzing the dialogue information sent by the dialogue party into a pre-trained automatic answer sentence generating model so as to obtain an answer sentence, and the answer sentence is obtained by the automatic answer sentence generating model according to the current environment information.
8. The apparatus of claim 7, wherein:
and the response candidate sentence generation model is obtained by training based on the historical response information of the user as a training sample.
9. The apparatus of claim 7, further comprising:
and the search calling unit is used for responding to the selection operation of the user on the presented dialogue information and calling the pre-associated search application to obtain a search result corresponding to the selected dialogue information.
10. The apparatus of claim 9, wherein the search invoking unit is further configured to:
determining semantic feature information from the selected dialog information; and
and calling the pre-associated search application, and taking the semantic feature information as search input information to obtain a search result corresponding to the selected dialogue information.
11. The apparatus of claim 7, wherein:
the automatic answer sentence generation model is obtained by training based on historical dialogue information of a user and a dialogue party as training samples.
12. The apparatus of claim 7, wherein the answer sentence generation unit is further configured to:
and if the analysis result comprises preset keywords, generating reminding information.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the intelligent response method of any of claims 1-6.
14. A computer-readable storage medium having stored thereon a computer program, characterized in that:
the program, when executed by a processor, implements the intelligent response method of any of claims 1-6.
CN201710805691.6A 2017-09-08 2017-09-08 Intelligent response method and device Active CN107609092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710805691.6A CN107609092B (en) 2017-09-08 2017-09-08 Intelligent response method and device

Publications (2)

Publication Number Publication Date
CN107609092A CN107609092A (en) 2018-01-19
CN107609092B true CN107609092B (en) 2021-03-09

Family

ID=61062587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710805691.6A Active CN107609092B (en) 2017-09-08 2017-09-08 Intelligent response method and device

Country Status (1)

Country Link
CN (1) CN107609092B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472223A (en) * 2018-05-10 2019-11-19 北京搜狗科技发展有限公司 A kind of input configuration method, device and electronic equipment
CN108877795B (en) * 2018-06-08 2020-03-10 百度在线网络技术(北京)有限公司 Method and apparatus for presenting information
CN108897872B (en) * 2018-06-29 2022-09-27 北京百度网讯科技有限公司 Dialogue processing method, device, computer equipment and storage medium
CN110737756B (en) 2018-07-03 2023-06-23 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for determining answer to user input data
CN110851574A (en) * 2018-07-27 2020-02-28 北京京东尚科信息技术有限公司 Statement processing method, device and system
CN109377152A (en) * 2018-09-03 2019-02-22 三星电子(中国)研发中心 A kind of method and device of scheduling application
CN109542249A (en) * 2018-11-17 2019-03-29 北京智合大方科技有限公司 A kind of Intelligent dialogue guidance system based on mobile phone phonetic input method
CN109783621B (en) * 2018-12-17 2021-10-08 北京百度网讯科技有限公司 Dialog generation method, device and equipment
CN111368040B (en) * 2018-12-25 2021-01-26 马上消费金融股份有限公司 Dialogue processing method, model training method and related equipment
CN111477231B (en) * 2019-01-24 2023-12-01 科沃斯商用机器人有限公司 Man-machine interaction method, device and storage medium
CN110851581B (en) * 2019-11-19 2022-11-11 东软集团股份有限公司 Model parameter determination method, device, equipment and storage medium
CN111752437B (en) * 2020-06-29 2021-07-16 上海寻梦信息技术有限公司 Comment method and device, electronic equipment and storage medium
CN112269509B (en) * 2020-10-29 2022-11-25 维沃移动通信(杭州)有限公司 Information processing method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096564A1 (en) * 2013-12-25 2015-07-02 北京百度网讯科技有限公司 On-line voice translation method and device
CN105630917A (en) * 2015-12-22 2016-06-01 成都小多科技有限公司 Intelligent answering method and intelligent answering device
CN105930452A (en) * 2016-04-21 2016-09-07 北京紫平方信息技术股份有限公司 Smart answering method capable of identifying natural language
CN106372059A (en) * 2016-08-30 2017-02-01 北京百度网讯科技有限公司 Information input method and information input device

Also Published As

Publication number Publication date
CN107609092A (en) 2018-01-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant