CN109241269B - Task-based robot word slot filling method - Google Patents

Task-based robot word slot filling method

Info

Publication number
CN109241269B
CN109241269B · CN201810856020.7A
Authority
CN
China
Prior art keywords
word
word slot
task
slot
machine learning
Prior art date
Legal status
Active
Application number
CN201810856020.7A
Other languages
Chinese (zh)
Other versions
CN109241269A (en)
Inventor
叶俊鹏
徐易楠
刘云峰
吴悦
陈正钦
胡晓
汶林丁
Current Assignee
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhuiyi Technology Co Ltd filed Critical Shenzhen Zhuiyi Technology Co Ltd
Priority to CN201810856020.7A priority Critical patent/CN109241269B/en
Publication of CN109241269A publication Critical patent/CN109241269A/en
Priority to PCT/CN2019/089954 priority patent/WO2020019878A1/en
Application granted granted Critical
Publication of CN109241269B publication Critical patent/CN109241269B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/335Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Machine Translation (AREA)

Abstract

The application relates to a task-based robot word slot filling method, which comprises the following steps: collecting training data, wherein the training data comprises a task and a word slot corresponding to the task; training a machine learning model according to the training data; performing word slot matching filling on a user input sentence by using the trained machine learning model to obtain a first word slot corresponding to the user input sentence; filtering the first word slot by using a filtering rule to obtain a second word slot; and matching and filling the user input sentence and the second word slot to obtain the task required to be executed by the robot. According to the method, the word slot corresponding to the task is generated by rule matching in the stage of collecting training data, and the first word slot is filtered by using the filtering rule to obtain the second word slot, so that error cases can be repaired and the word slot filling effect is improved.

Description

Task-based robot word slot filling method
Technical Field
The application relates to the technical field of internet, in particular to a task type robot word slot filling method.
Background
Task-based robot service is a very important kind of robot service. General robot customer service mainly answers FAQs and queries data for the user, and does not involve processing task flows. A task-based robot is responsible for completing specific tasks, whose flows are complex: in multiple rounds of conversation with a customer, the robot customer service must collect the customer information necessary to complete the task, and the process may also involve interaction logic such as calling background system information for the customer to select from. Word slot filling (slot filling) is an important component module in a task-based robot; it extracts the entities required to complete a task from user sentences and fills them into the corresponding word slots for later use.
In the related art, the problem of word slot filling is mainly solved end to end with a machine learning model; for example, deep learning is used to treat it as a sequence labeling problem, directly predicting which word in a user sentence corresponds to which word slot. However, model prediction requires a large amount of data per task to train the model, and at the initial stage of system use training data are scarce, so deep learning cannot achieve a good effect. The model also depends on labeled data: a labeling error affects the model's learning effect, and wrong labeled data are not easy to repair. Therefore, how to improve the early learning effect of a word slot filling system and repair erroneous data has become an urgent problem for those skilled in the art.
Disclosure of Invention
To overcome, at least in part, the problems of the related art, the present application provides a task-based robotic word slot filling method, comprising:
collecting training data, wherein the training data comprises a task and a word slot corresponding to the task;
training a machine learning model according to the training data;
performing word slot matching filling on a user input sentence by using a trained machine learning model to obtain a first word slot corresponding to the user input sentence;
filtering the first word slot by using a filtering rule to obtain a second word slot;
and matching and filling the user input sentence and the second word slot to obtain the task required to be executed by the robot.
Further, the generating step of the word slot corresponding to the task includes:
carrying out rule matching on user statements and tasks to obtain screened tasks;
independently configuring each word slot corresponding to the screened task;
and matching the user sentences with each word slot which is configured independently, and screening out the word slots corresponding to the tasks.
Further, the matching the user statement and the task to obtain the screened task includes:
dividing words of a user question sentence, and converting the words into corresponding word vector representations;
calculating the similarity between the user question and the task similar question, wherein the similarity is the distance between the user question and the task similar question;
sorting the similarity;
and selecting a preset number of tasks with the similarity ranking at the top.
Further, the filtering rule includes:
soft filtering rules and hard filtering rules;
the soft filtering rules comprise conventional corresponding rules between tasks and word slots;
the hard filter rules include special task processing rules and error case repair rules.
Further, the filtering the first word slot by using the filtering rule to obtain a second word slot includes:
and fusing the first word slot distribution probability obtained according to the machine learning model and the second word slot distribution probability calculated according to the filtering rule to obtain a second word slot.
Further, the second word slot distribution probability includes:
soft rule filtering is carried out on the user input sentence and the first word slot to obtain a first intermediate word slot distribution probability;
carrying out hard rule filtering on the user input sentence and the first word slot to obtain a second intermediate word slot distribution probability;
and fusing the first intermediate word slot distribution probability and the second intermediate word slot distribution probability to obtain a second word slot distribution probability.
Further, the fusing includes performing a Hadamard product on the word slot distribution probabilities.
Further, the training the machine learning model according to the training data includes:
performing word segmentation on user sentences in the training data;
performing word embedding on the participles to obtain a word vector tensor;
inputting the word vector tensor into a machine learning model to obtain an initial predicted word slot;
calculating the probability distribution of each word slot in the initial prediction word slots;
and analyzing the final predicted word slot and the word slot corresponding to the task, and updating the machine learning model according to the analysis result.
Further, the calculation of the probability distribution of each word slot in the initial predicted word slot is obtained according to a conditional random field.
Further, the analyzing the final predicted word slot and the word slot corresponding to the task includes calculating the cross entropy of the final predicted word slot and the word slot corresponding to the task to obtain a loss value, and performing back propagation to update the machine learning model according to the loss value.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the method and the device, the task corresponding word slot is generated according to rule matching in the stage of collecting training data, the first word slot is filtered by using the filtering rule to obtain the second word slot, the error case can be repaired, and therefore the word slot filling effect is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of a task-based robotic word slot filling method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a task-based robotic word slot filling method according to another embodiment of the present application.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Fig. 1 is a schematic flowchart of a task-based robotic word slot filling method according to an embodiment of the present application.
As shown in fig. 1, the method of the present embodiment includes:
s11: training data is collected, wherein the training data comprises a task and a word slot corresponding to the task.
For example, in a ticket change task, it is necessary to collect the user's departure or arrival city, or the ticket number. A word library of cities may be used to match the user sentence, and a regular expression may be used to match the user sentence according to the format of the ticket number; the matched entity (keyword) is filled into the word slot corresponding to the ticket change task.
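The lexicon and regular-expression matching described above can be sketched as follows. This is a minimal illustration: the city list and the ticket-number format (three digits, a hyphen, ten digits) are assumptions for the example, not the format used by any real airline.

```python
import re

# Hypothetical city lexicon and ticket-number pattern, for illustration only;
# a real system would load a full city word library and the actual ticket format.
CITY_LEXICON = {"Beijing", "Shanghai", "Shenzhen"}
TICKET_NO_PATTERN = re.compile(r"\b\d{3}-\d{10}\b")

def match_slots(sentence: str) -> dict:
    """Fill word slots for the ticket-change task by lexicon and regex matching."""
    slots = {}
    for city in CITY_LEXICON:
        if city in sentence:
            slots.setdefault("city", []).append(city)  # lexicon match
    m = TICKET_NO_PATTERN.search(sentence)             # regex match
    if m:
        slots["ticket_number"] = m.group()
    return slots

print(match_slots("Change my ticket 784-1234567890 from Beijing to Shenzhen"))
```

Entities matched this way are then filled into the word slots of the task, and the (task, word slot) pair can later serve as training data.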
The generating step of the word slot corresponding to the task comprises the following steps:
carrying out rule matching on user statements and tasks to obtain screened tasks;
independently configuring each word slot corresponding to the screened task;
for example, the identification number may be matched using a regular expression (17 digits of a number plus one letter or number); cities may be matched by a list look-up table, etc. The number of matched word slots is further reduced.
And matching the user sentences with each word slot which is configured independently, and screening out the word slots corresponding to the tasks.
A word slot corresponding to the task is generated through rule matching, and after the robot finishes the task, the task and its corresponding word slot are taken as training data. Training data thus accumulate, which solves problems such as there being little initial training data for a task, and error cases in the training data degrading the machine learning model's word slot filling effect.
The matching of the user statement and the task to obtain the screened task comprises the following steps:
dividing words of a user question sentence, and converting the words into corresponding word vector representations;
and calculating the similarity between the user question and the task similar question, wherein the similarity is the distance between the user question and the task similar question, for example, the cos distance is calculated, and the similarity is higher when the cos distance is larger.
Sorting the similarity;
and selecting a preset number of tasks with the similarity ranking at the top, for example, if the preset number is 3, selecting a top3 task with higher similarity.
The number of the candidate word slots can be reduced through the screening task, and the word slots corresponding to the task can be screened out as soon as possible.
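The similarity-based task screening above can be sketched as follows. The sentence vectors and task names are illustrative assumptions; in practice the vectors would come from segmenting the question and embedding (e.g. averaging) its word vectors.

```python
import numpy as np

# Toy sentence vectors, assumed already produced by segmentation + embedding;
# the task names are hypothetical.
task_vectors = {
    "book_flight":   np.array([0.9, 0.1, 0.0]),
    "change_ticket": np.array([0.8, 0.3, 0.1]),
    "check_weather": np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: larger means the sentences are more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_tasks(user_vec, k=2):
    """Rank candidate tasks by cosine similarity and keep the top k."""
    scored = sorted(task_vectors.items(),
                    key=lambda kv: cosine(user_vec, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]

user = np.array([0.85, 0.2, 0.05])
print(top_k_tasks(user, k=2))  # the two flight-related tasks rank highest
```

Only the word slots of the surviving tasks then need to be matched, which shrinks the candidate set.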
S12: and training the machine learning model according to the training data.
At the initial stage of system operation, no training data exists, the training data is accumulated by using rule matching, and after a preset number of training data are obtained, the machine learning model starts to be trained, so that the accuracy of model input data is improved, the model learning speed is accelerated, and the learning effect is improved.
S13: and performing word slot matching filling on the user input sentence by using the trained machine learning model to obtain a first word slot corresponding to the user input sentence.
Because deep learning models perform better than traditional methods in text processing tasks, a classical sequence labeling framework of deep learning is adopted; the specific model combines a recurrent neural network with a conditional random field. The word slot distribution probability is calculated through this combined model, so that the corresponding word slots in the user sentence are accurately predicted and end-to-end prediction is realized.
S14: filtering the first word slot by using a filtering rule to obtain a second word slot.
the filtering rule includes:
soft filtering rules and hard filtering rules;
the soft filtering rules comprise conventional corresponding rules between tasks and word slots, for example, in an air ticket booking task, a word bank of a city is used for matching user sentences, and departure cities and arrival cities of air tickets are screened out;
the hard filter rules include special task processing rules and error case repair rules. For example, if an airline company does not have a flight abroad, when the machine learning model gives an identification result of the flight abroad, the identification result of the machine learning model can be changed by using a hard filtering rule, the changing method is to manually check whether the identification result belongs to a city identification error or a city nationality membership error, and the like, if the city membership nationality error exists, the error case is manually repaired, and when similar contents are asked next time, the machine learning model can give a correct identification result of the repair.
The filtering the first word slot by using the filtering rule to obtain a second word slot comprises:
and fusing the first word slot distribution probability obtained according to the machine learning model and the second word slot distribution probability calculated according to the filtering rule to obtain a second word slot.
The second word slot distribution probability includes:
Soft rule filtering is carried out on the user input sentence and the first word slot to obtain a first intermediate word slot distribution probability;
carrying out hard rule filtering on the user input sentence and the first word slot to obtain a second intermediate word slot distribution probability;
and fusing the first intermediate word slot distribution probability and the second intermediate word slot distribution probability to obtain a second word slot distribution probability.
The fusion method is as follows: a Hadamard product is performed on the word slot distribution probabilities.
For example, suppose the original distribution probability over three word slots is (1, 1, 1), and the soft filtering rule reduces the probability of the third word slot to half of the original, so the first intermediate word slot distribution probability is (1, 1, 0.5). The hard filtering rule has no influence on the three word slots, so the second intermediate word slot distribution probability is (1, 1, 1). Fusing the two intermediate distributions yields the second word slot distribution probability (1, 1, 0.5). The machine learning model outputs the word slot distribution probability (0.1, 0.4, 0.5). Taking the Hadamard product of the two gives (0.1, 0.4, 0.25), from which the second word slot is screened out.
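The fusion in this worked example is simply an elementwise (Hadamard) product of probability vectors, which can be sketched as:

```python
import numpy as np

# Numbers taken from the worked example above: probabilities over three slots.
model_probs = np.array([0.1, 0.4, 0.5])  # model output (first distribution)
soft_rule   = np.array([1.0, 1.0, 0.5])  # first intermediate (soft filtering)
hard_rule   = np.array([1.0, 1.0, 1.0])  # second intermediate (hard filtering)

rule_probs = soft_rule * hard_rule       # fuse the intermediates: (1, 1, 0.5)
fused      = model_probs * rule_probs    # Hadamard product with the model output

print(fused)                  # [0.1  0.4  0.25]
print(int(np.argmax(fused)))  # 1 -> the second word slot is selected
```

Because the rules only rescale probabilities, any prior knowledge expressed as a mask (e.g. zeroing an impossible slot) carries straight through to the final decision.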
The filtering rules are fused with the machine learning model, so that the priori knowledge can be inserted into the machine learning model, and the error cases in the machine learning model can be repaired, thereby further improving the word slot filling result. Furthermore, the fusion of the word slot distribution probability is realized by calculating the word slot distribution probability and by the Hadamard product, and the fusion of the filtering rule and the machine learning model is quantitatively realized, so that the calculation is convenient.
S15: and matching and filling the user input sentence and the second word slot to obtain the task required to be executed by the robot.
In this embodiment, a word slot corresponding to a task is generated according to rule matching in a stage of collecting training data, the trained machine learning model is used to perform word slot matching filling on the task to obtain a first word slot corresponding to the user input sentence, and a filtering rule is used to filter the first word slot to obtain a second word slot, so that an error case in the machine learning model is repaired, and a word slot filling effect is improved.
Fig. 2 is a schematic flowchart of a task-based robotic word slot filling method according to another embodiment of the present application.
As shown in fig. 2, the training of the machine learning model according to the training data includes:
s21: performing word segmentation on user sentences in the training data;
s22: performing word embedding on the participles to obtain a word vector tensor;
the word segmentation and word embedding vector tensor can be realized by a word2vec model, it should be noted that the word segmentation and word embedding method is not limited to the word2vec model method and can be selected according to an actual scene, and the word2vec model belongs to the prior art and is not described in detail here.
S23: inputting the word vector tensor into a machine learning model to obtain an initial predicted word slot;
the machine learning model selects, for example, a recurrent neural network model, and outputs an initial predicted word bin in one-to-one correspondence with the word vector using the conditional random field.
S24: calculating the probability distribution of each word slot in the initial prediction word slots;
and calculating the probability distribution of each word slot in the initial predicted word slot according to the conditional random field.
Because the conditional random field has no strict independence assumption, it can accommodate arbitrary context information, and feature design is flexible. The conditional random field computes the conditional probability of the globally optimal output sequence, which overcomes the label bias problem. Given the observation sequence to be labeled, it computes the joint probability distribution of the whole label sequence, so prediction does not rely on local information alone and the prediction result is more accurate.
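The "globally optimal output sequence" a CRF decodes can be illustrated with the Viterbi algorithm over per-word emission scores and label-to-label transition scores. The scores below are made-up numbers for a 3-word sentence with 2 labels, not output of a trained model.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Return the globally optimal label sequence given emission scores (n x L)
    and transition scores (L x L), as in CRF decoding."""
    n, L = emissions.shape
    score = emissions[0].copy()          # best score ending in each label
    back = np.zeros((n, L), dtype=int)   # backpointers
    for t in range(1, n):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):        # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Illustrative scores: labels 0 ("O") and 1 (e.g. "CITY").
em = np.array([[2.0, 0.0], [0.0, 1.5], [0.0, 2.0]])
tr = np.array([[0.5, -0.5], [-2.0, 1.0]])  # label 1 prefers to stay label 1
print(viterbi(em, tr))
```

Because the transition scores are part of the maximization, the decision for each word depends on the whole sequence rather than local information alone.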
S25: and analyzing the final predicted word slot and the word slot corresponding to the task, and updating the machine learning model according to the analysis result.
Analyzing the final predicted word slots and the word slots corresponding to the task means calculating the cross entropy between them to obtain a loss value, and updating the machine learning model according to the loss value.
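For a single word, the cross entropy loss reduces to the negative log probability assigned to the labelled slot; the probabilities below are illustrative values, not model output.

```python
import numpy as np

def cross_entropy(pred, target_index):
    """Cross entropy between a predicted slot distribution and a one-hot label."""
    return float(-np.log(pred[target_index]))

pred = np.array([0.1, 0.7, 0.2])  # predicted probability over three slots
loss = cross_entropy(pred, 1)     # correct slot is index 1
print(round(loss, 4))             # -log(0.7) ~= 0.3567
```

The gradient of this loss is what backpropagation uses to update the model parameters.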
For example, a recurrent neural network model is trained on training data.
Segmenting the user sentences in the training data and embedding the segmented words yields a word vector tensor (x_1, x_2, ..., x_n). The word vector tensor is input into a recurrent neural network model, and the model performs the following operations:

f_t = σ(W_xf · x_t + W_hf · h_{t-1} + b_f)

i_t = σ(W_xi · x_t + W_hi · h_{t-1} + b_i)

o_t = σ(W_xo · x_t + W_ho · h_{t-1} + b_o)

c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc · x_t + W_hc · h_{t-1} + b_c)

h_t = o_t ⊙ tanh(c_t)

After the model operation, the initial predicted word slots (h_1, h_2, ..., h_n) are obtained, where σ is the sigmoid function, ⊙ is the Hadamard product, and the remaining parameters are internal calculation parameters of the recurrent neural network model, which are not described in detail here. Inputting (h_1, h_2, ..., h_n) into the conditional random field yields the probability distribution (p_1, p_2, ..., p_n) of each word slot in the initial predicted word slots. The final predicted word slots are analyzed against the word slots corresponding to the task, that is, their cross entropy is calculated to obtain a loss value, the parameters of the recurrent neural network model and the conditional random field are updated, and an accurate word slot is finally obtained.
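One step of the gated recurrence above can be sketched in plain numpy. The weights here are randomly initialised and the dimensions arbitrary, purely to show the data flow; in the actual method these parameters are learned from the training data. Note the cell-state and hidden-state equations are the standard LSTM forms consistent with the gates f, i, o above.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 3, 4  # illustrative input and hidden dimensions
W = {g: rng.standard_normal((d_h, d_in)) * 0.1 for g in "fioc"}  # input weights
U = {g: rng.standard_normal((d_h, d_h)) * 0.1 for g in "fioc"}   # recurrent weights
b = {g: np.zeros(d_h) for g in "fioc"}
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev):
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # forget gate
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # input gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # output gate
    c = f * c_prev + i * np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    h = o * np.tanh(c)  # elementwise * is the Hadamard product
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.standard_normal((5, d_in)):  # run over a 5-word sentence
    h, c = lstm_step(x_t, h, c)
print(h.shape)  # (4,)
```

The sequence of hidden states (h_1, ..., h_n) produced this way is what the conditional random field consumes to obtain the slot probability distributions.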
In this embodiment, the parameters of the recurrent neural network model and the conditional random field are updated using training data, and results are predicted by the combination of the two. Through continuous prediction, evaluation, and iteration, the learning effect of the machine learning model is improved, thereby improving the word slot filling accuracy.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example" or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
It should be noted that the present invention is not limited to the above-mentioned preferred embodiments, and those skilled in the art can obtain other products in various forms without departing from the spirit of the present invention, but any changes in shape or structure can be made within the scope of the present invention with the same or similar technical solutions as those of the present invention.

Claims (8)

1. A task-based robot word slot filling method is characterized by comprising the following steps:
collecting training data, wherein the training data comprises a task and a word slot corresponding to the task;
training a machine learning model according to the training data;
performing word slot matching filling on a user input sentence by using a trained machine learning model to obtain a first word slot corresponding to the user input sentence;
fusing, through a Hadamard product, the first word slot distribution probability of the first word slot obtained according to the machine learning model and the second word slot distribution probability calculated according to a filtering rule, to obtain a second word slot;
and matching and filling the user input sentence and the second word slot to obtain the task required to be executed by the robot.
2. The method of claim 1, wherein the task corresponds to a word slot, and the generating step comprises:
carrying out rule matching on user statements and tasks to obtain screened tasks;
independently configuring each word slot corresponding to the screened task; and matching the user sentences with each word slot which is configured independently, and screening out the word slots corresponding to the tasks.
3. The method of claim 2, wherein the matching of the user statement and the task to obtain the filtered task comprises:
dividing words of a user question sentence, and converting the words into corresponding word vector representations;
calculating the similarity between the user question and the task similar question, wherein the similarity is the distance between the user question and the task similar question;
sorting the similarity;
and selecting a preset number of tasks with the similarity ranking at the top.
4. The method of claim 1, wherein the filtering rules comprise:
soft filtering rules and hard filtering rules; the soft filtering rules comprise conventional corresponding rules between tasks and word slots;
the hard filter rules include special task processing rules and error case repair rules.
5. The method of claim 1, wherein the second word slot distribution probability comprises:
soft rule filtering is carried out on the user input sentence and the first word slot to obtain a first intermediate word slot distribution probability;
carrying out hard rule filtering on the user input sentence and the first word slot to obtain a second intermediate word slot distribution probability;
and fusing the first intermediate word slot distribution probability and the second intermediate word slot distribution probability to obtain a second word slot distribution probability.
6. The method of claim 1, wherein training a machine learning model from training data comprises:
performing word segmentation on user sentences in the training data;
performing word embedding on the participles to obtain a word vector tensor;
inputting the word vector tensor into a machine learning model to obtain an initial predicted word slot;
calculating the probability distribution of each word slot in the initial prediction word slots;
and analyzing the final predicted word slot and the word slot corresponding to the task, and updating the machine learning model according to the analysis result.
7. The method of claim 6, wherein the calculation of the probability distribution of each word slot in the initial predicted word slots is based on conditional random fields.
8. The method of claim 6, wherein analyzing the final predicted word slot and the word slot corresponding to the task comprises calculating the cross entropy of the final predicted word slot and the word slot corresponding to the task to obtain a loss value, and updating the machine learning model according to the loss value.
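The cross-entropy loss of claim 8 reduces to the following computation when the task's gold slots are treated as one-hot targets (an assumption; the slot names and probabilities are toy values).

```python
import math

def cross_entropy(predicted_probs, gold_slots):
    """Mean cross-entropy between per-word predicted slot distributions
    and the gold slots for the task (one-hot targets assumed)."""
    return -sum(math.log(p[g])
                for p, g in zip(predicted_probs, gold_slots)) / len(gold_slots)

# Toy predictions for two words and their gold slots.
preds = [{"date": 0.8, "city": 0.2},
         {"date": 0.1, "city": 0.9}]
loss = cross_entropy(preds, ["date", "city"])  # ~0.1643
```

A lower loss means the predicted distributions place more mass on the correct slots; the model update of claim 8 would follow the gradient of this value.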
CN201810856020.7A 2018-07-27 2018-07-27 Task-based robot word slot filling method Active CN109241269B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810856020.7A CN109241269B (en) 2018-07-27 2018-07-27 Task-based robot word slot filling method
PCT/CN2019/089954 WO2020019878A1 (en) 2018-07-27 2019-06-04 Slot filling method for task-oriented robot, computer apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810856020.7A CN109241269B (en) 2018-07-27 2018-07-27 Task-based robot word slot filling method

Publications (2)

Publication Number Publication Date
CN109241269A CN109241269A (en) 2019-01-18
CN109241269B true CN109241269B (en) 2020-07-17

Family

ID=65073271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810856020.7A Active CN109241269B (en) 2018-07-27 2018-07-27 Task-based robot word slot filling method

Country Status (2)

Country Link
CN (1) CN109241269B (en)
WO (1) WO2020019878A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241269B (en) * 2018-07-27 2020-07-17 深圳追一科技有限公司 Task-based robot word slot filling method
CN110472113B (en) * 2019-02-26 2022-08-26 杭州蓦然认知科技有限公司 Intelligent interaction engine optimization method, device and equipment
CN109947920A (en) * 2019-03-14 2019-06-28 百度在线网络技术(北京)有限公司 For obtaining the method and device of information
CN110370317B (en) * 2019-07-24 2022-01-11 广东工业大学 Robot repairing method and device
CN110674314B (en) * 2019-09-27 2022-06-28 北京百度网讯科技有限公司 Sentence recognition method and device
CN111159999B (en) * 2019-12-05 2023-04-18 中移(杭州)信息技术有限公司 Method and device for filling word slot, electronic equipment and storage medium
CN111737990B (en) * 2020-06-24 2023-05-23 深圳前海微众银行股份有限公司 Word slot filling method, device, equipment and storage medium
CN112597288B (en) * 2020-12-23 2023-07-25 北京百度网讯科技有限公司 Man-machine interaction method, device, equipment and storage medium
CN113312891B (en) * 2021-04-22 2022-08-26 北京墨云科技有限公司 Automatic payload generation method, device and system based on generative model

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040111248A1 (en) * 2002-09-04 2004-06-10 Granny Nicola V. Polymorphic computational system and method
US20070143296A1 (en) * 2005-12-15 2007-06-21 Kanoodle.Com, Inc. Taxonomy-based method and system for targeted advertising
US9171067B2 (en) * 2008-04-23 2015-10-27 Raytheon Company HLA to XML conversion
US10691737B2 (en) * 2013-02-05 2020-06-23 Intel Corporation Content summarization and/or recommendation apparatus and method
CN104462378B (en) * 2014-12-09 2017-11-21 北京国双科技有限公司 Data processing method and device for text identification
CN105955965A (en) * 2016-06-21 2016-09-21 上海智臻智能网络科技股份有限公司 Question information processing method and device
CN106547733A (en) * 2016-10-19 2017-03-29 中国国防科技信息中心 A kind of name entity recognition method towards particular text
CN106503254A (en) * 2016-11-11 2017-03-15 上海智臻智能网络科技股份有限公司 Language material sorting technique, device and terminal
CN106844512B (en) * 2016-12-28 2020-06-19 竹间智能科技(上海)有限公司 Intelligent question and answer method and system
CN107273350A (en) * 2017-05-16 2017-10-20 广东电网有限责任公司江门供电局 A kind of information processing method and its device for realizing intelligent answer
CN107463301A (en) * 2017-06-28 2017-12-12 北京百度网讯科技有限公司 Conversational system construction method, device, equipment and computer-readable recording medium based on artificial intelligence
CN107315737B (en) * 2017-07-04 2021-03-23 北京奇艺世纪科技有限公司 Semantic logic processing method and system
CN107423432B (en) * 2017-08-03 2020-05-12 当家移动绿色互联网技术集团有限公司 Method and system for distinguishing professional problems and small talk problems by robot
CN107729549B (en) * 2017-10-31 2021-05-11 深圳追一科技有限公司 Robot customer service method and system including element extraction
CN107862027B (en) * 2017-10-31 2019-03-12 北京小度信息科技有限公司 Retrieve intension recognizing method, device, electronic equipment and readable storage medium storing program for executing
CN107861951A (en) * 2017-11-17 2018-03-30 康成投资(中国)有限公司 Session subject identifying method in intelligent customer service
CN108021556A (en) * 2017-12-20 2018-05-11 北京百度网讯科技有限公司 For obtaining the method and device of information
CN109241269B (en) * 2018-07-27 2020-07-17 深圳追一科技有限公司 Task-based robot word slot filling method

Also Published As

Publication number Publication date
CN109241269A (en) 2019-01-18
WO2020019878A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
CN109241269B (en) Task-based robot word slot filling method
CN106777013A (en) Dialogue management method and apparatus
US20180025121A1 (en) Systems and methods for finer-grained medical entity extraction
CN113407694B (en) Method, device and related equipment for detecting ambiguity of customer service robot knowledge base
CN109598517B (en) Commodity clearance processing, object processing and category prediction method and device thereof
CN109783637A (en) Electric power overhaul text mining method based on deep neural network
CN110532563A (en) The detection method and device of crucial paragraph in text
CN109948160A (en) Short text classification method and device
CN110322206A (en) A kind of reagent information input method and device based on OCR identification
CN111143517B (en) Human selection label prediction method, device, equipment and storage medium
CN110046633A (en) A kind of data quality checking method and device
Porro et al. A multi-attribute group decision model based on unbalanced and multi-granular linguistic information: An application to assess entrepreneurial competencies in secondary schools
Hong et al. Determining construction method patterns to automate and optimise scheduling–a graph-based approach
CN109726288A (en) File classification method and device based on artificial intelligence process
CN109583473A (en) A kind of generation method and device of characteristic
CN111666748A (en) Construction method of automatic classifier and method for recognizing decision from software development text product
Amin Cases without borders: automating knowledge acquisition approach using deep autoencoders and siamese networks in case-based reasoning
CN115757695A (en) Log language model training method and system
US11836657B2 (en) Resource management planning support device, resource management planning support method, and programs
KR102282328B1 (en) System and Method for Predicting Preference National Using Long Term Short Term Memory
CN116562284B (en) Government affair text automatic allocation model training method and device
Kumar et al. Flight Delay Prediction Based On Aviation Big Data And Machine Learning
Mohsin et al. Dependency and Coreference-boosted Multi-Sentence Preference model
KR101697992B1 (en) System and Method for Recommending Bug Fixing Developers based on Multi-Developer Network
Hanga A deep learning approach to business process mining

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant