CN112101423A - Multi-model fused FAQ matching method and device - Google Patents
- Publication number
- CN112101423A (application CN202010852824.7A)
- Authority
- CN
- China
- Prior art keywords
- model
- matching
- faq
- sentence pair
- questions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/22 — Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/23 — Clustering techniques
- G06F18/25 — Fusion techniques
- G06F40/20 — Handling natural language data; Natural language analysis
- G06N3/08 — Neural networks; Learning methods
- G06N3/088 — Non-supervised learning, e.g. competitive learning
Abstract
The invention relates to the technical field of natural language processing and provides a multi-model fused FAQ matching method and device. A training text set of questions to be processed is obtained, and Bert-encoder + DBScan clustering is used to assist in extracting and summarizing financial-education knowledge points, from which a financial-education FAQ is constructed; a small number of similar questions are manually annotated; a large number of similar questions are then generated from the annotated set and manually reviewed to construct a sentence-pair matching dataset; unsupervised models and supervised deep learning models are used to train a sentence-pair matching model; finally, once training is complete, a question input by the user is received, and the question that best matches it, together with the corresponding answer, is identified and output. By fusing multiple models, training or pre-training models, extracting text, matching standard questions, and replying with the corresponding answers, the invention addresses the problems of cumbersome manual FAQ browsing and inefficient human customer service.
Description
Technical Field

The invention belongs to the technical field of natural language processing, and in particular relates to a multi-model fused FAQ matching method and device.

Background Art

There are few deployed artificial-intelligence applications in the financial-education industry; in particular, FAQ corpora in this industry are private and inconvenient to open-source, which limits development.

The NLP field has developed rapidly in recent years, but few of its advances have been deployed in financial education with good results; even the most cutting-edge FAQ sentence-pair matching algorithms have not transferred well to this domain.

An intelligent FAQ struggles to achieve good results when there are many knowledge points and the questions are semantically very similar.
Summary of the Invention

The present invention provides a multi-model fused FAQ matching method and device, aiming to solve the problems existing in the prior art.

The present invention is implemented as a multi-model fused FAQ matching method and device, comprising the following steps:

S1. Obtain a training text set of questions to be processed, and use Bert-encoder + DBScan clustering to assist in extracting and summarizing financial-education knowledge points, thereby constructing a financial-education FAQ; manually annotate a small number of similar questions.

S2. Use the similar-question generation module to generate a large number of similar questions from the small annotated set, review them manually, and construct a sentence-pair matching dataset.

S3. Construct a pre-training model dataset.

S4. Train a sentence-pair matching model using unsupervised models and supervised deep learning models.

S5. After the sentence-pair matching model is trained, receive a question input by the user, feed the question text to the sentence-pair matching model, identify the question that best matches the input together with its corresponding answer, and reply to the user.
Preferably, the training text set is restricted to texts of length 3-50, with emoji, numbers, and e-mail addresses removed.

Preferably, the unsupervised models include a WMD model and a SIF model;

the supervised models include a BERT model, an ALBERT model, and a RoBERTa model.
The present invention also provides a multi-model fused FAQ matching device, comprising:

a financial-education corpus database, used to store pre-input FAQ corpus data and to generate the training text set;

a manual annotation module, used by operators to manually annotate a small number of similar questions in the training text set;

a similar-question generation module, which includes a similar-question generation model used to generate a large number of similar questions from the small annotated set, with manual review, to construct a sentence-pair matching dataset;

an NLU module, used to train the sentence-pair matching model and to match a user's input question with the trained model, find the best-matching question, output that question and its corresponding answer, and reply to the user.
Preferably, the training text set is restricted to texts of length 3-50, with emoji, numbers, and e-mail addresses removed.

Preferably, the NLU module includes unsupervised models and supervised models;

the unsupervised models include a WMD model and a SIF model;

the supervised models include a BERT model, an ALBERT model, and a RoBERTa model.

Compared with the prior art, the beneficial effect of the present invention is as follows. In the multi-model fused FAQ matching method and device, a training text set of questions to be processed is obtained, Bert-encoder + DBScan clustering assists in extracting and summarizing financial-education knowledge points to build a financial-education FAQ, a small number of similar questions are manually annotated, a large number of similar questions are then generated from the annotated set and manually reviewed to construct a sentence-pair matching dataset, and unsupervised models and supervised deep learning models train the sentence-pair matching model; finally, once training is complete, a user's question is received and the best-matching question and its corresponding answer are identified and output. By fusing multiple models, training or pre-training models, extracting text, matching standard questions, and replying with the corresponding answers, the invention solves the problems of cumbersome manual FAQ browsing and inefficient human customer service.
Brief Description of the Drawings

FIG. 1 is an overall system schematic of a multi-model fused FAQ matching device according to the present invention.

Detailed Description of Embodiments

To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
Referring to FIG. 1, the present invention provides a technical solution: a multi-model fused FAQ matching method and device. The multi-model fused FAQ matching method comprises the following steps:

S1. Obtain a training text set of questions to be processed, and use Bert-encoder + DBScan clustering to assist in extracting and summarizing financial-education knowledge points, thereby constructing a financial-education FAQ; manually annotate a small number of similar questions. The training text set is restricted to texts of length 3-50, with emoji, numbers, and e-mail addresses removed.
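The clustering-assisted extraction in S1 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the `encode` stub stands in for the Bert-encoder (here it returns random 768-dimensional vectors), and the DBSCAN parameters would have to be tuned on a real corpus.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

def encode(questions):
    # Placeholder for a Bert-encoder: one fixed-size vector per question.
    # (Random vectors are used here purely so the sketch runs.)
    return rng.normal(size=(len(questions), 768))

questions = [
    "What is the difference between A shares and B shares?",
    "How do A shares differ from B shares?",
    "What is a mutual fund?",
]
embeddings = encode(questions)

# DBSCAN groups dense regions of embedding space; each resulting cluster
# is a candidate knowledge point for the financial-education FAQ.
labels = DBSCAN(eps=25.0, min_samples=1, metric="euclidean").fit_predict(embeddings)

clusters = {}
for q, label in zip(questions, labels):
    clusters.setdefault(int(label), []).append(q)
```

Each entry of `clusters` would then be reviewed by an annotator, who names the knowledge point and marks a few similar questions for it.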
In this embodiment, the question is: What is the difference between A shares and B shares?

The manually added similar questions are: (1) How do you distinguish A shares from B shares? (2) Are A shares or B shares better?
S2. Use the similar-question generation module to generate a large number of similar questions from the small annotated set, review them manually, and construct a sentence-pair matching dataset.

In this embodiment, the sentence-pair matching dataset is shown in Table 1 below:

Table 1

S3. Construct a pre-training model dataset from the sentence-pair matching dataset.

S4. Train a sentence-pair matching model using unsupervised models and supervised deep learning models. The unsupervised models include a WMD model and a SIF model; the supervised models include a BERT model, an ALBERT model, and a RoBERTa model. The predicted probabilities of these models are then fused online with a linear regression + XGBoost scheme: the linear-regression model, which can be trained and updated in real time, is weighted at 40%, and a pre-trained XGBoost model is weighted at 60%. The 60% XGBoost share ensures the stability of the fused model, while the 40% linear-regression share ensures its flexibility.
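The online fusion step in S4 reduces to a weighted average of two probability estimates per candidate question. The sketch below assumes the two upstream models already exist; the probability vectors are made-up stand-ins for their outputs.

```python
import numpy as np

def fuse_scores(p_xgboost, p_linear, w_xgboost=0.6, w_linear=0.4):
    """Blend a pre-trained XGBoost model's match probabilities (60%,
    for stability) with a real-time-updatable linear-regression model's
    probabilities (40%, for flexibility)."""
    return (w_xgboost * np.asarray(p_xgboost, dtype=float)
            + w_linear * np.asarray(p_linear, dtype=float))

p_xgb = [0.9, 0.2, 0.5]   # XGBoost match probability per FAQ candidate
p_lr = [0.7, 0.4, 0.6]    # linear-regression probability per candidate
scores = fuse_scores(p_xgb, p_lr)
best = int(np.argmax(scores))  # index of the best-matching candidate
```

In production, `p_xgb` and `p_lr` would themselves be computed from the WMD/SIF and BERT-family model scores for each candidate pair.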
S5. After the sentence-pair matching model is trained, receive a question input by the user, feed the question text to the sentence-pair matching model, identify the question that best matches the input together with its corresponding answer, and reply to the user.
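The matching loop in S5 can be sketched as below. `score_pair` is a simple token-overlap placeholder for the trained sentence-pair matching model, and the FAQ entries and answers are illustrative, not from the patent.

```python
faq = {
    "What is the difference between A shares and B shares?":
        "A shares and B shares differ in currency and eligible investors.",
    "What is an index fund?":
        "An index fund passively tracks a market index.",
}

def score_pair(user_q, standard_q):
    # Jaccard overlap of lowercase tokens, standing in for the fused model.
    a = set(user_q.lower().split())
    b = set(standard_q.lower().split())
    return len(a & b) / max(len(a | b), 1)

def answer(user_q):
    # Score the user question against every standard question and
    # return the best match with its stored answer.
    best_q = max(faq, key=lambda q: score_pair(user_q, q))
    return best_q, faq[best_q]

matched, reply = answer("How do A shares differ from B shares?")
```

A production system would replace `score_pair` with the fused model scores and add a rejection threshold for questions that match nothing well.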
The multi-model fused FAQ matching device of the present invention comprises a manual annotation module, a similar-question generation module, and an NLU module.

The financial-education corpus database stores pre-input FAQ corpus data and generates the training text set. The training text set is restricted to texts of length 3-50, with emoji, numbers, and e-mail addresses removed.
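The corpus filter just described (length 3-50; emoji, numbers, and e-mail addresses removed) might look as follows. The patent gives only the rules above, so the specific regular expressions are assumptions.

```python
import re

# Assumed patterns; the patent does not specify exact rules.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+")
DIGIT_RE = re.compile(r"\d+")
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def clean(text):
    # Strip e-mail addresses, emoji, and digits, in that order.
    for pattern in (EMAIL_RE, EMOJI_RE, DIGIT_RE):
        text = pattern.sub("", text)
    return text.strip()

def build_training_set(raw_texts):
    # Keep only cleaned texts whose length falls in the 3-50 range.
    cleaned = (clean(t) for t in raw_texts)
    return [t for t in cleaned if 3 <= len(t) <= 50]

sample = [
    "Mail me at user@example.com about A shares 123",
    "ok",                        # too short after cleaning
    "What is an index fund?",
]
texts = build_training_set(sample)
```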
The manual annotation module is used by operators to manually annotate a small number of similar questions in the training text set.

The similar-question generation module includes a similar-question generation model, which generates a large number of similar questions from the small annotated set; after manual review, a sentence-pair matching dataset is constructed.

The NLU module trains the sentence-pair matching model and uses the trained model to match the user's input question, find the best-matching question, output it with its corresponding answer, and reply to the user. The NLU module includes unsupervised models (WMD and SIF) and supervised models (BERT, ALBERT, and RoBERTa).

All of the above modules are deployed in an online environment on two RTX60024G GPU servers. In the online environment, service performance is optimized to handle high concurrency, keeping response time within 300 ms. The optimizations include parallelizing the BERT preprocessing step, hot-loading trained models, and running the multiple models in parallel.
In the multi-model fused FAQ matching method and device of the present invention, a training text set of questions to be processed is obtained, Bert-encoder + DBScan clustering assists in extracting and summarizing financial-education knowledge points to build a financial-education FAQ, a small number of similar questions are manually annotated, a large number of similar questions are then generated from the annotated set and manually reviewed to construct a sentence-pair matching dataset, and unsupervised models and supervised deep learning models train the sentence-pair matching model; finally, once training is complete, a user's question is received and the best-matching question and its corresponding answer are identified and output. By fusing multiple models, training or pre-training models, extracting text, matching standard questions, and replying with the corresponding answers, the invention solves the problems of cumbersome manual FAQ browsing and inefficient human customer service.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010852824.7A CN112101423A (en) | 2020-08-22 | 2020-08-22 | Multi-model fused FAQ matching method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112101423A true CN112101423A (en) | 2020-12-18 |
Family
ID=73754202
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010852824.7A Pending CN112101423A (en) | 2020-08-22 | 2020-08-22 | Multi-model fused FAQ matching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112101423A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110347835A (en) * | 2019-07-11 | 2019-10-18 | 招商局金融科技有限公司 | Text Clustering Method, electronic device and storage medium |
CN110727779A (en) * | 2019-10-16 | 2020-01-24 | 信雅达系统工程股份有限公司 | Question-answering method and system based on multi-model fusion |
CN111191442A (en) * | 2019-12-30 | 2020-05-22 | 杭州远传新业科技有限公司 | Similar problem generation method, device, equipment and medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113505207A (en) * | 2021-07-02 | 2021-10-15 | 中科苏州智能计算技术研究院 | Machine reading understanding method and system for financial public opinion research and report |
CN113505207B (en) * | 2021-07-02 | 2024-02-20 | 中科苏州智能计算技术研究院 | Machine reading understanding method and system for financial public opinion research report |
CN114117022A (en) * | 2022-01-26 | 2022-03-01 | 杭州远传新业科技有限公司 | FAQ similarity problem generation method and system |
CN114117022B (en) * | 2022-01-26 | 2022-05-06 | 杭州远传新业科技有限公司 | FAQ similarity problem generation method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109255119B (en) | Sentence trunk analysis method and system of multi-task deep neural network based on word segmentation and named entity recognition | |
US20220006761A1 (en) | Systems and processes for operating and training a text-based chatbot | |
CN110555095B (en) | Man-machine conversation method and device | |
CN107122416B (en) | A Chinese event extraction method | |
CN106844368B (en) | Method for man-machine conversation, neural network system and user equipment | |
CN108763510B (en) | Intention recognition method, device, equipment and storage medium | |
CN104050160B (en) | Interpreter's method and apparatus that a kind of machine is blended with human translation | |
CN112818106B (en) | Evaluation method for generating question and answer | |
WO2019232893A1 (en) | Method and device for text emotion analysis, computer apparatus and storage medium | |
CN107562863A (en) | Method and system for automatic generation of chat robot reply | |
CN107562792A (en) | A kind of question and answer matching process based on deep learning | |
CN109857846B (en) | Method and device for matching user question and knowledge point | |
CN110555206A (en) | named entity identification method, device, equipment and storage medium | |
US10691900B2 (en) | Adaptable text analytics platform | |
CN117493513A (en) | Question-answering system and method based on vector and large language model | |
CN113204967B (en) | Resume Named Entity Recognition Method and System | |
CN107247739A (en) | A kind of financial publication text knowledge extracting method based on factor graph | |
CN110781681A (en) | Translation model-based elementary mathematic application problem automatic solving method and system | |
CN114625858A (en) | A kind of intelligent reply method and device for government question and answer based on neural network | |
CN112101423A (en) | Multi-model fused FAQ matching method and device | |
CN112115229A (en) | Text intention recognition method, device and system and text classification system | |
CN118861325A (en) | A factory document information retrieval method and device based on large language model | |
CN115080688B (en) | Cross-domain emotion analysis method and device for few samples | |
Kumari et al. | Let's All Laugh Together: A Novel Multitask Framework for Humor Detection in Internet Memes | |
WO2022227196A1 (en) | Data analysis method and apparatus, computer device, and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201218