WO2022198750A1 - Semantic recognition method (语义识别方法) - Google Patents

Semantic recognition method (语义识别方法)

Info

Publication number
WO2022198750A1
Authority
WO
WIPO (PCT)
Prior art keywords
intent
model
text
semantic
training
Prior art date
Application number
PCT/CN2021/091024
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
张晖
李吉媛
赵海涛
孙雁飞
朱洪波
Original Assignee
南京邮电大学 (Nanjing University of Posts and Telecommunications)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京邮电大学 (Nanjing University of Posts and Telecommunications)
Priority to JP2022512826A (granted as JP7370033B2)
Publication of WO2022198750A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/194Calculation of difference between files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • the present application relates to the field of natural language processing, and in particular, to a natural language semantic analysis method in a human-machine dialogue system.
  • semantic analysis is divided into two basic subtasks: intent recognition and semantic slot filling. The traditional research method treats these two subtasks as independent problems to be solved separately and then concatenates their results.
  • Each exemplary embodiment of the present application provides a semantic recognition method, including:
  • S102: construct a multi-intent recognition model based on clustering pre-analysis, and recognize the user's multiple intents from the intent text vector;
  • constructing the multi-intent recognition model based on clustering pre-analysis and recognizing the user's multiple intents proceeds in two stages:
  • in the first stage, the K-means clustering algorithm divides the input intent text vectors into a single-intent category and a multi-intent category;
  • in the second stage, intent text vectors of the single-intent category are classified by a softmax classifier to identify the single intent, and intent text vectors of the multi-intent category are classified by a sigmoid classifier to identify the multiple intents.
  • the distance function f_Sim(x_i, x_j) in the K-means clustering algorithm combines cosine similarity and Euclidean distance: f_Sim(x_i, x_j) denotes the distance between the intent text vectors x_i and x_j, f_1(x_i, x_j) denotes the cosine similarity between x_i and x_j, and f_2(x_i, x_j) denotes the Euclidean distance between x_i and x_j.
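  • as a concrete illustration, the following Python sketch pairs such a combined similarity with a two-way K-means pass. The fusion in f_sim (cosine similarity damped by Euclidean distance) is only an assumed stand-in, since the exact combining formula is not given above, and kmeans_two_way is a hypothetical helper:

```python
import numpy as np

def cosine_sim(a, b):
    # f1: cosine similarity between two intent text vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def euclidean(a, b):
    # f2: Euclidean distance between two intent text vectors
    return float(np.linalg.norm(a - b))

def f_sim(a, b):
    # Assumed fusion of f1 and f2; the patent's exact formula is not shown here.
    return cosine_sim(a, b) / (1.0 + euclidean(a, b))

def kmeans_two_way(X, n_iter=20, seed=0):
    """Cluster intent text vectors into k=2 groups (single- vs multi-intent),
    assigning each vector to the center it is most similar to under f_sim."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.array([max((0, 1), key=lambda c: f_sim(x, centers[c]))
                           for x in X])
        for c in (0, 1):
            if np.any(labels == c):   # recompute each cluster center
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```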
  • performing optimization training on the joint model in step S104 includes:
  • the loss function Loss intent of the multi-intent recognition model satisfies the following formula:
  • Loss_intent = (Loss_multi)^k · (Loss_single)^(1−k)
  • k denotes the category of the intent text: k is 1 when the intent text contains multiple intents and 0 when it contains a single intent; Loss_multi is the cross-entropy loss for multi-intent recognition, Loss_single is the cross-entropy loss for single-intent recognition, y_I is the predicted intent output, y_intent is the real intent, and T is the number of training texts.
  • the loss function Loss slot of the semantic slot filling model satisfies the following formula:
  • This application fully considers the connection between intent recognition and semantic slot filling: it constructs a joint recognition model that combines the two semantic analysis subtasks into one task and shares the underlying BERT semantic features. A Slot-Gated correlation gate then generates the intent-semantic slot joint feature vector, which is used for the semantic slot filling task.
  • BiLSTM is used to capture the word order features of the text to obtain contextual semantic information
  • CRF is used as the decoder to model dependencies between adjacent labels, making the semantic slot labeling more reasonable.
  • an algorithm based on clustering pre-analysis is proposed to determine the number of intents, addressing the uncertainty in user-input intentions.
  • the traditional measure of semantic similarity is improved and a new measure is proposed.
  • the new measure captures the similarity between intent texts more effectively, improving the accuracy of the intent-count judgment and the robustness of the algorithm.
  • to strengthen semantic analysis and make full use of intent semantic information to guide semantic slot filling, a step-by-step iterative training method based on the idea of iteration is proposed; it exploits the relationship between intents and semantic slots, improving the accuracy of semantic slot filling while also improving the accuracy of the multi-intent recognition model, thereby improving the overall effect of semantic analysis.
  • FIG. 1 is a block diagram of the overall structure of a joint modeling method according to an embodiment of the application.
  • FIG. 3 is a structural diagram of a semantic slot recognition model according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a step-by-step iterative training method of a joint identification model according to an embodiment of the present application.
  • the traditional research method treats the two tasks of intent recognition and semantic slot filling as independent problems to be solved separately and then concatenates their results.
  • intent recognition judges the type of user need, while semantic slot filling concretizes that need; the user intent and the slots to be identified are therefore strongly correlated, and the purpose of intent recognition is to better fill the semantic slots.
  • the traditional separate modeling method does not fully consider the connection between the two tasks, so that the semantic information cannot be effectively utilized.
  • human-machine dialogue systems often face the problem of multi-intent recognition: the intent text input by the user may contain not just one intent but several.
  • research on intent recognition has mainly focused on single-intent recognition; compared with single-intent recognition, multi-intent recognition is both more complex and demands a higher degree of semantic understanding.
  • the inventors found that, for the semantic analysis problem in human-machine dialogue systems, proposing a joint modeling method that effectively handles multi-intent recognition and semantic slot filling on the basis of the existing technology is one of the problems that those skilled in the art urgently need to solve.
  • an embodiment of the present application discloses a method for joint identification of multi-intent and semantic slots based on clustering pre-analysis, including:
  • Step S101: obtain the multi-intent text input by the current user in real time and preprocess it;
  • Step S102: construct a multi-intent recognition model based on clustering pre-analysis to recognize the user's multiple intents;
  • Step S103: construct a BiLSTM-CRF semantic slot filling model based on the Slot-Gated correlation gate mechanism, making full use of the intent recognition result to guide semantic slot filling;
  • Step S104: optimize the constructed joint model of multi-intent recognition and semantic slot filling.
  • preprocessing the multi-intent text input by the current user means representing it as vectors so that it can be fed into the neural network model for semantic feature extraction.
  • the vectorized representation in this embodiment first trains a BERT (Bidirectional Encoder Representations from Transformers) model on a massive unsupervised corpus of texts in the same field (Chinese, English, or texts in other languages); the resulting BERT pretrained model is then used to vectorize the multi-intent text.
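  • a minimal sketch of this vectorization step, assuming the Hugging Face transformers library and the generic bert-base-chinese checkpoint (the embodiment instead pretrains BERT on an in-domain corpus):

```python
import torch
from transformers import BertModel, BertTokenizer

# Generic checkpoint used here for illustration only; the embodiment would
# load a BERT model pretrained on in-domain text.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

def vectorize(text: str):
    """Return the [CLS] sentence vector C and the per-token vectors."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = bert(**inputs)
    token_vectors = out.last_hidden_state   # shape: (1, seq_len, hidden)
    cls_vector = token_vectors[:, 0, :]     # C: the vector of the [CLS] token
    return cls_vector, token_vectors
```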
  • the multi-intent recognition model based on clustering pre-analysis constructed in step S102 ultimately serves semantic slot filling.
  • the accuracy of multi-intent recognition directly affects the filling of semantic slots.
  • the embodiment of the present application therefore proposes a clustering pre-analysis method: the intent text is analyzed before intent recognition to determine whether it contains a single intent or multiple intents.
  • intent recognition based on clustering pre-analysis includes the following steps.
  • the first stage uses the K-means clustering algorithm to determine the type of the input intent text.
  • the input intent texts are then classified according to the judged number of intents.
  • for multi-intent texts, a multi-intent classifier is used: a fully connected layer is added after the BERT pretrained model, each node of which is connected to all nodes of the previous layer so as to fuse the previously extracted semantic features; the intent text vector output by the BERT model is then fed into a sigmoid classifier, which performs a binary classification for each label to output multiple intent labels.
  • the label prediction is calculated as: y_I = sigmoid(W_I · C + b_I)
  • where y_I is the predicted probability, W_I is the weight of intent recognition, C is the intent text vector, and b_I is the bias of intent recognition.
  • for single-intent texts, the softmax classifier is used: the vector C of the first token ([CLS]) output by BERT is fed directly into the classifier, and the predicted intent label is obtained as: y_I = softmax(W_I · C + b_I)
  • where y_I is the predicted probability, W_I is the weight of intent recognition, C is the intent text vector, and b_I is the bias of intent recognition.
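  • both prediction heads can be sketched over one shared linear layer standing in for W_I and b_I; IntentHeads is a hypothetical module name and the shared layer is an assumption:

```python
import torch
import torch.nn as nn

class IntentHeads(nn.Module):
    """Sketch of the two classifiers over the BERT [CLS] vector C: a sigmoid
    head for multi-intent texts and a softmax head for single-intent texts."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.fc = nn.Linear(hidden_size, num_labels)   # W_I and b_I

    def forward(self, C: torch.Tensor, is_multi: bool) -> torch.Tensor:
        logits = self.fc(C)                    # W_I · C + b_I
        if is_multi:
            return torch.sigmoid(logits)       # one binary decision per label
        return torch.softmax(logits, dim=-1)   # one distribution over labels
```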
  • a BiLSTM-CRF semantic slot filling model is constructed based on the Slot-Gated correlation gate mechanism, and the result of intention recognition is fully utilized to guide the filling of the semantic slot.
  • the Slot-Gated correlation gate mechanism, shown in FIG. 3, links the intent recognition task with the semantic slot filling task: a weighted sum of the intent vector from intent recognition and the intent text vector for semantic slot filling is passed through the tanh activation function to obtain the intent-semantic slot joint feature vector g.
  • the intent-semantic slot joint feature vector is calculated as: g = v · tanh(c^S + W · c^I)
  • where c^I denotes the intent vector, c^S denotes the slot feature vector having the same dimensions as c^I, and v and W are a trainable vector and a trainable matrix, respectively.
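  • a sketch of this gate in PyTorch, assuming the intent vector c^I is broadcast across the sequence of slot features; the module name SlotGate and the elementwise use of v are illustrative choices:

```python
import torch
import torch.nn as nn

class SlotGate(nn.Module):
    """Sketch of the Slot-Gated fusion g = v · tanh(c_S + W · c_I)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)  # trainable W
        self.v = nn.Parameter(torch.randn(hidden_size))           # trainable v

    def forward(self, c_slot: torch.Tensor, c_intent: torch.Tensor):
        # c_slot: (batch, seq_len, hidden); c_intent: (batch, hidden)
        fused = torch.tanh(c_slot + self.W(c_intent).unsqueeze(1))
        return self.v * fused   # joint feature vector g at each position
```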
  • the intent-semantic slot joint feature vector g is input into a BiLSTM (Bi-directional Long Short-Term Memory) neural network to extract the word-order features of the text and capture deep contextual semantic information; a linear layer added after the BiLSTM maps the dimensions of the network's output vectors for semantic slot decoding; finally, a CRF decoding layer serves as the decoding unit and outputs the slot label corresponding to each word in the sequence.
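  • a sketch of this BiLSTM-linear-CRF decoder, assuming the third-party pytorch-crf package provides the CRF layer; SlotFiller is a hypothetical name:

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf (assumed dependency)

class SlotFiller(nn.Module):
    """Sketch of the slot decoder: BiLSTM over the joint features g, a linear
    projection to tag space, and a CRF layer for label-sequence decoding."""
    def __init__(self, feat_dim: int, hidden: int, num_tags: int):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)  # map to slot-tag logits
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, g: torch.Tensor, tags: torch.Tensor = None):
        emissions, _ = self.bilstm(g)
        emissions = self.proj(emissions)
        if tags is not None:                  # training: CRF negative log-likelihood
            return -self.crf(emissions, tags)
        return self.crf.decode(emissions)     # inference: best slot-tag sequence
```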
  • in step S104, the constructed joint model of multi-intent recognition and semantic slot filling is optimized.
  • the performance of the joint recognition model is jointly determined by the two subtasks.
  • the joint probability of multi-intent recognition and semantic slot filling is: p(y^I, y^S | x) = p(y^I | x) · ∏_{t=1}^{T} p(y_t^S | x)
  • where p(y^I, y^S | x) is the joint conditional probability of the intent labels and slot labels given the input x, T is the length of the input text sequence, and t indexes the t-th character in the text sequence.
  • the training goal is to maximize the joint probability of outputting multi-intent recognition and semantic slot filling.
  • the joint recognition model is optimized by making full use of intent semantic information for filling semantic slots.
  • this replaces the traditional method of simply adding the loss functions of the multiple tasks.
  • a step-by-step iterative training method combining multi-intent recognition and semantic slot filling is proposed.
  • the training data is input into the joint recognition model.
  • a multi-intent recognition model is trained first.
  • the multi-intent recognition model parameters and the underlying BERT model parameters are updated through backpropagation.
  • the updated model is used to transfer the semantic features of the multi-intent recognition results to the Slot-Gated correlation gate.
  • the semantic features of the intent are fused with the semantic slot features generated by using the updated BERT model to generate an intent-semantic slot joint feature vector.
  • the generated intent-semantic slot joint feature vector is used to train the semantic slot filling model.
  • the semantic slot filling model parameters and the underlying BERT model parameters are updated through backpropagation. Repeat the training until the optimum is reached.
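  • the alternating procedure can be sketched as follows; intent_loss and slot_loss are hypothetical methods standing in for the two forward passes, and using two optimizers whose parameter groups both include the shared BERT is one plausible reading of the shared-update scheme:

```python
def train_stepwise(joint_model, batches, opt_intent, opt_slot, rounds=10):
    """Sketch of the step-by-step iterative training; names are illustrative."""
    for _ in range(rounds):
        for batch in batches:
            # Step 1: train the multi-intent recognition model; backpropagation
            # updates its parameters and the shared underlying BERT parameters.
            opt_intent.zero_grad()
            joint_model.intent_loss(batch).backward()
            opt_intent.step()

            # Step 2: with the updated BERT, fuse the intent features through
            # the Slot-Gated gate and train the slot filling model; this pass
            # again updates the slot parameters and the shared BERT parameters.
            opt_slot.zero_grad()
            joint_model.slot_loss(batch).backward()
            opt_slot.step()
```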
  • the two tasks of multi-intent recognition and semantic slot filling share the underlying parameters of the BERT model during training; that is, when training one model, the training result of the other model is used to initialize the underlying model.
  • the upstream tasks are trained separately, while the result of intent recognition is transferred to the semantic slot filling task, improving the accuracy of the multi-intent recognition model while also improving the accuracy of semantic slot filling.
  • the loss function is critical for model parameter updates; if the loss function is chosen unreasonably, the final result will be poor no matter how powerful the model is.
  • the multi-intent recognition loss function Loss intent in the joint recognition model is as follows:
  • Loss_intent = (Loss_multi)^k · (Loss_single)^(1−k)
  • k represents the category of the intent text
  • k is 1 when the intent text contains multiple intents
  • k is 0 when the intent text is a single intent.
  • Loss_multi is the cross-entropy loss of multi-intent recognition and Loss_single is the cross-entropy loss of single-intent recognition; consistent with the sigmoid and softmax heads above, they take the standard cross-entropy forms Loss_multi = −Σ_{j=1}^{T1} [y_intent,j · log y_I,j + (1 − y_intent,j) · log(1 − y_I,j)] and Loss_single = −Σ_j y_intent,j · log y_I,j (summed over the single-intent training texts)
  • where y_I is the predicted intent output, y_intent is the real intent, j indexes a text in the training set, and T1 is the number of training texts for multi-intent recognition.
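  • because k is either 0 or 1, the product form of Loss_intent reduces to selecting one of the two cross-entropies per text; the following sketch makes that selection explicit (function and argument names are illustrative):

```python
import torch
import torch.nn.functional as F

def intent_loss(pred: torch.Tensor, target: torch.Tensor, is_multi: bool):
    """Sketch of Loss_intent = (Loss_multi)^k · (Loss_single)^(1-k): with
    k in {0, 1}, the exponents simply pick which cross-entropy applies."""
    if is_multi:   # k = 1: per-label binary cross-entropy on sigmoid outputs
        return F.binary_cross_entropy(pred, target)
    # k = 0: categorical cross-entropy on softmax outputs (target: class index)
    return F.nll_loss(torch.log(pred + 1e-12), target)
```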
  • in the loss function Loss_slot of the semantic slot filling task in the joint recognition model, W11 and W12 represent the weights of multi-intent recognition, and Ws1 and Ws2 represent the weights of semantic slot filling.
  • although the steps in the flowcharts of FIGS. 1-4 are displayed in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated in the embodiments of the present application, the execution order of these steps is not strictly limited, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 1-4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • the present application also provides a multi-intent and semantic slot joint recognition system based on clustering pre-analysis, comprising a memory and a processor; the memory stores a computer program which, when executed by the processor, implements the above multi-intent and semantic slot joint recognition method.
  • the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above method for joint recognition of multiple intents and semantic slots are implemented.
  • the computer-readable storage medium may include a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
PCT/CN2021/091024 2021-03-26 2021-04-29 Semantic recognition method WO2022198750A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022512826A JP7370033B2 (ja) 2021-03-26 2021-04-29 Semantic recognition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110325369.X 2021-03-26
CN202110325369.XA CN113204952B (zh) 2021-03-26 2021-03-26 Multi-intent and semantic slot joint recognition method based on clustering pre-analysis

Publications (1)

Publication Number Publication Date
WO2022198750A1 (zh)

Family

ID=77025737

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/091024 WO2022198750A1 (zh) 2021-03-26 2021-04-29 Semantic recognition method

Country Status (3)

Country Link
JP (1) JP7370033B2 (ja)
CN (1) CN113204952B (zh)
WO (1) WO2022198750A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887237A (zh) * 2021-09-29 2022-01-04 平安普惠企业管理有限公司 Slot prediction method and apparatus for multi-intent text, and computer device
CN114372474A (zh) * 2021-12-30 2022-04-19 思必驰科技股份有限公司 Abstract label transfer modeling method, semantic understanding system, and electronic device
CN115292463B (zh) * 2022-08-08 2023-05-12 云南大学 Information-extraction-based method for joint multi-intent detection and overlapping slot filling
CN115359786A (zh) * 2022-08-19 2022-11-18 思必驰科技股份有限公司 Method and apparatus for training and using a multi-intent semantic understanding model
CN115273849B (zh) * 2022-09-27 2022-12-27 北京宝兰德软件股份有限公司 Intent recognition method and apparatus for audio data
CN117435738B (zh) * 2023-12-19 2024-04-16 中国人民解放军国防科技大学 Deep-learning-based text multi-intent analysis method and system
CN118037362B (zh) * 2024-04-12 2024-07-05 中国传媒大学 Sequential recommendation method and system based on contrasting a user's multiple intents

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767408B (zh) * 2020-05-27 2023-06-09 青岛大学 Causal event graph construction method based on an ensemble of multiple neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200257856A1 (en) * 2019-02-07 2020-08-13 Clinc, Inc. Systems and methods for machine learning based multi intent segmentation and classification
CN110008476A (zh) * 2019-04-10 2019-07-12 出门问问信息科技有限公司 Semantic parsing method, apparatus, device, and storage medium
CN110321418A (zh) * 2019-06-06 2019-10-11 华中师范大学 Deep-learning-based method for domain recognition, intent recognition, and slot filling
CN112035626A (zh) * 2020-07-06 2020-12-04 北海淇诚信息科技有限公司 Rapid recognition method and apparatus for large-scale intents, and electronic device
CN112183062A (zh) * 2020-09-28 2021-01-05 云知声智能科技股份有限公司 Spoken language understanding method based on alternating decoding, electronic device, and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116795886A (zh) * 2023-07-13 2023-09-22 杭州逍邦网络科技有限公司 Data analysis engine and method for sales data
CN116795886B (zh) * 2023-07-13 2024-03-08 杭州逍邦网络科技有限公司 Data analysis engine and method for sales data
CN117435716A (zh) * 2023-12-20 2024-01-23 国网浙江省电力有限公司宁波供电公司 Data processing method and system for power grid human-machine interaction terminals
CN117435716B (zh) * 2023-12-20 2024-06-11 国网浙江省电力有限公司宁波供电公司 Data processing method and system for power grid human-machine interaction terminals
CN117765949A (zh) * 2024-02-22 2024-03-26 青岛海尔科技有限公司 Sentence multi-intent recognition method and apparatus based on semantic dependency analysis
CN117765949B (zh) * 2024-02-22 2024-05-24 青岛海尔科技有限公司 Sentence multi-intent recognition method and apparatus based on semantic dependency analysis
CN117909508A (zh) * 2024-03-20 2024-04-19 成都赛力斯科技有限公司 Intent recognition method, model training method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN113204952A (zh) 2021-08-03
JP7370033B2 (ja) 2023-10-27
CN113204952B (zh) 2023-09-15
JP2023522502A (ja) 2023-05-31

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022512826

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932367

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21932367

Country of ref document: EP

Kind code of ref document: A1