WO2019196236A1 - Semantic role analysis method, readable storage medium, terminal device and apparatus - Google Patents

Semantic role analysis method, readable storage medium, terminal device and apparatus

Info

Publication number
WO2019196236A1
Authority
WO
WIPO (PCT)
Prior art keywords
participle
vector
speech
word
input matrix
Prior art date
Application number
PCT/CN2018/096258
Other languages
English (en)
Chinese (zh)
Inventor
张依
汪伟
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019196236A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/284 Lexical analysis, e.g. tokenisation or collocates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking

Definitions

  • The present application relates to the field of computer technology, and in particular to a semantic role analysis method, a computer-readable storage medium, a terminal device, and an apparatus.
  • Mainstream research on semantic role analysis focuses on applying various machine learning techniques, using multiple linguistic features to identify and classify semantic roles.
  • The usual practice is to first use a neural network model to determine the part of speech of each participle, and then use a neural network model to determine the semantic role of each participle. Because a single neural network model must take into account, during computation, the influence of the whole sentence on the result for each participle, the neural network model is often very complicated to construct, computationally expensive, and inefficient.
  • The embodiments of the present application provide a semantic role analysis method, a computer-readable storage medium, a terminal device, and an apparatus, so as to solve the problem in current semantic role analysis methods that, because the influence of the whole sentence must be handled within a single neural network model, the neural network model is very complex to construct, computationally intensive, and inefficient.
  • The first aspect of the embodiments of the present application provides a semantic role analysis method, which may include:
  • inputting the second input matrix of each participle into a preset second neural network model to obtain a second output vector of each participle, the second neural network model being a neural network model for performing reverse-order part-of-speech analysis;
  • the part-of-speech vector database being a database recording correspondences between part-of-speech types and part-of-speech vectors;
  • inputting the third input matrix of each participle into a preset third neural network model to obtain a third output vector of each participle, the third neural network model being a neural network model for performing forward-order semantic role analysis;
  • inputting the fourth input matrix of each participle into a preset fourth neural network model to obtain a fourth output vector of each participle, the fourth neural network model being a neural network model for performing reverse-order semantic role analysis; and
  • determining the semantic role type of each participle according to the third output vector and the fourth output vector of each participle.
  • A second aspect of the embodiments of the present application provides a computer-readable storage medium storing computer-readable instructions that, when executed by a processor, implement the steps of the semantic role analysis method described above.
  • A third aspect of the embodiments of the present application provides a semantic role analysis terminal device including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the steps of the semantic role analysis method described above when executing the computer-readable instructions.
  • A fourth aspect of the embodiments of the present application provides a semantic role analysis apparatus, which may include modules for implementing the steps of the semantic role analysis method described above.
  • The embodiments of the present application split the originally complicated neural network model into several relatively simple neural network models, and then comprehensively process the outputs of these neural network models to obtain the result.
  • FIG. 1 is a flowchart of an embodiment of a semantic role analysis method according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of the processing procedure of the first neural network model;
  • FIG. 3 is a schematic flowchart of the processing procedure of the second neural network model;
  • FIG. 4 is a schematic flowchart of the processing procedure of the third neural network model;
  • FIG. 5 is a schematic flowchart of the processing procedure of the fourth neural network model;
  • FIG. 6 is a structural diagram of an embodiment of a semantic role analysis apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of a semantic role analysis terminal device according to an embodiment of the present application.
  • An embodiment of a semantic role analysis method in an embodiment of the present application may include:
  • Step S101: performing word segmentation on the sentence text to obtain the participles constituting the sentence text.
  • Word segmentation refers to dividing the sentence text into individual words, that is, the participles.
  • The sentence text can be segmented according to a general dictionary, which ensures that the separated words are normal words; strings that are not in the dictionary are split apart.
  • When the characters around a boundary can form a word in either direction (for example, in "require god"), the split is decided by statistical word frequency: if the word "require" has the higher frequency, the text is segmented as "require / god"; otherwise it is segmented so as to keep "ask god" together.
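  • As an illustration of this dictionary-plus-frequency rule, here is a minimal Python sketch; the dictionary contents, the frequency values, and the longest-match tie-breaking are illustrative assumptions, not the patent's exact procedure:

```python
# Minimal sketch of dictionary-based segmentation in which ambiguous
# boundaries are resolved by statistical word frequency. The dictionary
# and frequency values are illustrative assumptions.
WORD_FREQ = {"require": 900, "god": 500, "re": 120}

def segment(text: str, max_word_len: int = 8) -> list:
    """Greedy forward segmentation: at each position take the longest
    dictionary match, breaking ties by word frequency; fall back to a
    single character when nothing matches."""
    tokens, i = [], 0
    while i < len(text):
        best, best_key = text[i], (1, -1)           # single-char fallback
        for j in range(i + 1, min(len(text), i + max_word_len) + 1):
            cand = text[i:j]
            if cand in WORD_FREQ:
                key = (len(cand), WORD_FREQ[cand])  # longer, then more frequent
                if key > best_key:
                    best, best_key = cand, key
        tokens.append(best)
        i += len(best)
    return tokens

print(segment("requiregod"))  # -> ['require', 'god']
```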
  • Step S102: searching a preset word vector database for the word vector of each participle, and constructing the first input matrix and the second input matrix of each participle according to the word vectors.
  • The word vector database is a database recording correspondences between words and word vectors; the word vectors may be obtained by training the words with the word2vec model.
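  • The patent does not specify the training corpus or hyperparameters. As a sketch, such a database could be built with the word2vec implementation in gensim (parameter names as in gensim 4.x); the toy corpus and vector size below are assumptions:

```python
# Sketch: building a word -> word-vector lookup with gensim's word2vec.
# The corpus, vector_size and other hyperparameters are assumptions.
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat"], ["the", "dog", "ran"]]  # tokenized sentences
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

# The "word vector database" is then a simple lookup table:
word_vec_db = {w: model.wv[w] for w in model.wv.index_to_key}
print(word_vec_db["cat"].shape)  # (100,)
```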
  • The first input matrix of each participle can be constructed according to the following formula:
  • where n is the serial number of the participle in order of appearance, 1 ≤ n ≤ N; N is the total number of participles in the sentence text; cl is the row number of the first input matrix, 1 ≤ cl ≤ CoupLen; CoupLen is a preset coupling length; wvl is the column number of the first input matrix, 1 ≤ wvl ≤ wVecLen; wVecLen is the length of the word vector of any participle; and the word vector of the nth participle is WordVec_n, with
  • WordVec_n = (WdVecEm_{n,1}, WdVecEm_{n,2}, ..., WdVecEm_{n,wvl}, ..., WdVecEm_{n,wVecLen});
  • FwWdMatrix_n is the first input matrix of the nth participle.
  • The second input matrix of each participle is constructed according to the following formula:
  • where BkWdMatrix_n is the second input matrix of the nth participle.
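  • The two construction formulas appear as images in the original publication and are not reproduced in this text. A plausible reading of the definitions above is that the first input matrix stacks, as its CoupLen rows, the word vectors of the nth participle and its neighbours taken in forward order, while the second input matrix takes them in reverse order; the sketch below implements that reading and should be treated as an assumption:

```python
import numpy as np

def build_input_matrices(vecs: np.ndarray, coup_len: int):
    """vecs: (N, wVecLen) array whose row n-1 is WordVec_n.
    Returns (fw, bk), each of shape (N, CoupLen, wVecLen).
    ASSUMPTION: row cl of FwWdMatrix_n is the word vector of the
    participle cl positions before n (zero-padded at the sentence
    start), and BkWdMatrix_n mirrors this toward following participles."""
    n_words, wv_len = vecs.shape
    fw = np.zeros((n_words, coup_len, wv_len))
    bk = np.zeros((n_words, coup_len, wv_len))
    for n in range(n_words):
        for cl in range(coup_len):
            if n - cl >= 0:
                fw[n, cl] = vecs[n - cl]   # current and preceding vectors
            if n + cl < n_words:
                bk[n, cl] = vecs[n + cl]   # current and following vectors
    return fw, bk
```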
  • Step S103: inputting the first input matrix of each participle into a preset first neural network model to obtain the first output vector of each participle.
  • The first neural network model is a neural network model for performing forward-order part-of-speech analysis, and its processing procedure may specifically include the steps shown in FIG. 2:
  • Step S1031: calculating the first composite vector of each participle.
  • The first composite vector of each participle can be calculated according to the following formula:
  • FwWdCpVec_n = (FwWdCpEm_{n,1}, FwWdCpEm_{n,2}, ..., FwWdCpEm_{n,wvl}, ..., FwWdCpEm_{n,wVecLen}), where
  • Ln is the natural logarithm function,
  • tanh is the hyperbolic tangent function, and
  • FwWdWt_wvl and FwWdWt'_wvl are preset weight coefficients.
  • Step S1032: calculating the first probability value of each part-of-speech type.
  • The first probability value of each part-of-speech type may be calculated according to the following formula:
  • where m is the serial number of the part-of-speech type, 1 ≤ m ≤ M;
  • M is the number of part-of-speech types;
  • FwWdWtVec_m is the preset weight vector corresponding to the mth part-of-speech type; and
  • FwWdProb_{n,m} is the first probability value that the nth participle is of the mth part-of-speech type.
  • Step S1033: constructing the first output vector of each participle.
  • The first output vector of each participle can be constructed according to the following formula:
  • FwWdVec_n = (FwWdProb_{n,1}, FwWdProb_{n,2}, ..., FwWdProb_{n,m}, ..., FwWdProb_{n,M}), where
  • FwWdVec_n is the first output vector of the nth participle.
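  • The element-wise formulas for FwWdCpEm_{n,wvl} and FwWdProb_{n,m} are likewise images in the original publication. The sketch below therefore only mirrors the shape of the computation described in the text: the columns of the input matrix are combined into a composite vector using tanh, the natural logarithm, and the two per-column weights, after which each part-of-speech type receives a score from its weight vector. The exact combination rules (and the softmax mapping of scores to probabilities) are assumptions, not the patent's formulas:

```python
import numpy as np

def forward_pos_model(fw_matrix, wt, wt2, wt_vecs):
    """Structural sketch of the first neural network model.
    fw_matrix: (CoupLen, wVecLen) first input matrix of one participle
    wt, wt2:   (wVecLen,) preset weights FwWdWt_wvl and FwWdWt'_wvl
    wt_vecs:   (M, wVecLen) preset weight vectors FwWdWtVec_m
    ASSUMPTION: the per-column combination and the softmax below stand
    in for the element-wise formulas shown as images in the source."""
    col = fw_matrix.sum(axis=0)                               # merge CoupLen rows
    cp_vec = np.tanh(wt * col + wt2 * np.log1p(np.abs(col)))  # FwWdCpVec_n
    scores = wt_vecs @ cp_vec                                 # one score per POS type
    probs = np.exp(scores - scores.max())                     # numerically stable
    return probs / probs.sum()                                # FwWdVec_n = (FwWdProb_{n,1..M})
```

  • The second neural network model in step S104 below mirrors this computation on the second input matrix with the Bk-prefixed weights.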
  • Step S104: inputting the second input matrix of each participle into a preset second neural network model to obtain the second output vector of each participle.
  • The second neural network model is a neural network model for performing reverse-order part-of-speech analysis, and its processing procedure may specifically include the steps shown in FIG. 3:
  • Step S1041: calculating the second composite vector of each participle.
  • The second composite vector of each participle can be calculated according to the following formula:
  • BkWdCpVec_n = (BkWdCpEm_{n,1}, BkWdCpEm_{n,2}, ..., BkWdCpEm_{n,wvl}, ..., BkWdCpEm_{n,wVecLen}), where
  • BkWdWt_wvl and BkWdWt'_wvl are preset weight coefficients.
  • Step S1042: calculating the second probability value of each part-of-speech type.
  • The second probability value of each part-of-speech type may be calculated according to the following formula:
  • where BkWdWtVec_m is the preset weight vector corresponding to the mth part-of-speech type, and
  • BkWdProb_{n,m} is the second probability value that the nth participle is of the mth part-of-speech type.
  • Step S1043: constructing the second output vector of each participle.
  • The second output vector of each participle can be constructed according to the following formula:
  • BkWdVec_n = (BkWdProb_{n,1}, BkWdProb_{n,2}, ..., BkWdProb_{n,m}, ..., BkWdProb_{n,M}), where
  • BkWdVec_n is the second output vector of the nth participle.
  • Step S105: determining the part-of-speech type of each participle according to the first output vector and the second output vector of each participle.
  • The part-of-speech probability vector of each participle can be calculated according to the following formula:
  • WdProbVec_n = (WdProb_{n,1}, WdProb_{n,2}, ..., WdProb_{n,m}, ..., WdProb_{n,M}), where
  • WdProb_{n,m} = λ1*FwWdProb_{n,m} + λ2*BkWdProb_{n,m}; λ1 and λ2 are preset weight coefficients, and WdProbVec_n is the part-of-speech probability vector of the nth participle.
  • Then CharSeq_n = argmax(WdProbVec_n), where argmax returns the index of the largest element and CharSeq_n is the part-of-speech type number of the nth participle. That is, the part-of-speech type corresponding to the element with the largest value in the part-of-speech probability vector of the nth participle is determined as the part-of-speech type of the nth participle.
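  • This combination step is fully specified by the text above; a minimal sketch follows (the 0.5/0.5 weight values are illustrative, since the patent only calls λ1 and λ2 preset coefficients):

```python
import numpy as np

def combine_and_pick(fw_prob, bk_prob, lam1=0.5, lam2=0.5):
    """WdProb_{n,m} = lam1*FwWdProb_{n,m} + lam2*BkWdProb_{n,m};
    the chosen type is the argmax of the combined probability vector.
    The same rule is reused for semantic roles in step S109."""
    prob_vec = lam1 * np.asarray(fw_prob) + lam2 * np.asarray(bk_prob)
    return prob_vec, int(np.argmax(prob_vec))  # (WdProbVec_n, CharSeq_n)
```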
  • Step S106: searching a preset part-of-speech vector database for the part-of-speech vector corresponding to the part-of-speech type of each participle, and constructing the third input matrix and the fourth input matrix of each participle according to the part-of-speech vectors.
  • The part-of-speech vector database is a database recording correspondences between part-of-speech types and part-of-speech vectors.
  • A part-of-speech vector is the vector representation of a part-of-speech type; it expresses the occurrence probability of the part-of-speech type based on its context information.
  • To train the part-of-speech vectors, each part-of-speech type is first expressed as a 0-1 (one-hot) vector, and a model is then trained that uses the part-of-speech types of the preceding n-1 words to predict the part-of-speech type of the nth word; the intermediate representation produced by the neural network model during prediction is used as the part-of-speech vector.
  • For example, the one-hot vector of the part-of-speech type "noun" may be set to [1, 0, 0, 0, ..., 0],
  • the one-hot vector of the part-of-speech type "adjective" to [0, 1, 0, 0, ..., 0],
  • and the one-hot vector of the part-of-speech type "verb" to [0, 0, 1, 0, ..., 0].
  • The model is trained to generate the coefficient matrix W of the hidden layer;
  • the product of the one-hot vector of each part-of-speech type with this coefficient matrix is the part-of-speech vector of that type, whose final form is a multidimensional vector similar to [-0.11, 0.26, -0.03, ..., 0.71].
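  • The lookup described here is just a matrix product; in the sketch below, the number of part-of-speech types, the vector length, and the (randomly filled) coefficient matrix W are illustrative:

```python
import numpy as np

M, dim = 5, 4                    # 5 POS types, 4-dim part-of-speech vectors
W = np.random.randn(M, dim)      # hidden-layer coefficient matrix (from training)

noun_one_hot = np.eye(M)[0]      # "noun" taken as type 0 here
noun_pos_vec = noun_one_hot @ W  # part-of-speech vector of "noun"

# Multiplying by a one-hot vector simply selects the matching row of W:
assert np.allclose(noun_pos_vec, W[0])
```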
  • The third input matrix of each participle can be constructed according to the following formula:
  • where n is the serial number of the participle in order of appearance, 1 ≤ n ≤ N; N is the total number of participles in the sentence text; cl is the row number of the third input matrix, 1 ≤ cl ≤ CoupLen; CoupLen is a preset coupling length; cvl is the column number of the third input matrix, 1 ≤ cvl ≤ cVecLen; cVecLen is the length of the part-of-speech vector of any participle; and the part-of-speech vector of the nth participle is CharVec_n, with
  • CharVec_n = (CrVecEm_{n,1}, CrVecEm_{n,2}, ..., CrVecEm_{n,cvl}, ..., CrVecEm_{n,cVecLen});
  • FwCrMatrix_n is the third input matrix of the nth participle.
  • The fourth input matrix of each participle is constructed according to the following formula:
  • where BkCrMatrix_n is the fourth input matrix of the nth participle.
  • Step S107: inputting the third input matrix of each participle into a preset third neural network model to obtain the third output vector of each participle.
  • The third neural network model is a neural network model for performing forward-order semantic role analysis, and its processing procedure may specifically include the steps shown in FIG. 4:
  • Step S1071: calculating the third composite vector of each participle.
  • The third composite vector of each participle can be calculated according to the following formula:
  • FwCrCpVec_n = (FwCrCpEm_{n,1}, FwCrCpEm_{n,2}, ..., FwCrCpEm_{n,cvl}, ..., FwCrCpEm_{n,cVecLen}), where
  • Ln is the natural logarithm function,
  • tanh is the hyperbolic tangent function, and
  • FwCrWt_cvl and FwCrWt'_cvl are preset weight coefficients.
  • Step S1072: calculating the first probability value of each semantic role type.
  • The first probability value of each semantic role type may be calculated according to the following formula:
  • where l is the serial number of the semantic role type, 1 ≤ l ≤ L; L is the number of semantic role types; and FwCrWtVec_l is the preset weight vector corresponding to the lth semantic role type;
  • FwCrProb_{n,l} is the first probability value that the nth participle is of the lth semantic role type.
  • Step S1073: constructing the third output vector of each participle.
  • The third output vector of each participle can be constructed according to the following formula:
  • FwCrVec_n = (FwCrProb_{n,1}, FwCrProb_{n,2}, ..., FwCrProb_{n,l}, ..., FwCrProb_{n,L}), where
  • FwCrVec_n is the third output vector of the nth participle.
  • Step S108: inputting the fourth input matrix of each participle into a preset fourth neural network model to obtain the fourth output vector of each participle.
  • The fourth neural network model is a neural network model for performing reverse-order semantic role analysis, and its processing procedure may specifically include the steps shown in FIG. 5:
  • Step S1081: calculating the fourth composite vector of each participle.
  • The fourth composite vector of each participle can be calculated according to the following formula:
  • BkCrCpVec_n = (BkCrCpEm_{n,1}, BkCrCpEm_{n,2}, ..., BkCrCpEm_{n,cvl}, ..., BkCrCpEm_{n,cVecLen}), where
  • BkCrWt_cvl and BkCrWt'_cvl are preset weight coefficients.
  • Step S1082: calculating the second probability value of each semantic role type.
  • The second probability value of each semantic role type may be calculated according to the following formula:
  • where BkCrWtVec_l is the preset weight vector corresponding to the lth semantic role type, and
  • BkCrProb_{n,l} is the second probability value that the nth participle is of the lth semantic role type.
  • Step S1083: constructing the fourth output vector of each participle.
  • The fourth output vector of each participle can be constructed according to the following formula:
  • BkCrVec_n = (BkCrProb_{n,1}, BkCrProb_{n,2}, ..., BkCrProb_{n,l}, ..., BkCrProb_{n,L}), where
  • BkCrVec_n is the fourth output vector of the nth participle.
  • Step S109: determining the semantic role type of each participle according to the third output vector and the fourth output vector of each participle.
  • The semantic role probability vector of each participle can be calculated according to the following formula:
  • CrProbVec_n = (CrProb_{n,1}, CrProb_{n,2}, ..., CrProb_{n,l}, ..., CrProb_{n,L}), where
  • CrProb_{n,l} = λ1*FwCrProb_{n,l} + λ2*BkCrProb_{n,l}; λ1 and λ2 are preset weight coefficients, and CrProbVec_n is the semantic role probability vector of the nth participle.
  • Then RoleSeq_n = argmax(CrProbVec_n), where argmax returns the index of the largest element and RoleSeq_n is the semantic role type number of the nth participle. That is, the semantic role type corresponding to the element with the largest value in the semantic role probability vector of the nth participle is determined as the semantic role type of the nth participle.
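  • Putting steps S101-S109 together, the overall flow can be sketched as follows, reusing the illustrative helpers from the sketches above; the four trained models are assumed to be callables that map an input matrix to a probability vector, and every component here remains an assumption rather than the patent's exact implementation:

```python
import numpy as np

def analyze_roles(sentence, word_vec_db, pos_vec_db, models, coup_len=3):
    """End-to-end sketch of steps S101-S109 (all components illustrative)."""
    tokens = segment(sentence)                                # S101
    wv = np.stack([word_vec_db[t] for t in tokens])           # S102
    fw_m, bk_m = build_input_matrices(wv, coup_len)
    pos_types = [combine_and_pick(models["fw_pos"](fw_m[n]),  # S103-S105
                                  models["bk_pos"](bk_m[n]))[1]
                 for n in range(len(tokens))]
    cv = np.stack([pos_vec_db[p] for p in pos_types])         # S106
    fw_c, bk_c = build_input_matrices(cv, coup_len)
    roles = [combine_and_pick(models["fw_role"](fw_c[n]),     # S107-S109
                              models["bk_role"](bk_c[n]))[1]
             for n in range(len(tokens))]
    return list(zip(tokens, pos_types, roles))
```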
  • In the two most critical processes, the embodiments of the present application thus use two neural network models each: the previously complex neural network model is split into relatively simple neural network models, and the outputs of the neural network models are then comprehensively processed to obtain the result. Because the neural network model structure is simplified, the amount of computation is greatly reduced and analysis efficiency is improved.
  • FIG. 6 is a structural diagram of an embodiment of a semantic role analysis apparatus provided by an embodiment of the present application.
  • A semantic role analysis apparatus may include:
  • the word segmentation module 601, configured to perform word segmentation on the sentence text to obtain the participles constituting the sentence text;
  • the word vector searching module 602, configured to search a preset word vector database for the word vector of each participle, the word vector database being a database recording correspondences between words and word vectors;
  • the word vector matrix construction module 603, configured to construct the first input matrix and the second input matrix of each participle according to the word vectors;
  • the first processing module 604, configured to input the first input matrix of each participle into the preset first neural network model to obtain the first output vector of each participle, the first neural network model being a neural network model for performing forward-order part-of-speech analysis;
  • the second processing module 605, configured to input the second input matrix of each participle into the preset second neural network model to obtain the second output vector of each participle, the second neural network model being a neural network model for performing reverse-order part-of-speech analysis;
  • the part-of-speech type determining module 606, configured to determine the part-of-speech type of each participle according to the first output vector and the second output vector of each participle;
  • the part-of-speech vector searching module 607, configured to search a preset part-of-speech vector database for the part-of-speech vector corresponding to the part-of-speech type of each participle, the part-of-speech vector database being a database recording correspondences between part-of-speech types and part-of-speech vectors;
  • the part-of-speech vector matrix construction module 608, configured to construct the third input matrix and the fourth input matrix of each participle according to the part-of-speech vectors;
  • the third processing module 609, configured to input the third input matrix of each participle into the preset third neural network model to obtain the third output vector of each participle, the third neural network model being a neural network model for performing forward-order semantic role analysis;
  • the fourth processing module 610, configured to input the fourth input matrix of each participle into the preset fourth neural network model to obtain the fourth output vector of each participle, the fourth neural network model being a neural network model for performing reverse-order semantic role analysis; and
  • the semantic role type determining module 611, configured to determine the semantic role type of each participle according to the third output vector and the fourth output vector of each participle.
  • The specific working process of the semantic role analysis apparatus is substantially the same as that in the foregoing embodiments of the semantic role analysis method; reference may be made to the related descriptions in the foregoing method embodiments, and details are not repeated here.
  • FIG. 7 is a schematic block diagram of a semantic role analysis terminal device provided by an embodiment of the present application. For convenience of description, only the parts related to this embodiment are shown.
  • The semantic role analysis terminal device 7 may be a computing device such as a mobile phone, a tablet computer, a desktop computer, a notebook computer, a palmtop computer, or a cloud server.
  • The semantic role analysis terminal device 7 may include a processor 70, a memory 71, and computer-readable instructions 72 stored in the memory 71 and executable on the processor 70 for performing the semantic role analysis method described above.
  • The processor 70 implements the steps in the various embodiments of the semantic role analysis method described above when executing the computer-readable instructions 72.
  • The functional units in the various embodiments of the present application, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium,
  • which includes a number of computer-readable instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.

Abstract

The present invention relates to the technical field of computers, and in particular to a semantic role analysis method, a computer-readable storage medium, a terminal device, and an apparatus. The method comprises the following steps: in the part-of-speech analysis process, one neural network model is used for forward-order part-of-speech analysis and another neural network model is used for reverse-order part-of-speech analysis; in the semantic role analysis process, one neural network model is used for forward-order semantic role analysis and another neural network model is used for reverse-order semantic role analysis. Thus, an initially complex neural network model is split into relatively simple neural network models, and the output of each neural network model is then comprehensively processed to obtain a result. Owing to the simplified structure of the neural network models, the computational load is greatly reduced and analysis efficiency is improved.
PCT/CN2018/096258 2018-04-09 2018-07-19 Semantic role analysis method, readable storage medium, terminal device and apparatus WO2019196236A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810309685.6A CN108804411B (zh) 2018-04-09 2018-04-09 Semantic role analysis method, computer-readable storage medium and terminal device
CN201810309685.6 2018-04-09

Publications (1)

Publication Number Publication Date
WO2019196236A1 true WO2019196236A1 (fr) 2019-10-17

Family

ID=64095371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096258 WO2019196236A1 (fr) 2018-04-09 2018-07-19 Semantic role analysis method, readable storage medium, terminal device and apparatus

Country Status (2)

Country Link
CN (1) CN108804411B (fr)
WO (1) WO2019196236A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110164450B (zh) * 2019-05-09 2023-11-28 腾讯科技(深圳)有限公司 Login method and apparatus, playback device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662931A (zh) * 2012-04-13 2012-09-12 厦门大学 Semantic role labeling method based on a collaborative neural network
CN104462066A (zh) * 2014-12-24 2015-03-25 北京百度网讯科技有限公司 Semantic role labeling method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180633B2 (en) * 2007-03-08 2012-05-15 Nec Laboratories America, Inc. Fast semantic extraction using a neural network architecture
US8392436B2 (en) * 2008-02-07 2013-03-05 Nec Laboratories America, Inc. Semantic search via role labeling
CN104021115A (zh) * 2014-06-13 2014-09-03 北京理工大学 Neural-network-based Chinese comparative sentence recognition method and apparatus
CN107480122B (zh) * 2017-06-26 2020-05-08 迈吉客科技(北京)有限公司 Artificial intelligence interaction method and artificial intelligence interaction apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662931A (zh) * 2012-04-13 2012-09-12 厦门大学 Semantic role labeling method based on a collaborative neural network
CN104462066A (zh) * 2014-12-24 2015-03-25 北京百度网讯科技有限公司 Semantic role labeling method and apparatus

Also Published As

Publication number Publication date
CN108804411B (zh) 2019-10-29
CN108804411A (zh) 2018-11-13

Similar Documents

Publication Publication Date Title
US9613024B1 (en) System and methods for creating datasets representing words and objects
WO2019196314A1 Text information similarity matching method and apparatus, computer device, and storage medium
WO2020062770A1 Domain dictionary construction method and apparatus, device, and storage medium
Fan et al. Apply word vectors for sentiment analysis of APP reviews
Firmanto et al. Prediction of movie sentiment based on reviews and score on rotten tomatoes using sentiwordnet
WO2021000497A1 Retrieval method and apparatus, computer device, and storage medium
US10915707B2 (en) Word replaceability through word vectors
US20200278976A1 (en) Method and device for evaluating comment quality, and computer readable storage medium
WO2021169423A1 Quality inspection method, apparatus and device for customer service recordings, and storage medium
CN111813895B (zh) 一种基于层次注意力机制和门机制的属性级别情感分析方法
US20200073890A1 (en) Intelligent search platforms
WO2020063071A1 Sentence vector calculation method based on the χ2 test, and text classification method and system
US20240111956A1 (en) Nested named entity recognition method based on part-of-speech awareness, device and storage medium therefor
CN111291177A (zh) 一种信息处理方法、装置和计算机存储介质
Chang et al. A METHOD OF FINE-GRAINED SHORT TEXT SENTIMENT ANALYSIS BASED ON MACHINE LEARNING.
WO2022228127A1 Element text processing method and apparatus, electronic device, and storage medium
US11829722B2 (en) Parameter learning apparatus, parameter learning method, and computer readable recording medium
CN109344246B (zh) 一种电子问卷生成方法、计算机可读存储介质及终端设备
CN115168580A (zh) 一种基于关键词提取与注意力机制的文本分类方法
US20220222442A1 (en) Parameter learning apparatus, parameter learning method, and computer readable recording medium
WO2019196236A1 (fr) Procédé d'analyse de rôle sémantique, support d'informations lisible, dispositif terminal et appareil
CN108694176B (zh) 文档情感分析的方法、装置、电子设备和可读存储介质
CN112632272A (zh) 基于句法分析的微博情感分类方法和系统
Sajeevan et al. An enhanced approach for movie review analysis using deep learning techniques
CN109670171B (zh) 一种基于词对非对称共现的词向量表示学习方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18914135

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18914135

Country of ref document: EP

Kind code of ref document: A1