WO2020147409A1 - Text classification method and apparatus, computer device, and storage medium

Text classification method and apparatus, computer device, and storage medium

Info

Publication number
WO2020147409A1
Authority
WO
WIPO (PCT)
Prior art keywords
classifier
sentiment
level
word vector
classifiers
Prior art date
Application number
PCT/CN2019/118342
Other languages
English (en)
Chinese (zh)
Inventor
金戈
徐亮
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020147409A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification

Definitions

  • This application relates to the field of natural language processing, and in particular to a text classification method, apparatus, computer device, and storage medium based on contextual word vectors and deep learning.
  • Existing fully connected classifiers have the problems of excessive computation, long training time, and insufficient accuracy and generalization ability.
  • The purpose of this application is to provide a multi-loss-function text classification method, apparatus, computer device, and storage medium that solve the problems in the prior art and offer better learning and generalization capabilities.
  • This application provides a text classification method with multiple loss functions, which includes the following steps:
  • S10: Construct word vectors, converting the input text into vector form;
  • S20: Input the word vectors from S10 into at least two sets of sentiment classifiers for training; after each sentiment classifier has been trained on the word vectors, the output of its fully connected layer is fed to its own loss function. Each sentiment classifier selects different sentiment features according to the different classification requirements of the business;
  • S30: Cross-learn and update the sentiment classifiers: according to the number of sentiment classifiers, sum the individual loss functions with equal weights into the overall loss function LOSSes, and update the sentiment classifiers according to the overall loss function until it no longer decreases.
  • The present application also provides a text classification device, which includes:
  • a word vector construction module, which converts the input text into word-vector form;
  • a word vector input and preliminary classification module, which inputs the word vectors into at least two sets of sentiment classifiers and feeds the output of each sentiment classifier's fully connected layer to its own loss function;
  • each sentiment classifier selects different sentiment features according to the different classification requirements of the business;
  • an overall loss function acquisition and update module, which sums the loss functions with equal weights into the overall loss function LOSSes and updates the sentiment classifiers based on it until the overall loss function no longer decreases.
  • This application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor.
  • When the processor executes the computer program, it implements the following steps of the text classification method:
  • converting the input text into word-vector form;
  • inputting the word vectors into at least two sets of sentiment classifiers and feeding the output of each sentiment classifier's fully connected layer to its own loss function, where each sentiment classifier selects different sentiment features according to the different classification requirements of the business;
  • summing the loss functions with equal weights into the overall loss function LOSSes and updating the sentiment classifiers based on it until the overall loss function no longer decreases.
  • The present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the following steps of the text classification method are realized:
  • converting the input text into word-vector form;
  • inputting the word vectors into at least two sets of sentiment classifiers and feeding the output of each sentiment classifier's fully connected layer to its own loss function, where each sentiment classifier selects different sentiment features according to the different classification requirements of the business;
  • summing the loss functions with equal weights into the overall loss function LOSSes and updating the sentiment classifiers based on it until the overall loss function no longer decreases.
  • This application provides a text classification method, device, computer equipment, and storage medium that convert the text input into at least two branches.
  • The sentiment classifier of each branch determines multiple sets of different sentiment features according to the different classification requirements of the business.
  • The sentiment classifiers are finally merged in the fully connected layer and trained with different independent loss functions, and the multiple classifiers then cross-learn (that is, the sentiment classifiers are updated according to the overall loss function until it no longer decreases), so the model of each channel can be updated at the same time, giving higher accuracy. The method can also predict sentiment collocations that never appeared in the training set: where the original model predicts n classes, it can predict n*n sentiment collocations, so it has good generalization ability.
  • FIG. 1 is a flowchart of an embodiment of a text classification method according to this application.
  • FIG. 3 is a schematic diagram of program modules of an embodiment of a text classification device of this application.
  • FIG. 4 is a schematic diagram of the hardware structure of an embodiment of the text classification device of this application.
  • This application discloses a method for text classification, which is based on multiple loss functions, as shown in Fig. 1, and includes the following steps:
  • Step S10: Construct word vectors, converting the input text into vector form. In step S10, the word2vec tool is used to obtain semantic word vectors for the text.
  • Specifically, the CBOW model is used, which maximizes the language prediction probability by adjusting the values of the neural network's hidden weight matrix; a sketch follows.
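  • As a hedged illustration only: the minimal Python sketch below obtains CBOW word vectors with gensim's Word2Vec, a common implementation of the word2vec tool; the toy corpus and the vector size, window, and min_count settings are assumptions, not values taken from this application.

```python
# Minimal sketch of step S10: semantic word vectors via CBOW word2vec.
# gensim's Word2Vec stands in for "the word2vec tool"; the corpus and
# all hyperparameters below are illustrative assumptions.
from gensim.models import Word2Vec

corpus = [
    ["the", "movie", "was", "wonderful"],
    ["the", "plot", "was", "sad", "and", "slow"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # dimensionality of each word vector
    window=5,         # context window used by CBOW
    sg=0,             # sg=0 selects CBOW (sg=1 would be skip-gram)
    min_count=1,      # keep every word in this toy corpus
)

word_vector = model.wv["movie"]  # a 100-dimensional semantic word vector
```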
  • S20: Input the word vectors from S10 into at least two sets of sentiment classifiers for training; after each sentiment classifier has been trained on the word vectors, the output of its fully connected layer is fed to its own loss function. Each sentiment classifier selects different sentiment features according to the different classification requirements of the business; the loss function of each sentiment classifier can be a cross-entropy loss function.
  • S30: Cross-learn and update the sentiment classifiers: according to the number of sentiment classifiers, sum the individual loss functions with equal weights into the overall loss function LOSSes, and update the sentiment classifiers according to the overall loss function until it no longer decreases (see the sketch below).
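  • A minimal sketch of the equal-weight combination in S30; the function name and the list-of-losses interface are illustrative assumptions:

```python
# Hedged sketch of S30's overall loss: with k sentiment classifiers,
# each classifier's loss enters with equal weight 1/k.
def overall_loss(losses):
    weight = 1.0 / len(losses)  # equal weight per sentiment classifier
    return sum(weight * loss for loss in losses)
```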
  • The text classification method provided in this application converts the text input into at least two branches.
  • The sentiment classifier of each branch determines multiple sets of different sentiment features according to the different classification requirements of the business; the sentiment classifiers are finally merged in the fully connected layer and trained with different independent loss functions, and cross-learning of the multiple classifiers then achieves multi-label classification with better generalization and calibration.
  • The above text classification method can update the models of multiple channels at the same time, so it has higher accuracy. It can also predict sentiment collocations that never appeared in the training set: compared with an original model that predicts n classes, it can predict n*n sentiment collocations, so it has better generalization ability.
  • A total of two sets of sentiment classifiers are set up, namely a first-level sentiment classifier and a second-level sentiment classifier.
  • The first-level sentiment classifier is used to classify the positive or negative sentiment of the input text; the second-level sentiment classifier is used to classify the specific emotion type of the input text.
  • The text classification method shown in this embodiment includes the following steps:
  • S11: Construct word vectors, converting the input text into vector form;
  • S21: Use the word vectors from S11 as the input of the first-level sentiment classifier and the second-level sentiment classifier, and output the fully connected layers of both classifiers to their respective loss functions.
  • The first-level sentiment classifier can be used to classify the positive or negative sentiment of the input text; the second-level sentiment classifier can be used to classify the specific emotion type of the input text. After the first-level and second-level sentiment classifiers are trained on the word vectors, their fully connected layers are output to their respective loss functions.
  • A first-level sentiment classifier is established based on TextRNN and an attention mechanism.
  • TextCNN or TextRCNN can also be used instead of the TextRNN-plus-attention scheme; a second-level sentiment classifier is established based on TextCNN.
  • In addition, TextRNN can be used instead of the TextCNN scheme. A sketch of the first-level classifier follows.
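  • As a non-authoritative sketch, a first-level classifier of this kind could look as follows in PyTorch; the two-class (positive/negative) output follows the description above, while the BiLSTM, the dimensions, and all names are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextRNNAttention(nn.Module):
    """Sketch of a TextRNN-plus-attention first-level classifier."""

    def __init__(self, embed_dim=100, hidden=128, num_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(embed_dim, hidden, batch_first=True,
                           bidirectional=True)
        self.w = nn.Linear(2 * hidden, 2 * hidden)        # W_w and b_w
        self.u_w = nn.Parameter(torch.randn(2 * hidden))  # context vector u_w
        self.fc = nn.Linear(2 * hidden, num_classes)      # fully connected layer

    def forward(self, x):                      # x: (batch, seq_len, embed_dim)
        h, _ = self.rnn(x)                     # h_t for every position
        u = torch.tanh(self.w(h))              # u_t = tanh(W_w h_t + b_w)
        alpha = F.softmax(u @ self.u_w, dim=1)        # attention weights
        s = (alpha.unsqueeze(-1) * h).sum(dim=1)      # attended sentence vector
        return self.fc(s)                      # logits fed to the loss function
```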
  • TextCNN is composed of a convolutional layer (Conv), an activation function, batch normalization (BN), and max pooling (MaxPooling).
  • Conv is a convolutional layer used to capture local correlations in the text.
  • The activation function adds a nonlinear transformation that enhances the network's expressive capacity; BN prevents gradient dispersion so that the model converges better and faster; MaxPooling takes the maximum over local features and reduces the amount of computation.
  • ⁇ in X2 is the mean
  • is the variance, which is the mean and variance of a hidden variable of a selected sample in a certain calculation
  • ⁇ and ⁇ in X3 are hyperparameters for offset and scaling.
  • The fully connected layers of the first-level sentiment classifier and the second-level sentiment classifier are output to their respective loss functions.
  • The loss functions of the first-level and second-level sentiment classifiers are both cross-entropy loss functions.
  • S31: Cross-learn and update the first-level and second-level sentiment classifiers: sum the loss functions of the two classifiers with equal weights into the overall loss function LOSSes, and then update the parameters of the two channels according to the overall loss function until it no longer decreases, at which point the model has converged and training is complete, where:
  • LOSSes = 0.5 * Loss_RNN + 0.5 * Loss_CNN
  • Loss_RNN and Loss_CNN are the cross-entropy losses of the first-level and second-level sentiment classifiers, respectively. A sketch of this training step follows.
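  • A hedged sketch of this cross-learning step, reusing the classifier sketches above; the Adam optimizer and the batch interface are assumptions:

```python
import torch
import torch.nn as nn

rnn_clf = TextRNNAttention()        # first level: positive/negative
cnn_clf = TextCNN()                 # second level: specific emotion type
criterion = nn.CrossEntropyLoss()   # both losses are cross-entropy
optimizer = torch.optim.Adam(
    list(rnn_clf.parameters()) + list(cnn_clf.parameters())
)

def train_step(word_vectors, polarity_labels, emotion_labels):
    loss_rnn = criterion(rnn_clf(word_vectors), polarity_labels)
    loss_cnn = criterion(cnn_clf(word_vectors), emotion_labels)
    loss = 0.5 * loss_rnn + 0.5 * loss_cnn  # LOSSes = 0.5*Loss_RNN + 0.5*Loss_CNN
    optimizer.zero_grad()
    loss.backward()                 # one backward pass updates both channels
    optimizer.step()
    return loss.item()              # train until this value stops decreasing
```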
  • This method can update the models of the two channels at the same time, so it has higher accuracy, and it can predict sentiment collocations that never appeared in the training set. For example, suppose the training set contains only 10 collocations such as 'sad + sadness'.
  • If the two models each predict 4 classes, the combined model can predict 4*4 = 16 collocations rather than only the 10 seen, so it has better generalization ability; the sketch below illustrates the pairing.
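  • Illustrative sketch of the pairing (tensor and model names reuse the training sketch above):

```python
# The two channels predict independently, so every (polarity, emotion)
# pairing is reachable even if it never co-occurred in the training set:
# e.g. 4 classes per channel yields 4*4 = 16 possible collocations.
import torch

with torch.no_grad():
    polarity = rnn_clf(word_vectors).argmax(dim=1)  # channel-1 prediction
    emotion = cnn_clf(word_vectors).argmax(dim=1)   # channel-2 prediction
collocations = list(zip(polarity.tolist(), emotion.tolist()))
```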
  • The text input is split into two branches: one classifies the positive or negative sentiment of the text, and the other classifies the specific emotion type of the text (sadness, gentleness, etc.). The branches are finally merged in the fully connected layer and trained with two independent loss functions, and classification is then achieved through cross-learning of the two classifiers, giving better learning and generalization ability; in addition, when the two channels' classifications are similar, a certain calibration effect improves the accuracy of the model.
  • The text classification apparatus 10 may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to complete this application and implement the above text classification device and method.
  • The program module referred to in this application is a series of computer program instruction segments that can complete specific functions, and is more suitable than the program itself for describing the execution process of the text classification apparatus 10 in the storage medium. The following description specifically introduces the functions of each program module in this embodiment:
  • The application also discloses a text classification device, including:
  • the word vector construction module 11, which converts the input text into word-vector form;
  • the word vector input module 21, which inputs the word vectors into at least two sets of sentiment classifiers and feeds the output of each sentiment classifier's fully connected layer to its own loss function;
  • each sentiment classifier selects different sentiment features according to the different classification requirements of the business;
  • the overall loss function acquisition and update module, which sums the loss functions with equal weights into the overall loss function LOSSes and updates the sentiment classifiers based on it until the overall loss function no longer decreases.
  • The text classification device converts the text input into at least two branches.
  • The sentiment classifier of each branch determines multiple sets of different sentiment features according to the different classification requirements of the business; the sentiment classifiers are finally merged in the fully connected layer and trained with different independent loss functions, and classification is then achieved through cross-learning of the multiple classifiers. The device can update the models of the channels at the same time, so it has higher accuracy, and it can predict sentiment collocations that never appeared in the training set: compared with an original model that predicts n classes, it can predict n*n sentiment collocations, so it has better generalization ability.
  • In the word vector construction module 11, word2vec is used to construct the word vectors: the word2vec tool obtains semantic word vectors for the text.
  • Specifically, the CBOW model is used, which maximizes the language prediction probability by adjusting the values of the neural network's hidden weight matrix.
  • The loss functions in the word vector input module 21 are all cross-entropy loss functions.
  • A first-level sentiment classifier and a second-level sentiment classifier are set; the word vectors from the word vector construction module 11 are used as the input of both classifiers, and the fully connected layers of the first-level and second-level sentiment classifiers are output to their respective loss functions.
  • The first-level sentiment classifier can be used to classify the positive or negative sentiment of the input text; the second-level sentiment classifier can be used to classify the specific emotion type of the input text.
  • A first-level sentiment classifier is established based on TextRNN combined with an attention mechanism; TextCNN or TextRCNN can also be used instead of this scheme. A second-level sentiment classifier is established based on TextCNN; in addition, TextRNN can also be used instead of the TextCNN scheme.
  • u_t = tanh(W_w · h_t + b_w)
  • W_w, u_w, and b_w are all attention weights and biases; the full attention formulation is sketched below.
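  • Only the u_t equation survives in the text above; the standard attention formulation it belongs to (assumed here, with u_w as the learned context vector) reads:

```latex
u_t      = \tanh(W_w h_t + b_w)
\alpha_t = \frac{\exp(u_t^{\top} u_w)}{\sum_k \exp(u_k^{\top} u_w)}
s        = \sum_t \alpha_t h_t
```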
  • TextCNN is composed of a convolutional layer (Conv), an activation function, batch normalization (BN), and max pooling (MaxPooling).
  • Conv is a convolutional layer used to capture local correlations in the text.
  • The activation function adds a nonlinear transformation that enhances the network's expressive capacity.
  • BN prevents gradient dispersion so that the model can converge better and faster.
  • MaxPooling takes the maximum over local features and reduces the amount of computation.
  • Conv is used to convolve the input word vector.
  • 1D filters of 6 sizes are selected for convolution.
  • ⁇ in X2 is the mean value
  • is the variance
  • ⁇ and ⁇ in X3 are hyperparameters of offset and scaling.
  • The fully connected layers of the first-level sentiment classifier and the second-level sentiment classifier are output to their respective loss functions.
  • The loss functions of the first-level and second-level sentiment classifiers are both cross-entropy loss functions.
  • The overall loss function acquisition and update module 31 cross-learns and updates the first-level and second-level sentiment classifiers: the loss functions of the two classifiers are summed with equal weights into the overall loss function, where:
  • LOSSes = 0.5 * Loss_RNN + 0.5 * Loss_CNN
  • The two classifiers are then updated according to the overall loss function.
  • The text input is split into two branches: one classifies the positive or negative sentiment of the text, and the other classifies the specific emotion type of the text (sadness, gentleness, etc.). The branches are finally merged in the fully connected layer and trained with two independent loss functions, and classification is then achieved through cross-learning of the two classifiers, giving better learning and generalization ability; in addition, when the two channels' classifications are similar, a certain calibration effect improves the accuracy of the model.
  • This embodiment also provides a computer device, such as a smart phone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server, or cabinet server (including independent servers, or a server cluster composed of multiple servers).
  • The computer device 20 in this embodiment at least includes, but is not limited to, a memory 21 and a processor 22 that can be communicably connected to each other through a system bus, as shown in FIG. 3. It should be pointed out that FIG. 3 only shows the computer device 20 with components 21-22, but it should be understood that not all of the components shown are required; more or fewer components may be implemented instead.
  • the memory 21 (ie, readable storage medium) includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), Read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, etc.
  • the memory 21 may be an internal storage unit of the computer device 20, such as a hard disk or memory of the computer device 20.
  • The memory 21 may also be an external storage device of the computer device 20, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, etc.
  • the memory 21 may also include both the internal storage unit of the computer device 20 and its external storage device.
  • the memory 21 is generally used to store an operating system and various application software installed in the computer device 20, such as the program code of the text classification apparatus 10 in the first embodiment.
  • the memory 21 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 22 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 22 is generally used to control the overall operation of the computer device 20.
  • The processor 22 is used to run the program code or process the data stored in the memory 21, for example to run the text classification apparatus 10, so as to implement the text classification method of the first embodiment.
  • this embodiment also provides a computer-readable storage medium, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), Magnetic Memory, Disk, Optical Disk, Server, App Store, etc.
  • a computer program is stored on it, and when the program is executed by the processor, the corresponding function is realized.
  • The computer-readable storage medium of this embodiment is used to store the text classification apparatus 10, and when executed by a processor, it implements the text classification method of the first embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to a text classification method and apparatus, a computer device, and a storage medium. The method comprises the following steps: S10, constructing a word vector and converting an input text into vector form; S20, separately inputting the word vector from S10 into at least two sets of sentiment classifiers, and outputting the respective fully connected layers of the sentiment classifiers to their respective loss functions, so that each sentiment classifier selects different sentiment features according to the different classification requirements of the business; and S30, performing cross-learning and updating on the sentiment classifiers, and adding, with equal weights and according to the number of sentiment classifiers, each loss function into LOSSes as an overall loss function, such that multi-label classification can be implemented by means of cross-learning of multiple classifiers, thereby obtaining better generalization and calibration effects.
PCT/CN2019/118342 2019-01-14 2019-11-14 Text classification method and apparatus, computer device, and storage medium WO2020147409A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910038962.9A CN109918499A (zh) 2019-01-14 2019-01-14 Text classification method, apparatus, computer device and storage medium
CN201910038962.9 2019-01-14

Publications (1)

Publication Number Publication Date
WO2020147409A1

Family

ID=66960273

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118342 WO2020147409A1 2019-01-14 2019-11-14 Text classification method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN109918499A (zh)
WO (1) WO2020147409A1

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019216504A1 (fr) * 2018-05-09 2019-11-14 한국과학기술원 Procédé et système d'estimation d'émotion humaine en utilisant le réseau d'affect physiologique profond pour la reconnaissance d'émotion humaine
CN109918499A (zh) * 2019-01-14 2019-06-21 平安科技(深圳)有限公司 一种文本分类方法、装置、计算机设备及存储介质
CN110309308A (zh) * 2019-06-27 2019-10-08 北京金山安全软件有限公司 一种文字信息的分类方法、装置及电子设备
CN110297907B (zh) * 2019-06-28 2022-03-08 谭浩 生成访谈报告的方法、计算机可读存储介质和终端设备
CN110489545A (zh) * 2019-07-09 2019-11-22 平安科技(深圳)有限公司 文本分类方法及装置、存储介质、计算机设备
CN110442723B (zh) * 2019-08-14 2020-05-15 山东大学 一种基于多步判别的Co-Attention模型用于多标签文本分类的方法
CN110837561A (zh) * 2019-11-18 2020-02-25 苏州朗动网络科技有限公司 文本的分析方法、设备和存储介质
CN112446217B (zh) * 2020-11-27 2024-05-28 广州三七互娱科技有限公司 情感分析方法、装置及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107608956A (zh) * 2017-09-05 2018-01-19 广东石油化工学院 Reader emotion distribution prediction algorithm based on CNN-GRNN
CN107908635A (zh) * 2017-09-26 2018-04-13 百度在线网络技术(北京)有限公司 Method and apparatus for building a text classification model and for text classification
CN108460089A (zh) * 2018-01-23 2018-08-28 哈尔滨理工大学 Multi-feature-fusion Chinese text classification method based on an Attention neural network
CN108595601A (zh) * 2018-04-20 2018-09-28 福州大学 Long-text sentiment analysis method incorporating an Attention mechanism
CN109918499A (zh) * 2019-01-14 2019-06-21 平安科技(深圳)有限公司 Text classification method, apparatus, computer device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108351B (zh) * 2017-12-05 2020-05-22 华南理工大学 Text sentiment classification method based on a deep learning combination model
CN108596470A (zh) * 2018-04-19 2018-09-28 浙江大学 Power equipment defect text processing method based on the TensorFlow framework
CN108960317B (zh) * 2018-06-27 2021-09-28 哈尔滨工业大学 Cross-language text classification method based on word vector representation and joint classifier training
CN109034281A (zh) * 2018-07-18 2018-12-18 中国科学院半导体研究所 Method for accelerating convolutional-neural-network-based Chinese handwriting recognition
GB201818237D0 (en) * 2018-11-08 2018-12-26 Polyal A dialogue system, a dialogue method, a method of generating data for training a dialogue system, a system for generating data for training a dialogue system


Also Published As

Publication number Publication date
CN109918499A (zh) 2019-06-21

Similar Documents

Publication Publication Date Title
WO2020147409A1 (fr) Procédé et appareil de classification de texte, dispositif informatique et support de stockage
US20240078386A1 (en) Methods and systems for language-agnostic machine learning in natural language processing using feature extraction
WO2018040068A1 (fr) Système et procédé d'analyse sémantique sur la base d'un graphique de connaissances
Chen et al. Mining user requirements to facilitate mobile app quality upgrades with big data
CN112231569B (zh) 新闻推荐方法、装置、计算机设备及存储介质
US20230385549A1 (en) Systems and methods for colearning custom syntactic expression types for suggesting next best corresponence in a communication environment
WO2022048363A1 (fr) Procédé et appareil de classification de site web, dispositif informatique et support de stockage
CN113722438B (zh) 基于句向量模型的句向量生成方法、装置及计算机设备
WO2019154411A1 (fr) Procédé et dispositif de mise à niveau de vecteur de mots
US10678831B2 (en) Page journey determination from fingerprint information in web event journals
US10831809B2 (en) Page journey determination from web event journals
CN111126067B (zh) 实体关系抽取方法及装置
Sun et al. Feature-frequency–adaptive on-line training for fast and accurate natural language processing
CN112995414B (zh) 基于语音通话的行为质检方法、装置、设备及存储介质
CN112926308B (zh) 匹配正文的方法、装置、设备、存储介质以及程序产品
CN112287069A (zh) 基于语音语义的信息检索方法、装置及计算机设备
Aziguli et al. A robust text classifier based on denoising deep neural network in the analysis of big data
CN115730597A (zh) 多级语义意图识别方法及其相关设备
JP7291181B2 (ja) 業界テキスト増分方法、関連装置、およびコンピュータプログラム製品
CN112199954B (zh) 基于语音语义的疾病实体匹配方法、装置及计算机设备
Andriyanov Combining Text and Image Analysis Methods for Solving Multimodal Classification Problems
CN112948561A (zh) 一种问答知识库自动扩建的方法和装置
WO2021056740A1 (fr) Système et procédé de construction de modèle linguistique, dispositif informatique et support de stockage lisible
CN110909777A (zh) 一种多维特征图嵌入方法、装置、设备及介质
CN112732913B (zh) 一种非均衡样本的分类方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19909782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19909782

Country of ref document: EP

Kind code of ref document: A1