WO2023173593A1 - Text classification method, text classification apparatus, storage medium, and electronic apparatus - Google Patents

Text classification method, text classification apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2023173593A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
text
layer
convolutional
feature data
Prior art date
Application number
PCT/CN2022/095743
Other languages
English (en)
French (fr)
Inventor
刘建国
彭强
Original Assignee
青岛海尔科技有限公司
海尔智家股份有限公司
Priority date
Filing date
Publication date
Application filed by 青岛海尔科技有限公司, 海尔智家股份有限公司 filed Critical 青岛海尔科技有限公司
Publication of WO2023173593A1 publication Critical patent/WO2023173593A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval of unstructured textual data; Database structures therefor; File system structures therefor
    • G06F16/35 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present disclosure relates to the field of smart home, specifically, to a text classification method, text classification device, storage medium and electronic device.
  • the user's voice interaction instructions are usually converted into text, and then the text is parsed to obtain the user's intention and control the smart home device. Parsing text requires using text classification methods to classify the text at the semantic level, calculating the probability that the text corresponds to each intent, and selecting the most likely intent among all supported intents.
  • the pre-training models used for text classification have complex structures and require a large amount of computing resources during training and use, so their training and classification are slow.
  • neural networks with simple structures, on the other hand, suffer from low accuracy when classifying text.
  • the present disclosure provides a text classification method, text classification device, storage medium and electronic device to solve the problem in the prior art that the accuracy of classification cannot be improved while improving the training and classification speed of the model.
  • a text classification method, including: receiving a voice interaction instruction issued by a user and converting the voice interaction instruction into text to be classified; using at least two different convolutional networks in a trained improved convolutional neural network model to respectively extract sub-feature data of the text to be classified, and splicing the sub-feature data to obtain target feature data, where each convolutional network includes a pooling layer and at least one convolutional layer; inputting the target feature data into a fully connected layer of the trained improved convolutional neural network model, and classifying the text to be classified by intent category according to the target feature data to determine the corresponding target intent category; and controlling a target home device to perform a corresponding operation according to the target intent category.
  • a text classification device, configured to: receive a voice interaction instruction issued by a user and convert the voice interaction instruction into text to be classified; use at least two different convolutional networks in a trained improved convolutional neural network model to respectively extract sub-feature data of the text to be classified, and splice the sub-feature data to obtain target feature data, where each convolutional network includes a pooling layer and at least one convolutional layer; input the target feature data into a fully connected layer of the trained improved convolutional neural network model, and classify the text to be classified by intent category according to the target feature data to determine the corresponding target intent category; and control a target home device to perform a corresponding operation according to the target intent category.
  • a computer-readable storage medium includes a stored program, and when the program is run, the method as described in the first aspect is executed.
  • an electronic device including a memory, a processor, and an input device.
  • the memory stores a computer program.
  • the input device is used to receive voice interaction instructions issued by a user.
  • the processor is arranged to perform the method described in the first aspect through the computer program.
  • a computer program product including a computer program that implements the method described in the first aspect when executed by a processor.
  • a computer program which, when executed by a processor, implements the method described in the first aspect.
  • the text classification method, text classification device, storage medium and electronic device provided by the present disclosure use at least two different convolutional networks in the trained improved convolutional neural network model to respectively extract sub-feature data of the text to be classified. This extracts sub-feature data of different dimensions and depths, increases the number of features obtained from the text to be classified, and thereby improves the accuracy with which the improved convolutional neural network model classifies the text.
  • because the different convolutional networks extract features from the text to be classified in parallel without increasing the network depth, the training and classification speed of the model is preserved. Therefore, classification accuracy can be improved while the training and classification speed of the model is guaranteed.
  • Figure 1 is an application scenario diagram of a text classification method provided according to an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of a text classification method according to the first embodiment of the present disclosure
  • Figure 3 is a network structure of an improved convolutional neural network model provided according to an embodiment of the present disclosure
  • Figure 4 is a schematic flowchart of a text classification method according to a third embodiment of the present disclosure.
  • Figure 5 is a schematic flowchart of a text classification method provided according to the fourth embodiment of the present disclosure.
  • Figure 6 is a block diagram of a text classification device provided according to the sixth embodiment of the present disclosure.
  • FIG. 7 is a block diagram of an electronic device according to a seventh embodiment of the present disclosure.
  • a text classification method is provided. This text classification method is widely applied in intelligent digital control scenarios such as smart homes, smart households, smart home equipment ecosystems, and intelligent house ecosystems.
  • FIG 1 is a schematic diagram of an application scenario of a text classification method according to an embodiment of the present disclosure.
  • an application scenario provided by an embodiment of the present disclosure includes: a user and an electronic device 102.
  • Users can send voice interaction instructions according to their own needs.
  • the electronic device may be the smart home device that the user wants to control by voice command, or it may be another smart home device or a server that can communicate with the smart home device that the user wants to control by voice command. Therefore, the electronic device can receive the voice interaction instruction issued by the user directly, or receive it by communicating with the smart home device.
  • the electronic device 102 is configured with a trained improved convolutional neural network model.
  • the electronic device 102 receives the voice interaction instruction issued by the user and converts it into text to be classified; uses at least two different convolutional networks in the trained improved convolutional neural network model to respectively extract sub-feature data of the text to be classified, and splices the sub-feature data to obtain target feature data; inputs the target feature data into the fully connected layer of the trained improved convolutional neural network model and classifies the text to be classified by intent category according to the target feature data to determine the corresponding target intent category; and controls the home device to perform the corresponding operation according to the target intent category.
  • when the electronic device 102 is the smart home device that the user's voice command is intended to control, it can directly control itself to perform the corresponding operation.
  • when the electronic device 102 is a server that can communicate with the smart home device that the user's voice command is intended to control, it can send an instruction to that smart home device so that the smart home device performs the corresponding operation.
  • when the electronic device 102 is another smart home device that can communicate with the smart home device that the user's voice command is intended to control, it can likewise send an instruction to that smart home device so that the controlled smart home device performs the corresponding operation.
  • a database may be provided on the electronic device 102 or independently of the electronic device 102 to provide data storage services for the electronic device 102.
  • cloud computing and/or edge computing services may be configured on the electronic device 102 or independently of the electronic device 102 to provide data computing services for the electronic device 102.
  • Smart home devices can be, but are not limited to, PCs, mobile phones, tablets, smart air conditioners, smart hoods, smart refrigerators, smart ovens, smart stoves, smart washing machines, smart water heaters, smart washing equipment, smart dishwashers, and smart projection equipment.
  • to improve classification accuracy, existing pre-training models can be used.
  • in a pre-training model, the word vectors corresponding to characters or words have already been trained many times and can reflect the semantics of the characters or words well. It is only necessary to train the pre-training model with a training data set that matches the actual application scenario (referred to as fine-tuning) and to adjust and optimize the parameters of the pre-training model so that it better fits the actual application scenario, thereby obtaining higher accuracy in that scenario.
  • however, existing pre-training models have complex network structures and a large number of parameters, so they require a large amount of computing resources for training and classification; moreover, because of the complex structures and large number of parameters, training is difficult and slow.
  • to ensure training and classification speed, neural networks with simpler structures, such as convolutional neural networks (CNN) and long short-term memory artificial neural networks (LSTM), can be used instead.
  • a simpler network structure, however, means that the network depth is insufficient, the number of parameters is small, the model capacity is small, and the sensitivity to features is high.
  • although this type of neural network trains and classifies quickly, its accuracy is lower, and it cannot capture long-distance context information, which further reduces classification accuracy.
  • the text classification methods in the existing technology cannot improve the accuracy of classification while ensuring the training and classification speed of the model.
  • the inventor, through creative research, identified these problems and proposed the technical solution of the present disclosure, aiming to solve the above problems in the prior art.
  • specifically, the inventor proposed that multiple groups of different convolutional networks can be used to extract features from the text to be classified at different dimensions and depths, increasing the number of features obtained from the text to be classified and thereby improving the accuracy with which the improved convolutional neural network model classifies the text.
  • because the different convolutional networks work in parallel, the training and classification speed of the model is preserved; therefore, classification accuracy can be improved while the training and classification speed of the model is guaranteed.
  • FIG. 2 is a schematic flowchart of a text classification method according to the first embodiment of the present disclosure. As shown in Figure 2, the execution subject of the present disclosure is a text classification device, which is located in an electronic device.
  • the text classification method provided by this embodiment includes steps 201 to 204.
  • Step 201 Receive voice interaction instructions issued by the user, and convert the voice interaction instructions into text to be classified.
  • the electronic device can convert the voice interaction instruction issued by the user into interactive instruction text through a preconfigured voice conversion device or a speech-to-text program, and can use a first preset function to encode the characters, words, etc. in the interactive instruction text to form the text to be classified.
  • the first preset function can encode characters and words by mapping them into a multi-dimensional vector space, so that each character and word is represented by a unique multi-dimensional vector, and the multi-dimensional vectors representing the characters or words are spliced in the order in which they appear in the interactive instruction text to form the text to be classified. Specifically, the multi-dimensional vectors can be spliced into a vector matrix, a higher-dimensional row vector or column vector, and so on.
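  • As an illustration of the encoding step described above, the following is a minimal Python sketch of one possible "first preset function": each character of the interactive instruction text is mapped to an integer index. The vocabulary, padding scheme and example commands are illustrative assumptions, not values taken from the disclosure.
```python
# Minimal sketch of the "first preset function": map each character of the
# interactive instruction text to an integer index. The vocabulary, the
# <unk>/<pad> handling and the fixed length are illustrative assumptions.

def build_vocab(corpus):
    """Assign a unique index to every character seen in the (hypothetical) corpus."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for sentence in corpus:
        for ch in sentence:
            vocab.setdefault(ch, len(vocab))
    return vocab

def encode(text, vocab, max_len=10):
    """Encode a command text as a fixed-length sequence of character indices."""
    ids = [vocab.get(ch, vocab["<unk>"]) for ch in text[:max_len]]
    ids += [vocab["<pad>"]] * (max_len - len(ids))  # pad to a fixed length
    return ids

vocab = build_vocab(["打开客厅的灯", "把空调调到26度"])
print(encode("打开客厅的灯", vocab))  # e.g. [2, 3, 4, 5, 6, 7, 0, 0, 0, 0]
```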
  • Step 202: Use at least two different convolutional networks in the trained improved convolutional neural network model to respectively extract sub-feature data of the text to be classified, and splice the sub-feature data to obtain target feature data. Each convolutional network includes a pooling layer and at least one convolutional layer.
  • the improved convolutional neural network model is pre-trained, and the improved convolutional neural network model includes an embedding layer, at least two different convolutional networks, a feature fusion layer, a fully connected layer and a normalization layer.
  • the convolutional network includes a pooling layer and at least one convolutional layer.
  • two different convolutional networks may have different numbers of convolutional layers, or they may have the same number of convolutional layers but differ in the preset stride, the preset convolution kernel size, or the preset convolution kernel padding method of each convolutional layer.
  • the sub-feature data of the text to be classified is data obtained by using different convolutional networks to extract features of the text to be classified. Therefore, the sub-feature data of the text to be classified can be data of different dimensions.
  • in the improved convolutional neural network model, the at least two different convolutional networks are arranged in parallel, which makes it possible to extract features from the text to be classified at different dimensions and depths, increase the number of features of the text to be classified, and thus classify the text to be classified more accurately.
  • the target feature data is obtained by splicing at least two items of sub-feature data of the text to be classified.
  • each convolutional network inputs the extracted sub-feature data into the feature fusion layer, and the feature fusion layer splices the sub-feature data.
  • for example, two sub-feature data items that are each 128-dimensional vectors are spliced to obtain target feature data that is a 256-dimensional vector.
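  • A minimal sketch of the splicing in this example, assuming PyTorch tensors; the 128-dimensional sub-feature vectors and the 256-dimensional result mirror the sizes mentioned above.
```python
import torch

# Two sub-feature vectors produced by the two convolutional networks
# (batch of 1, 128 dimensions each, as in the example above).
sub_feat_a = torch.randn(1, 128)
sub_feat_b = torch.randn(1, 128)

# The feature fusion (concat) layer simply splices them along the feature axis.
target_features = torch.cat([sub_feat_a, sub_feat_b], dim=-1)
print(target_features.shape)  # torch.Size([1, 256])
```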
  • Step 203 Input the target feature data into the fully connected layer of the trained improved convolutional neural network model, and classify the text to be classified according to the target feature data into the intent category to determine the corresponding target intent category.
  • the target feature data is the distributed representation of the text to be classified in the improved convolutional neural network model.
  • the fully connected layer of the improved convolutional neural network model can use a second preset function to map the target feature data to each pre-constructed intent category; that is, it converts the dimension of the target feature data into the dimension of the number of intent categories through the fully connected layer, so as to determine the target intent category corresponding to the text to be classified.
  • Each pre-built intent category is a set of intent categories to which the voice interaction instructions issued by the user may belong.
  • the target intent category is the intent category to which the voice interaction instruction issued by the user actually belongs.
  • the second preset function can be obtained by training the improved convolutional network model.
  • Step 204 Control the target home device to perform corresponding operations according to the target intention category.
  • corresponding operations can be set for each intention category in advance.
  • after determining the target intent category, the electronic device can control the smart home device to perform the operation corresponding to the target intent category, so that the user can control the smart home device through voice interaction instructions.
  • Figure 3 shows the network structure of an improved convolutional neural network model provided by the present disclosure.
  • the pre-built improved convolutional neural network model can be as shown in Figure 3, and includes an embedding layer 31, a first convolutional network 32, a second convolutional network 33, a feature fusion layer (concat layer) 34, a fully connected layer (dense layer) 35, and a normalization layer (softmax layer) 36.
  • the first convolutional network 32 includes a first convolutional layer (conv_6) 321, a second convolutional layer (conv_4) 322, a third convolutional layer (conv_2) 323, and a first pooling layer (max pooling layer) 324.
  • the convolution kernel size of the first convolution layer 321 is 6*6, the convolution kernel size of the second convolution layer 322 is 4*4, and the convolution kernel size of the third convolution layer 323 is 2*2.
  • the second convolution network 33 includes a fourth convolution layer (conv_7) 331, a fifth convolution layer (conv_5) 332, a sixth convolution layer (conv_3) 333, and a second pooling layer (max pooling layer) 334.
  • the convolution kernel size of the fourth convolution layer 331 is 7*7
  • the convolution kernel size of the fifth convolution layer 332 is 5*5
  • the convolution kernel size of the sixth convolution layer 333 is 3*3.
  • Various parameters of the pre-built improved convolutional neural network model in this embodiment can be determined through pre-training.
  • in addition, a dropout layer can be added between the feature fusion layer and the normalization layer to randomly drop some neurons during each training pass, thereby reducing overfitting of the neural network model parameters.
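  • To make the structure of Figure 3 concrete, the following is a hedged PyTorch sketch of one possible assembly of the layers named above (embedding layer, two parallel convolutional networks with kernel sizes 6/4/2 and 7/5/3, max pooling, feature fusion, dropout, fully connected and softmax layers). The vocabulary size, embedding dimension, dropout rate and number of intent categories are illustrative assumptions, and the residual and layer-normalization details of the "first operation" are sketched separately further below.
```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """One convolutional network: stacked 1-D convolutions (largest kernel first)
    followed by max pooling over the sequence dimension."""

    def __init__(self, dim, kernel_sizes):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, k, stride=1, padding="same") for k in kernel_sizes]
        )
        self.act = nn.GELU()

    def forward(self, x):              # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)          # Conv1d expects (batch, dim, seq_len)
        for conv in self.convs:
            h = self.act(conv(h))      # residual + layer norm omitted here, see later sketch
        return h.max(dim=2).values     # max pooling over the sequence -> (batch, dim)

class ImprovedTextCNN(nn.Module):
    """Hypothetical assembly of the layers shown in Figure 3."""

    def __init__(self, vocab_size=5000, dim=128, num_intents=20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)            # embedding layer 31
        self.branch_a = ConvBranch(dim, kernel_sizes=[6, 4, 2])   # first conv network 32
        self.branch_b = ConvBranch(dim, kernel_sizes=[7, 5, 3])   # second conv network 33
        self.dropout = nn.Dropout(p=0.1)                          # optional dropout layer
        self.dense = nn.Linear(2 * dim, num_intents)               # fully connected layer 35

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)              # (batch, seq_len, dim)
        feats = torch.cat([self.branch_a(x), self.branch_b(x)], dim=-1)  # concat layer 34
        logits = self.dense(self.dropout(feats))
        return torch.softmax(logits, dim=-1)       # normalization (softmax) layer 36

model = ImprovedTextCNN()
probs = model(torch.randint(0, 5000, (1, 10)))     # a batch with one 10-character command
print(probs.shape)                                  # torch.Size([1, 20])
```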
  • in the text classification method, a voice interaction instruction issued by the user is received and converted into text to be classified; at least two different convolutional networks in the trained improved convolutional neural network model are used to respectively extract sub-feature data of the text to be classified, and the sub-feature data are spliced to obtain target feature data, where each convolutional network includes a pooling layer and at least one convolutional layer; the target feature data is input into the fully connected layer of the trained improved convolutional neural network model, and the text to be classified is classified by intent category according to the target feature data to determine the corresponding target intent category; and the target home device is controlled to perform the corresponding operation according to the target intent category.
  • the text classification method provided in this embodiment can therefore improve classification accuracy while ensuring the training and classification speed of the model.
  • the text classification method provided in this embodiment builds on Embodiment 1. In this embodiment, the trained improved convolutional neural network model also includes an embedding layer, and before step 202 (using at least two different convolutional networks in the trained improved convolutional neural network model to respectively extract sub-feature data of the text to be classified), the method also includes step 2011.
  • Step 2011 Use the embedding layer in the trained improved convolutional neural network model to segment the text to be classified and determine the word vector corresponding to each text or word, so as to obtain the word vector matrix corresponding to the text to be classified.
  • in the trained improved convolutional neural network model, different encodings each correspond to a unique word vector. Since the characters, words, etc. in the interactive instruction text are represented in encoded form, the characters, words, etc. in the interactive instruction text correspond one-to-one with different word vectors; that is, in the trained improved convolutional neural network model, word vectors can be used as unique identifiers for the carriers of user intent such as characters and words.
  • the embedding layer can segment the text to be classified, that is, distinguish the characters, words and numbers identified in the form of encoding in the text to be classified, to determine the word vectors corresponding to different characters or encodings, and then determine the corresponding word vectors of the text to be classified.
  • the word vector matrix corresponding to the text to be classified can be formed by combining the word vectors corresponding to the characters, words and numbers by rows or by columns. For example, suppose the voice interaction instruction issued by the user is converted into interactive instruction text containing 10 Chinese characters, and the text to be classified consists of the encodings of these 10 Chinese characters arranged in the order in which they appear in the interactive instruction text; it can be understood that the encoding of each Chinese character can have the same length.
  • the embedding layer converts the text to be classified into a word vector matrix based on the trained word vector corresponding to each encoding. When a single word vector has 128 dimensions, the word vector matrix can be a 10*128 vector matrix.
  • the embedding layer can randomly generate word vectors corresponding to each encoding, and the word vectors uniquely correspond to each encoding.
  • during training, the word vector corresponding to each encoding can be adjusted through the back-propagation algorithm, and the adjusted correspondence between each encoding and its word vector can be saved in the database, so that when the improved convolutional neural network model is used, the embedding layer can obtain the word vector corresponding to each encoding from the database.
  • the text classification method provided in this embodiment uses the embedding layer in the trained improved convolutional neural network model to segment the text to be classified and determine the word vector corresponding to each character or word, so as to obtain the word vector matrix corresponding to the text to be classified. Because the word vector matrix corresponding to the text to be classified is determined, the intent in the text to be classified is expressed in numerical form, so the intent category of the text to be classified can be determined and the text can then be classified.
  • the refinement in step 2011 includes steps 20111 to 20112.
  • Step 20111 Determine the word vector corresponding to each text or word in the text to be classified based on the preset dictionary obtained through training.
  • the word vector is used to uniquely identify the text or word in the preset dictionary.
  • the preset dictionary may be a dictionary obtained by de-duplicating and sorting the characters and words in the training data set during the training of the improved convolutional neural network model; each character or word in the training data set has a unique index value in the preset dictionary.
  • the first preset function can map the characters, words, etc. in the interactive instruction text to the index values of the characters, words, etc. in the preset dictionary.
  • the embedding layer of the trained improved convolutional neural network model can query the word vector corresponding to each index value in the database obtained after training through the index value of each text or word in the dictionary.
  • Step 20112 Splice the word vectors corresponding to each text or word according to the order in the text to be classified to obtain a word vector matrix corresponding to the text to be classified.
  • word vectors can be column vectors or row vectors, and a single N-dimensional word vector can itself form a word vector matrix with 1 row and N columns or with N rows and 1 column (N is greater than or equal to 1). Therefore, according to the number M of characters and words obtained after word segmentation (M is greater than or equal to 1) and the order of each character or word in the text to be classified, the N-dimensional word vectors corresponding to the characters or words can be combined into a word vector matrix with M rows and N columns, N rows and M columns, 1 row and M*N columns, or M*N rows and 1 column.
  • in the text classification method provided by this embodiment, the word vector corresponding to each character or word in the text to be classified is determined based on the preset dictionary obtained through training, where the word vector uniquely identifies the character or word in the preset dictionary; the word vectors corresponding to the characters or words are then spliced in the order in which they appear in the text to be classified to obtain the word vector matrix corresponding to the text to be classified. Because each character in the preset dictionary has a unique index, the text to be classified is first converted into indices in the preset dictionary, and the word vectors corresponding to the text to be classified are then determined from those indices; this improves the speed of converting the text to be classified into a word vector matrix and thereby improves the training and classification speed of the model.
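  • As a sketch of the lookup described above, assuming a PyTorch embedding table: each character is mapped to its index in a hypothetical preset dictionary, and the corresponding 128-dimensional word vectors are stacked in text order to form the word vector matrix.
```python
import torch
import torch.nn as nn

# Hypothetical preset dictionary built during training: character -> index.
dictionary = {"打": 0, "开": 1, "客": 2, "厅": 3, "的": 4, "灯": 5}

# Embedding table holding one trainable 128-dimensional word vector per index.
embedding = nn.Embedding(num_embeddings=len(dictionary), embedding_dim=128)

text = "打开客厅的灯"
indices = torch.tensor([[dictionary[ch] for ch in text]])   # (1, M) index sequence
word_vector_matrix = embedding(indices)                      # (1, M, 128): M rows, N = 128 columns
print(word_vector_matrix.shape)                              # torch.Size([1, 6, 128])
```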
  • Figure 4 is a schematic flowchart of a text classification method provided according to the third embodiment of the present disclosure. As shown in Figure 4, the text classification method provided by this embodiment builds on the second embodiment and refines step 202; the refinement of step 202 includes steps 301 to 303.
  • Step 301 Input word vector matrices into at least two different convolutional networks.
  • At least two different convolutional networks have parallel structures, that is, the data input to each convolutional network is the same, and the data output by each convolutional network also goes to the same place.
  • therefore, the word vector matrix corresponding to the text to be classified is input into each convolutional network respectively.
  • Step 302 For each convolutional network, use at least one corresponding convolutional layer to perform feature extraction on the word vector matrix to obtain the feature matrix of the word vector matrix.
  • each convolutional network includes a pooling layer and at least one convolutional layer. If the convolutional network includes one convolutional layer and one pooling layer, the word vector matrix can be convolved using the convolutional layer's preset convolution kernel size, preset stride, and preset convolution kernel padding method.
  • the preset convolution kernel size can be set according to the length of the words observed in voice interaction instructions in the actual application scenario. For example, the length of Chinese words is usually between 2 and 7 characters, so the preset convolution kernel size can be 1*2, 2*1, 2*2, 1*3, 3*1, 3*3, 1*4, 4*1, 4*4, 1*5, 5*1, 5*5, 6*6, 7*7, and so on.
  • the preset convolution kernel padding method can be the "SAME" mode, that is, during the convolution calculation, the boundaries of the word vector matrix where no data exists are padded with zeros according to the convolution kernel size, so that the length of the vector after convolution is the same as the length of the original vector.
  • the convolution calculation may be a convolution calculation method in a convolutional neural network, which will not be described in detail in this embodiment.
  • the improved convolutional neural network model can adjust the weight matrix of each convolution layer through the back propagation algorithm during training.
  • when the improved convolutional neural network model is trained for the first time, the weight matrix can be generated using a random initialization method, for example by initializing it with a truncated normal distribution (truncated_normal_initializer) with a standard deviation of 0.02.
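  • The sketch below shows, under the same PyTorch assumption, a single convolutional layer configured with a preset kernel size, stride and "SAME"-style padding, with its weight matrix initialised from a truncated normal distribution with standard deviation 0.02 as described above; the layer sizes are illustrative.
```python
import torch
import torch.nn as nn

seq_len, dim = 10, 128
word_vector_matrix = torch.randn(1, dim, seq_len)   # (batch, channels, length) for Conv1d

# One convolutional layer with a preset kernel size, stride and "SAME"-style
# padding, so the output length equals the input length.
conv = nn.Conv1d(in_channels=dim, out_channels=dim,
                 kernel_size=3, stride=1, padding="same")

# Initialise the weight matrix with a truncated normal distribution (std 0.02),
# as described for the first training run.
nn.init.trunc_normal_(conv.weight, std=0.02)
nn.init.zeros_(conv.bias)

feature_matrix = conv(word_vector_matrix)
print(feature_matrix.shape)   # torch.Size([1, 128, 10]) -- same length as the input
```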
  • Step 303 Use the pooling layer of each convolutional network to reduce the dimension of the feature matrix to obtain sub-feature data.
  • the sub-feature data is in the form of a vector, and the vector corresponding to the sub-feature data has the same dimension as the vectors in the word vector matrix.
  • since the word vector matrix may include multiple word vectors, the result of the convolution calculation may also be a vector matrix including multiple vectors.
  • because the convolutional networks are different, the feature matrices of the word vector matrix extracted by different convolutional networks may have different numbers of rows and columns.
  • therefore, the pooling layer in each convolutional network is used to reduce the dimensionality of the feature matrix of the word vector matrix, so that the different convolutional networks output sub-feature data in the same form.
  • the sub-feature data can be in vector form, and to facilitate the subsequent fusion of the sub-feature data of different dimensions and depths extracted by the different convolutional networks, the vectors corresponding to the sub-feature data can have the same dimension as the vectors in the word vector matrix.
  • since the length of the vector after convolution is the same as the length of the original vector, the number of rows and columns of the feature matrix is the same as the number of rows and columns of the matrix input to the convolutional network.
  • for example, if the word vector matrix corresponding to the text to be classified consists of 10 128-dimensional word vectors, it is a 10*128 matrix (10 rows and 128 columns), and the feature matrix is also a 10*128 matrix.
  • the pooling layer is then used to reduce the dimension of the feature matrix and convert it into a 1*128 matrix, that is, a 128-dimensional vector. Specifically, the average value of each column of the matrix can be calculated and used as the value of the corresponding dimension in the reduced vector.
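  • A minimal sketch of the dimensionality reduction described above, assuming plain PyTorch tensors: a 10*128 feature matrix is reduced to a 128-dimensional vector by averaging each column.
```python
import torch

feature_matrix = torch.randn(10, 128)      # 10 rows (characters) x 128 columns (feature dims)

# Average each column, so every feature dimension keeps one value.
sub_feature = feature_matrix.mean(dim=0)   # 128-dimensional vector

print(sub_feature.shape)                   # torch.Size([128])
```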
  • in the text classification method, the word vector matrix is input into at least two different convolutional networks; for each convolutional network, the corresponding at least one convolutional layer is used to extract features from the word vector matrix to obtain the feature matrix of the word vector matrix, and the pooling layer of each convolutional network is used to reduce the dimension of the feature matrix to obtain sub-feature data, where the sub-feature data is in vector form and the vector corresponding to the sub-feature data has the same dimension as the vectors in the word vector matrix.
  • because the feature matrices of the word vector matrix corresponding to the text to be classified are obtained at different dimensions and depths through different convolutional networks, more features of the text to be classified are obtained, and reducing the feature matrix to sub-feature data reduces the amount of subsequent computation; therefore, the speed and accuracy of text classification can be further improved.
  • Figure 5 is a schematic flow chart of a text classification method provided according to the fourth embodiment of the present disclosure.
  • the text classification method provided by this embodiment builds on the third embodiment and refines the use of at least one convolutional layer to extract features from the word vector matrix in step 302; the refinement of step 302 includes steps 401 to 403.
  • Step 401 Determine the number of convolutional layers in the convolutional network.
  • the number of convolutional layers in different convolutional networks in the used trained improved convolutional neural network model can be directly queried.
  • Step 402 If it is determined that the number of convolutional layers is one, determine the word vector matrix as the input matrix of the convolutional layer, and use the convolutional layer to perform the first operation on its input matrix.
  • Step 403 Determine the output matrix obtained by performing the first operation on the convolution layer as the feature matrix of the word vector matrix.
  • the first operation includes: performing convolution calculation on the input matrix with the preset step size, convolution kernel size and convolution kernel padding method of the convolution layer to obtain the first matrix; performing layer normalization on the first matrix Process to obtain the second matrix; input the second matrix into the preset activation function, and use the preset activation function to output the third matrix; sum the input matrix and the third matrix to obtain the residual matrix; convert the residual matrix Perform layer normalization to obtain the output matrix.
  • the convolution calculation can extract features from the input matrix, transforming the original input information (the word vector matrix corresponding to the text to be classified) and extracting the key information in it that represents its true meaning, the differentiating information that distinguishes it from other information, and so on, which is then expressed in the form of output features.
  • Layer normalization processing can make the distribution of the first matrix more stable, improve the network convergence speed, and thereby improve the training speed of the improved convolutional neural network model.
  • the activation function can use the gelu function, which enables the backpropagation algorithm to better optimize the weight matrix. Summing and calculating the residual matrix can prevent model overfitting and avoid gradient disappearance. Layer normalization of the residual matrix can make the parameter distribution of the residual matrix more stable and improve the network convergence speed.
  • in the text classification method, the number of convolutional layers in the convolutional network is determined; if the number of convolutional layers is one, the word vector matrix is determined as the input matrix of that convolutional layer, the convolutional layer is used to perform the first operation on its input matrix, and the output matrix obtained by the convolutional layer performing the first operation is determined as the feature matrix of the word vector matrix. The first operation includes: performing a convolution calculation on the input matrix with the preset stride, convolution kernel size and convolution kernel padding method of the convolutional layer to obtain a first matrix; performing layer normalization on the first matrix to obtain a second matrix; inputting the second matrix into a preset activation function and using the preset activation function to output a third matrix; summing the input matrix and the third matrix to obtain a residual matrix; and performing layer normalization on the residual matrix to obtain the output matrix.
  • because the residual matrix is calculated from the input and output of the convolution in the convolutional network, and layer normalization is performed after the convolutional layer and after the residual summation, the overfitting problem of the model is reduced and the training and classification speed of the model is increased.
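  • The following is a hedged PyTorch sketch of one possible reading of the first operation described above: convolution with a preset stride, kernel size and padding method, layer normalization, a preset activation (GELU), a residual sum with the input matrix, and a second layer normalization. The layer sizes are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstOperation(nn.Module):
    """One possible reading of the first operation: conv -> layer norm -> GELU
    -> residual sum with the input -> layer norm."""

    def __init__(self, dim=128, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, stride=1, padding="same")
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                                        # x: (batch, seq_len, dim) input matrix
        first = self.conv(x.transpose(1, 2)).transpose(1, 2)     # first matrix (convolution result)
        second = self.norm1(first)                               # second matrix (layer-normalised)
        third = F.gelu(second)                                   # third matrix (preset activation)
        residual = x + third                                     # residual matrix (input + third)
        return self.norm2(residual)                              # output matrix

block = FirstOperation()
out = block(torch.randn(1, 10, 128))
print(out.shape)                                                 # torch.Size([1, 10, 128])
```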
  • step 302 also includes steps 404 to 407.
  • Step 404: If it is determined that there are multiple convolutional layers, the convolutional layers are connected in order of convolution kernel size from largest to smallest.
  • because the convolution kernel sizes are arranged in order from large to small, the larger convolution kernels can be used first to extract long-text features, and smaller convolution kernels can then fuse the long-text features into short-text features. This fuses long-distance context information into local features and ensures the fusion of context and current-word features, so that the target feature data obtained by abstraction is more accurate and classification accuracy is improved. Therefore, in the improved convolutional neural network model, when a convolutional network has multiple convolutional layers, the convolutional layers are connected in order of convolution kernel size from largest to smallest.
  • Step 405 Determine the word vector matrix as the input matrix of the convolution layer with the largest convolution kernel size.
  • Step 406: Use each convolutional layer in turn to perform the first operation on its input matrix, and determine the output matrix obtained by each convolutional layer performing the first operation as the input matrix of the next convolutional layer, until the convolutional layer with the smallest convolution kernel size performs the first operation on its input matrix.
  • Step 407 Determine the output matrix obtained by performing the first operation on the convolution layer with the smallest convolution kernel size as the feature matrix of the word vector matrix.
  • because the convolutional layers are connected to one another, the feature matrix of the word vector matrix is obtained from the convolutional layer with the smallest convolution kernel size.
  • each intermediate convolutional layer (a convolutional layer whose convolution kernel size is neither the largest nor the smallest) further extracts features from the output matrix of the previous convolutional layer and provides the input matrix of the next convolutional layer; the matrix output by the smallest-kernel convolutional layer after performing the first operation is therefore the feature matrix obtained by extracting features from the word vectors with a convolutional network that has multiple convolutional layers.
  • using the convolutional layers in turn to perform the first operation on the input matrix, and determining the output matrix of the previous convolutional layer as the input matrix of the next convolutional layer, integrates the long-distance context information in the text to be classified into local features and yields abstract high-level features; that is, the semantic information, grammatical information, etc. in the text to be classified are gradually integrated into one feature vector.
  • for example, the trained improved convolutional neural network model can include two different convolutional networks. One of them consists of three convolutional layers with convolution kernel sizes of 7, 5, and 3 and a max pooling layer connected in sequence; the other consists of three convolutional layers with convolution kernel sizes of 6, 4, and 2 and a max pooling layer.
  • the length of Chinese words is usually 2 to 7 characters, so selecting convolution kernel sizes of [2,3,4,5,6,7] can capture complete context information, and arranging the convolution kernel sizes in reverse order (largest first) is better suited to increasing the influence of context information on the current word in Chinese.
  • Convolutional networks that are too deep will cause overfitting, so the convolutional kernels are divided into 2 groups [2, 4, 6] and [3, 5, 7], and two different convolutional networks are formed to calculate in parallel.
  • the splicing method blends features of different depths and widths together. The convolution and residual are then layer-normalized (layer norm) to prevent overfitting and make the model converge faster; training speed is further improved through warm-up, and better accuracy is obtained.
  • to preserve the influence of the shallower layers (convolutional layers with larger convolution kernel sizes) on the deeper layers (convolutional layers with smaller convolution kernel sizes), a residual structure is used in which the input and output of each convolution form a residual, and layer normalization is applied after each convolutional layer and after the residual summation to reduce overfitting.
  • in the text classification method, the word vector matrix is determined as the input matrix of the convolutional layer with the largest convolution kernel size; each convolutional layer is used in turn to perform the first operation on its input matrix, and the output matrix obtained by each convolutional layer is determined as the input matrix of the next convolutional layer, until the convolutional layer with the smallest convolution kernel size performs the first operation on its input matrix; the output matrix obtained by the smallest-kernel convolutional layer performing the first operation is determined as the feature matrix of the word vector matrix.
  • because the convolutional layers are connected in order of convolution kernel size from largest to smallest and are used in turn to extract features from the word vector matrix, long-text features can be extracted first and then merged into short-text features, so more accurate target feature data can be obtained, further improving classification accuracy.
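  • Putting the above together, the sketch below chains convolutional layers in decreasing kernel-size order (7, 5, 3 for one of the branches), so that each layer's output matrix becomes the next layer's input matrix; the residual and layer-normalization details of the first operation are omitted here for brevity, and the sizes are illustrative.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, seq_len = 128, 10
kernel_sizes = [7, 5, 3]   # largest kernel first, as in one of the two branches

# One convolutional layer per kernel size; each layer's output matrix is the
# next layer's input matrix (residual / layer-norm details omitted here).
layers = [nn.Conv1d(dim, dim, k, stride=1, padding="same") for k in kernel_sizes]

x = torch.randn(1, dim, seq_len)            # word vector matrix, channels-first
for conv in layers:
    x = F.gelu(conv(x))                     # output of the larger-kernel layer feeds the next layer

feature_matrix = x                          # output of the smallest-kernel layer
print(feature_matrix.shape)                 # torch.Size([1, 128, 10])
```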
  • the text classification method provided by this embodiment refines step 303 based on any of the above embodiments, and then the refinement of step 303 includes step 501.
  • Step 501 Determine the maximum value of each vector in the feature matrix in the same dimension as sub-feature data.
  • the max-pooling method can be used to reduce the dimension of the vectors in the feature matrix, and the maximum value of the vectors in each dimension is taken as the sub-feature data; this reduces the dimensionality of the features while extracting better features with stronger semantic information.
  • for example, if the feature matrix is a 10*128 matrix, the maximum value of each column of the matrix can be taken and used as the value of the corresponding dimension in the 128-dimensional vector obtained after dimensionality reduction.
  • the text classification method provided in this embodiment determines the maximum value of the vectors in the feature matrix in each dimension as the sub-feature data. While reducing the dimension of the features, it extracts better features with stronger semantic information, so more accurate sub-feature data is obtained, further improving classification accuracy.
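  • A minimal sketch of the max-pooling reduction described in this embodiment: for a 10*128 feature matrix, the maximum of each column becomes one dimension of the 128-dimensional sub-feature vector.
```python
import torch

feature_matrix = torch.randn(10, 128)           # 10 rows (positions) x 128 feature dimensions

# Keep the maximum of every column (the same dimension across all vectors).
sub_feature = feature_matrix.max(dim=0).values  # 128-dimensional vector

print(sub_feature.shape)                        # torch.Size([128])
```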
  • the text classification method provided by this embodiment builds on any of the above embodiments and refines step 203 (classifying the text to be classified by intent category according to the target feature data to determine the corresponding target intent category); the refinement of step 203 includes step 2031.
  • Step 2031 Determine the category with the highest probability among each intention category corresponding to the target feature data as the target intention category.
  • the fully connected layer maps the target feature data to each intent category and obtains the probability that the target feature data belongs to each intent category, so as to classify the text to be classified at the semantic level. It can be understood that the probabilities of all possible intent categories for the text to be classified sum to 1, so the category with the highest probability among the intent categories corresponding to the target feature data can be determined as the target intent category, making the target intent category as close as possible to the true intent category of the text to be classified.
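  • To illustrate this last step, a hedged PyTorch sketch follows: the fully connected layer maps the 256-dimensional target feature data to one score per intent category, softmax turns the scores into probabilities that sum to 1, and the highest-probability category is taken as the target intent category. The category names are hypothetical.
```python
import torch
import torch.nn as nn

intent_categories = ["turn_on_light", "turn_off_light", "set_ac_temperature"]  # hypothetical

dense = nn.Linear(256, len(intent_categories))          # fully connected layer
target_features = torch.randn(1, 256)                   # spliced target feature data

probs = torch.softmax(dense(target_features), dim=-1)   # probabilities sum to 1
target_intent = intent_categories[probs.argmax(dim=-1).item()]
print(target_intent, probs)
```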
  • FIG. 6 is a block diagram of a text classification device according to the sixth embodiment of the present disclosure.
  • the text classification device 70 provided by this embodiment includes a receiving module 71 , an acquisition module 72 , a determination module 73 and a control module 74 .
  • the receiving module 71 is used to receive voice interaction instructions issued by the user, and convert the voice interaction instructions into text to be classified.
  • the acquisition module 72 is configured to use at least two different convolutional networks in the trained improved convolutional neural network model to respectively extract the sub-feature data of the text to be classified, and splice the sub-feature data to obtain the target feature data.
  • a convolutional network includes a pooling layer and at least one convolutional layer.
  • the determination module 73 is used to input the target feature data into the fully connected layer of the trained improved convolutional neural network model, and classify the text to be classified according to the target feature data into the intention category to determine the corresponding target intention category.
  • the control module 74 is used to control the target home device to perform corresponding operations according to the target intention category.
  • the text classification device provided in this embodiment can execute the text classification method provided in Embodiment 1 above; its implementation and principle are similar and are not described in detail here.
  • the trained improved convolutional neural network model also includes an embedding layer
  • the text classification device further includes a second acquisition module 75 .
  • the second acquisition module 75 is used to use the embedding layer in the trained improved convolutional neural network model to segment the text to be classified and determine the word vector corresponding to each text or word, so as to obtain the word vector matrix corresponding to the text to be classified.
  • the second acquisition module 75 is specifically configured to determine the word vector corresponding to each character or word in the text to be classified according to the preset dictionary obtained through training, where the word vector uniquely identifies the character or word in the preset dictionary, and to splice the word vectors corresponding to the characters or words in the order in which they appear in the text to be classified to obtain the word vector matrix corresponding to the text to be classified.
  • the acquisition module 72 is specifically configured to input the word vector matrix into at least two different convolutional networks; for each convolutional network, use the corresponding at least one convolutional layer to extract features from the word vector matrix to obtain the feature matrix of the word vector matrix; and use the pooling layer of each convolutional network to reduce the dimension of the feature matrix to obtain sub-feature data, where the sub-feature data is in vector form and the vector corresponding to the sub-feature data has the same dimension as the vectors in the word vector matrix.
  • the acquisition module 72 is specifically configured to determine the number of convolutional layers in the convolutional network; if the number of convolutional layers is one, determine the word vector matrix as the input matrix of the convolutional layer and use the convolutional layer to perform the first operation on its input matrix; and determine the output matrix obtained by the convolutional layer performing the first operation as the feature matrix of the word vector matrix.
  • the first operation includes: performing a convolution calculation on the input matrix with the preset stride, convolution kernel size and convolution kernel padding method of the convolutional layer to obtain a first matrix; performing layer normalization on the first matrix to obtain a second matrix; inputting the second matrix into a preset activation function and using the preset activation function to output a third matrix; summing the input matrix and the third matrix to obtain a residual matrix; and performing layer normalization on the residual matrix to obtain the output matrix.
  • the acquisition module 72 is also specifically configured to determine the word vector matrix as the input matrix of the convolutional layer with the largest convolution kernel size; use each convolutional layer in turn to perform the first operation on its input matrix, and determine the output matrix obtained by each convolutional layer as the input matrix of the next convolutional layer, until the convolutional layer with the smallest convolution kernel size performs the first operation on its input matrix; and determine the output matrix obtained by the smallest-kernel convolutional layer performing the first operation as the feature matrix of the word vector matrix.
  • the acquisition module 72 is specifically configured to determine the maximum value of each vector in the feature matrix in the same dimension as the sub-feature data.
  • the acquisition module 72 is specifically configured to determine the category with the highest probability among the respective intention categories corresponding to the target feature data as the target intention category.
  • the text classification device provided in this embodiment can execute the text classification method provided in any one of Embodiments 2 to 6 above; its implementation and principle are similar and are not repeated here.
  • FIG. 7 is a block diagram of an electronic device provided according to a seventh embodiment of the present disclosure.
  • the electronic device 80 provided in this embodiment includes a memory 81, a processor 82 and an input device 83 interconnected by circuitry.
  • the memory 81 stores a computer program.
  • the input device 83 is used to receive voice interaction instructions issued by the user.
  • the processor 82 is configured to execute the text classification method as provided in any of the above embodiments through a computer program.
  • the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium includes a stored program. When the program is run, the text classification method as provided in any of the above embodiments is executed.
  • the memory 81 can be any suitable magnetic storage medium or magneto-optical storage medium, such as resistive random access memory RRAM (Resistive Random Access Memory), dynamic random access memory DRAM (Dynamic Random Access Memory), static random access memory SRAM ( Static Random-Access Memory), Enhanced Dynamic Random Access Memory EDRAM (Enhanced Dynamic Random Access Memory), High-Bandwidth Memory HBM (High-Bandwidth Memory), Hybrid Storage Cube HMC (Hybrid Memory Cube), etc.
  • the processor 82 can be an appropriate hardware processor, such as a CPU (central processing unit), GPU (graphics processing unit), FPGA (field-programmable gate array), DSP (digital signal processor), ASIC (application-specific integrated circuit), etc.
  • the input device 83 may be an appropriate microphone, microphone array, input/output (I/O) interface, communication component, or other device, component or module that can be used to receive voice interaction instructions sent by the user.
  • the present disclosure also provides a computer program product, including a computer program, which when executed by a processor implements the text classification method provided in any one of the above-mentioned Embodiments 1 to 6.
  • the present disclosure also provides a computer program which, when executed by a processor, implements the text classification method provided in any one of Embodiments 1 to 6 above.
  • although the steps in the flowchart are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in the flowchart may include multiple sub-steps or stages. These sub-steps or stages are not necessarily executed at the same time and may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
  • the above device embodiments are only illustrative, and the device of the present disclosure can also be implemented in other ways.
  • the division of modules in the above embodiment is only a logical function division, and there may be other division methods in actual implementation.
  • multiple modules can be combined, or can be integrated into another system, or some features can be ignored or not implemented.
  • each functional module in each embodiment of the present disclosure may be integrated into one module, or each module may exist independently, or two or more modules may be integrated together.
  • the above integrated modules can be implemented in the form of hardware or software program modules. If the integrated unit/module is implemented in the form of hardware, the hardware can be a digital circuit, an analog circuit, etc.
  • the physical implementation of hardware structures includes but is not limited to transistors, memristors, etc.
  • Integrated modules may be stored in a computer-readable memory when implemented as software program modules and sold or used as stand-alone products.
  • based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the various embodiments of the present disclosure.
  • the aforementioned memory includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
  • each embodiment is described with its own emphasis.
  • for parts that are not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
  • the technical features of the above embodiments may be combined in any way. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to fall within the scope of this specification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

本公开公开了一种文本分类方法、文本分类装置、存储介质及电子装置,涉及智能家居技术领域,该文本分类方法包括:接收用户发出的语音交互指令,并将语音交互指令转换为待分类文本;采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据,并将各子特征数据进行拼接,以获得目标特征数据,卷积网络包括池化层和至少一个卷积层;将目标特征数据输入已训练的改进卷积神经网络模型的全连接层中,根据目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别;根据目标意图类别控制目标家居设备执行对应的操作。本公开提供的方法能够在保证模型的训练和分类速度的同时提高分类的准确率。

Description

文本分类方法、文本分类装置、存储介质及电子装置
本公开要求于2022年03月16日提交中国专利局、申请号为202210259093.4、申请名称为“文本分类方法、文本分类装置、存储介质及电子装置”的中国专利申请的优先权,其全部内容通过引用结合在本公开中。
技术领域
本公开涉及智能家居领域,具体而言,涉及一种文本分类方法、文本分类装置、存储介质及电子装置。
背景技术
在智能家居领域,通常是将用户的语音交互指令转换为文本,再对文本进行解析,进而获取到用户的意图,控制智能家居设备。对文本进行解析需要使用文本分类方法将文本在语义层面进行分类,计算文本对应每个意图的概率,并在所有已支持的意图中选择一个最可能的意图。
现目前,用于文本分类的预训练模型结构复杂,在训练和使用时都需要很多的计算资源,存在训练和分类速度慢的问题,而结构简单的神经网络在文本分类时又存在准确率较低的问题。
发明内容
本公开提供一种文本分类方法、文本分类装置、存储介质及电子装置,用以解决现有技术中无法在提高模型的训练和分类速度的同时提高分类的准确率的问题。
根据本公开的第一方面,提供一种文本分类方法,包括:接收用户发出的语音交互指令,并将所述语音交互指令转换为待分类文本;采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取所述待分类文本的子特征数据,并将各所述子特征数据进行拼接,以获得目标特征数据,所述卷积网络包括池化层和至少一个卷积层;将目标特征数据输入所述已训练的改进卷积神经网络模型的全连接层中,根据所述目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别;根据所述目标意图类别控制目标家居设备执行对应的操作。
根据本公开的第二方面,提供一种文本分类装置,包括:接收模块,用于接收用户发出的语音交互指令,并将所述语音交互指令转换为待分类文本;获取模块,用于采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取所述待分类文本的子特征数据,并将各所述子特征数据进行拼接,以获得目标特征数据,所述卷积网络包括池化层和至少一个卷积层;确定模块,用于将目标特征数据输入所述已训练的改进卷积神经网络模型的全连接层中,根据所述目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别;控制模块,用于根据所述目标意图类别控制目标家居设备执行对应的操作。
根据本公开的第三方面,提供一种计算机可读的存储介质,计算机可读的存储介质包括存储的程序,所述程序运行时执行如第一方面中所述的方法。
根据本公开的第四方面,提供一种电子装置,包括存储器、处理器和输入装置,所述存储器中存储有计算机程序,所述输入装置用于接收用户发出的语音交互指令,所述处理器被设置为通过所述计算机程序执行如第一方面中所述的方法。
根据本公开的第五方面,提供一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现如第一方面中所述的方法。
根据本公开的第六方面,提供一种计算机程序,包括:该计算机程序被处理器执行时实现如第一方面中所述的方法。
本公开提供的文本分类方法、文本分类装置、存储介质及电子装置,由于采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据,因此可以提取待分类文本不同维度和深度的子特征数据,增加待分类文本的特征数量,获得待分类文本的更多特征,进而能够提高改进卷积神经网络模型对文本分类的准确度。同时,由于不同的卷积网络是并行提取待分类文本的特征,未增加网络深度,模型的训练和分类速度有所保障,所以,能够在保证模型的训练和分类速度的同时提高分类的准确率。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是根据本公开实施例提供的文本分类方法的应用场景图;
图2是根据本公开第一实施例提供的文本分类方法流程示意图;
图3是根据本公开实施例提供的一种改进卷积神经网络模型的网络结构;
图4是根据本公开第三实施例提供的文本分类方法流程示意图;
图5是根据本公开第四实施例提供的文本分类方法流程示意图;
图6是根据本公开第六实施例提供的文本分类装置框图;
图7是根据本公开第七实施例提供的电子装置框图。
具体实施方式
为了使本技术领域的人员更好地理解本公开方案,下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分的实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本公开保护的范围。
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
根据本公开实施例的一个方面,提供了一种文本分类方法。该文本分类方法广泛应用于智慧家庭(Smart Home)、智能家居、智能家用设备生态、智慧住宅(Intelligence House)生态等智能数字化控制应用场景。
图1是根据本公开实施例提供的文本分类方法的一种应用场景示意图,如图1所示,本公开实施例提供的一种应用场景中,包括:用户和电子装置102。用户能够根据自身的使用需求发送语音交互指令。电子装置可以是用户的语音指令想要控制的智能家居设备、也可以是能够与用户的语音指令想要控制的智能家居设备通信的其他智能家居设备或服务器等。因此,电子装置可以直接接收用户发出的语音交互指令,也可以通过与智能家居设备进行通信,进而接收用户发出的语音交互指令。
电子装置102中配置有已训练的改进神经网络模型。电子装置102接收用户发出的语音交互指令,并将语音交互指令转换为待分类文本;采用已训练的改进卷积神经网络模型中至少两个不同的卷积网络分别提取待分类文本的子特征数据,并将各子特征数据进行拼 接,以获得目标特征数据;将目标特征数据输入已训练的改进卷积神经网络模型的全连接层中,根据目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别;根据目标意图类别控制家居设备执行对应的操作。
电子装置102为用户的语音指令想要控制的智能家居设备时,可以直接控制自身执行对应的操作。
电子装置102为能够与用户的语音指令想要控制的智能家居设备进行通信的服务器时,可以向用户的语音指令想要控制的智能家居设备发送指令,以控制用户的语音指令想要控制的智能家居设备执行对应的操作。
电子装置102为能够与用户的语音指令想要控制的智能家居设备进行通信的其他智能家居设备时,可以向用户的语音指令想要控制的智能家居设备发送指令,以控制用户的语音指令想要控制的智能家居设备执行对应的操作。
可在电子装置102上或独立于电子装置102设置数据库,用于为电子装置102提供数据存储服务,可在电子装置102上或独立于电子装置102配置云计算和/或边缘计算服务,用于为电子装置102提供数据运算服务。
智能家居设备可以并不限定于为PC、手机、平板电脑、智能空调、智能烟机、智能冰箱、智能烤箱、智能炉灶、智能洗衣机、智能热水器、智能洗涤设备、智能洗碗机、智能投影设备、智能电视、智能晾衣架、智能窗帘、智能影音、智能插座、智能音响、智能音箱、智能新风设备、智能厨卫设备、智能卫浴设备、智能扫地机器人、智能擦窗机器人、智能拖地机器人、智能空气净化设备、智能蒸箱、智能微波炉、智能厨宝、智能净化器、智能饮水机、智能门锁、语音智能问答系统等。
以下对本公开所涉及的现有技术进行详细说明分析。
现有技术中,可以采用已有的预训练模型。已有的预训练模型中,文字或单词对应的词向量已经经过多次训练,能够较好地反映文字或单词中的语义,只需要使用符合实际应用场景的训练数据集对预训练模型进行训练(简称finetune),调整优化预训练模型的参数,使其更适合实际应用场景,以在实际应用场景中获得更高的准确率。但已有的预训练模型具有复杂的网络结构和数量庞大的参数,使用预训练模型进行训练和分类时,都需要占用较多的计算资源,并且,由于其网络结构复杂,参数量大,训练速度慢,想要将预训练模型的参数调整为适合实际应用场景的参数,需要较多的时间,并且,在参数调整后,使用其进行分类时的速度也不会提高。并且,由于已有的BERT(Bidirectional Encoder Representation from Transformers)模型、transformer模型等预训练模型采用的是去噪声自动编码器(DAE,Denoising Autoencoder)的方式,在预训练时会引入随机噪声来增加模型的鲁棒性。例如,对于预训练模型BERT来说,会使用[MASK]来随机替换原始的字词。在finetune时,却不使用[MASK]替换训练数据集中原始的字词,造成预训练过程与finetune过程的数据集分布是不同的。对于不同长度的文本来说,虽然被[MASK]的概率是相同的,但是被[MASK]的词在句子中的重要程度是不同的。示例性地,在100个词中随机抽10个词和在10个词中随机抽一个词,虽然比例相同,但是在语义层面来看,总数10个词中的一个词对句子的重要程度更高一些,这就导致BERT在短文本处理方面有着先天的缺点。而在智能家电领域,语音交互指令通常都是较短的文本,用户与智能家居设备进行交互时,较少会发出长文本指令。
现有技术中,还存在卷积神经网络(CNN)、长短期记忆人工神经网络(LSTM)等网络结构较简单、训练和分类速度较快的神经网络。通过使用符合实际应用场景的训练数据集对卷积神经网络、长短期记忆人工神经网络等进行训练,以优化网络参数,提高分类的准确度。但较简单的网络结构意味着网络深度不够、参数量少、模型容量小、对特征的敏感性高,因此这类神经网络虽然训练和分类速度较快,但准确度较低,并且无法获取到长距离的上下文信息,进一步降低了分类的准确率。
综上,现有技术中的文本分类方法,无法在保证模型的训练和分类速度的同时提高分类的准确率。
所以,在面对现有技术中的问题时,发明人通过创造性研究,提出本公开的技术方案,旨在解决现有技术的如上问题。为了能够在保证模型的训练和分类速度的同时提高分类的准确率,需要对卷积神经网络的结构进行改进,提高模型容量,以获得更高的准确度。为提高模型容量,发明人提出可以采用多组不同的卷积网络对待分类文本进行特征提取,从不同的维度和深度对待分类文本进行特征提取,增加待分类文本的特征数量,获得待分类文本的更多特征,进而能够提高改进卷积神经网络模型对文本分类的准确度。同时,由于不同的卷积网络是并行提取待分类文本的特征,未增加网络深度,模型的训练和分类速度有所保障,因此,能够在保证模型的训练和分类速度的同时提高分类的准确率。
下面以具体地实施例对本公开的技术方案以及本公开的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
实施例一
图2是根据本公开第一实施例提供的文本分类方法流程示意图,如图2所示,本公开的执行主体为文本分类装置,该装置位于电子装置中。本实施例提供的文本分类方法包括步骤201至步骤204。
步骤201,接收用户发出的语音交互指令,并将语音交互指令转换为待分类文本。
本实施例中,电子装置接收到用户发出的语音交互指令后,可以通过预先配置的语音转换装置或语音转文本的程序等将用户发出的语音交互指令转换为交互指令文本,并可以使用第一预设函数,将交互指令文本中的文字、单词等进行编码后形成的待分类文本。第一预设函数对文字、单词等的编码可以是将文字、单词数字等映射到多维向量空间,使得每一个文字和单词能够使用一个唯一的多维向量进行表示,并将这些表示文字或单词的多维向量按照在交互指令文本中的顺序进行拼接,以形成待分类文本。具体地,可以将各多维向量拼接为一个向量矩阵或者一个更高维度的行向量或列向量等。
步骤202,采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据,并将各子特征数据进行拼接,以获得目标特征数据,卷积网络包括池化层和至少一个卷积层。
本实施例中,改进卷积神经网络模型是预先训练好的,改进卷积神经网络模型包括嵌入层、至少两个不同的卷积网络、特征融合层、全连接层和归一化层。其中,卷积网络包括池化层和至少一个卷积层。两个不同的卷积网络可以是卷积网络中卷积层的数量不同,也可以是卷积层的数量相同但是各卷积层预设的步长不同、各卷积层预设的卷积核尺寸不同、各卷积层预设的卷积核填补方式不同。
本实施例中,待分类文本的子特征数据是分别采用不同的卷积网络对待分类文本进行特征提取后获得的数据。因此,待分类文本的子特征数据可以是不同维度的数据。在改进卷积神经网络模型中,至少两个不同的卷积网络是并行的,可以从不同的维度和深度对待分类文本进行特征提取,能够增加待分类文本的特征数量,进而能够实现对待分类文本进行更准确的分类。
本实施例中,目标特征数据是由至少两个待分类文本的子特征数据拼接得到的数据。具体地,在改进卷积神经网络模型中,各卷积网络将提取的子特征数据输入特征融合层,特征融合层将各子特征数据拼接。示例性地,将两个为128维向量的子特征数据进行拼接,得到为256维向量的目标特征数据。
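A minimal numpy sketch of the feature-fusion (concat) step described above; the 128-dimensional sub-feature size is taken from the example in this paragraph, and the random values are stand-ins for real convolutional-network outputs:

```python
import numpy as np

# Stand-ins for the sub-feature vectors produced by two parallel convolutional networks
sub_feature_a = np.random.rand(128)   # output of the first convolutional network
sub_feature_b = np.random.rand(128)   # output of the second convolutional network

# The feature-fusion (concat) layer joins the sub-features end to end
target_feature = np.concatenate([sub_feature_a, sub_feature_b])
print(target_feature.shape)   # (256,), i.e. the target feature data
```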
步骤203,将目标特征数据输入已训练的改进卷积神经网络模型的全连接层中,根据 目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别。
本实施例中,目标特征数据为待分类文本在改进卷积神经网络模型中的分布式表征,为确定待分类文本的意图类别,改进卷积神经网络模型的全连接层可以使用第二预设函数,将目标特征数据映射到预先构建的各意图类别中,即通过全连接层将目标特征数据的维度转换为意图类别数的维度,以确定待分类文本对应的目标意图类别。预先构建的各意图类别为用户发出的语音交互指令可能属于的意图类别的集合。目标意图类别为用户发出的语音交互指令实际属于的意图类别。第二预设函数可以通过对改进卷积网络模型进行训练获得。
步骤204,根据目标意图类别控制目标家居设备执行对应的操作。
本实施例中,可以预先对各意图类别设置对应的操作,在确定用户发出的语音交互指令对应的目标意图类别后,电子装置可以控制智能家居设备执行目标意图类别对应的操作,以实现用户通过语音交互指令控制智能家居设备。
图3示出了本公开提供的一种改进卷积神经网络模型的网络结构。
本实施例中,预先构建的改进卷积神经网络模型可以如图3所示,包括嵌入层(embedding层)31、第一卷积网络32、第二卷积网络33、特征融合层(concat层)34、全连接层(dense层)35、归一化层(softmax层)36。
其中,第一卷积网络32包括第一卷积层(conv_6)321、第二卷积层(conv_4)322、第三卷积层(conv_2)323和第一池化层(max pooling层)324。第一卷积层321的卷积核尺寸为6*6,第二卷积层322的卷积核尺寸为4*4,第三卷积层323的卷积核尺寸为2*2。
第二卷积网络33包括第四卷积层(conv_7)331、第五卷积层(conv_5)332、第六卷积层(conv_3)333和第二池化层(max pooling层)334。第四卷积层331的卷积核尺寸为7*7,第五卷积层332的卷积核尺寸为5*5,第六卷积层333的卷积核尺寸为3*3。
本实施例中预先构建的改进卷积神经网络模型的各种参数可以通过预先训练来确定,改进卷积神经网络模型在预先训练时,可以在特征融合层和归一化层之间添加dropout层,在每一次训练的过程中随机拿掉一些神经元,减少神经网络模型参数过拟合。
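For illustration only, the network structure of FIG. 3 (图3) can be sketched in Keras-style Python roughly as follows. The vocabulary size, sequence length, filter count, dropout rate and number of intent classes are assumptions made for this sketch and are not taken from the disclosure; 1-D convolution with 'SAME' padding is used as one plausible reading of the convolution over the word-vector matrix; and the residual and layer-normalization details of the first operation described in 实施例四 are omitted here for brevity:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 5000    # assumed size of the preset dictionary
EMBED_DIM = 128      # word-vector dimension used in the examples
SEQ_LEN = 10         # assumed maximum text length
NUM_CLASSES = 20     # assumed number of intent categories

def conv_branch(x, kernel_sizes):
    """One convolutional network: conv layers with kernel sizes ordered from
    large to small, followed by max pooling down to a single feature vector."""
    for k in kernel_sizes:
        x = layers.Conv1D(
            filters=EMBED_DIM, kernel_size=k, strides=1, padding='same',
            kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02))(x)
    return layers.GlobalMaxPooling1D()(x)        # (batch, EMBED_DIM) sub-feature data

inputs = layers.Input(shape=(SEQ_LEN,), dtype='int32')      # index-encoded text
embedded = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)  # embedding layer 31

branch_a = conv_branch(embedded, [6, 4, 2])      # first convolutional network 32
branch_b = conv_branch(embedded, [7, 5, 3])      # second convolutional network 33

merged = layers.Concatenate()([branch_a, branch_b])                # concat layer 34
merged = layers.Dropout(0.5)(merged)             # dropout used only during training
outputs = layers.Dense(NUM_CLASSES, activation='softmax')(merged)  # dense 35 + softmax 36

model = tf.keras.Model(inputs, outputs)
model.summary()
```

Because both branches use 'SAME' padding, each keeps the 10-step sequence length until its pooling layer, which matches the 10*128 feature matrix described in 实施例三.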
本实施例提供的文本分类方法,通过接收用户发出的语音交互指令,并将语音交互指令转换为待分类文本;采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据,并将各子特征数据进行拼接,以获得目标特征数据,卷积网络包括池化层和至少一个卷积层;将目标特征数据输入已训练的改进卷积神经网络模型的全连接层中,根据目标特征数据对待分类文本进行意图类别的分类,以确定对应的 目标意图类别;根据目标意图类别控制目标家居设备执行对应的操作。由于采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据,因此可以提取待分类文本不同维度和深度的特征,增加待分类文本的特征数量,获得待分类文本的更多特征,进而能够提高改进卷积神经网络模型对文本分类的准确度。同时,由于不同的卷积网络是并行提取待分类文本的特征,未增加网络深度,模型的训练和分类速度有所保障,所以,本实施例提供的文本分类方法能够在保证模型的训练和分类速度的同时提高分类的准确率。
实施例二
本实施例提供的文本分类方法,在实施例一的基础上,已训练的改进卷积神经网络模型还包括嵌入层,在步骤202(采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据)之前,还包括步骤2011。
步骤2011,采用已训练的改进卷积神经网络模型中的嵌入层对待分类文本进行分词处理并确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵。
本实施例中,已训练的改进卷积神经网络模型中,不同编码均对应一个唯一的词向量,由于交互指令文本中文字、单词等以编码的形式进行表示,因此,交互指令文本中的文字、单词等均与不同的词向量一一对应,即,已训练的改进卷积神经网络模型中,词向量可以用于对文字、单词等用户意图的载体形成唯一标识。嵌入层能够将待分类文本进行分词处理,即,将待分类文本中以编码形式进行标识的文字、单词和数字进行区分,以确定不同文字或编码对应的词向量,进而确定待分类文本对应的词向量矩阵。待分类文本对应的词向量矩阵可以是将各文字、单词和数字对应的词向量按行或列进行组合,进而形成矩阵。示例性地,用户发出的语音交互指令在转换为交互指令文本后共有10个汉字,则待分类文本由这10个汉字的编码按交互指令文本中的顺序排列而成,可以理解的是,每个汉字的编码可以具有相同的长度。嵌入层根据已训练的各个编码对应的词向量,将待分类文本转换为词向量矩阵。当单个词向量的维度为128维时,词向量矩阵可以是10*128的向量矩阵。
改进卷积神经网络模型在第一次训练时,嵌入层可以随机生成各个编码对应的词向量,词向量与各个编码唯一对应。改进卷积神经网络模型在训练时可以通过反向传播算法调整各个的编码对应的词向量,并可以将调整后的各编码与词向量的对应关系保存在数据库中,以便在改进卷积神经网络模型下一次训练时,嵌入层可以从数据库中获取各个编码对应的词向量。
本实施例提供的文本分类方法,通过采用已训练的改进卷积神经网络模型中的嵌入层对待分类文本进行分词处理并确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵。由于确定出待分类文本对应的词向量矩阵,将待分类文本中的意图以数值的形式进行表示,在通过已训练的改进卷积神经网络模型后,能够确定待分类文本的意图类别,进而实现文本分类。
作为一种可选的实施方式,对步骤2011中确定各文字或单词对应的词向量进行细化,则步骤2011细化包括步骤20111至步骤20112。
步骤20111,根据训练获得的预设字典确定待分类文本中各文字或单词对应的词向量,词向量用于唯一标识预设字典中的文字或单词。
本实施例中,预设字典可以是在改进卷积神经网络模型的训练过程中,对训练数据集中的字进行去重排序后得到的字典,训练数据集中的字在预设字典中均具有唯一索引值。第一预设函数可以将交互指令文本中的文字和单词等映射为文字和单词等在预设字典中的索引值。已训练的改进卷积神经网络模型的嵌入层可以通过各文字或单词在字典中的索引值,在训练后获得的数据库中查询各索引值对应的词向量。
步骤20112,将各文字或单词对应的词向量按照待分类文本中的顺序进行拼接,以获得待分类文本对应的词向量矩阵。
本实施例中,由于词向量可以为列向量或行向量,且一个单独的N维词向量也可以组成1行N列或N行1列的词向量矩阵(N大于或等于1)。因此,可以根据分词后文字和单词的数量M(M大于或等于1),以及各文字或单词在待分类文本中的顺序,将各文字或单词对应的N维词向量,组成M行N列、N行M列、1行M*N列或M*N行1列的词向量矩阵。
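A minimal Python sketch of the index-lookup-and-stack procedure in steps 20111 and 20112; the dictionary entries, their index values and the randomly initialised 128-dimensional embedding table are all illustrative assumptions, not values from the disclosure:

```python
import numpy as np

EMBED_DIM = 128

# Assumed preset dictionary obtained from training: each character has a unique index
preset_dict = {'打': 0, '开': 1, '空': 2, '调': 3}

# Assumed trained embedding table: one EMBED_DIM-dimensional word vector per index
embedding_table = np.random.rand(len(preset_dict), EMBED_DIM)

def text_to_matrix(text):
    """Map each character to its dictionary index, look up its word vector,
    and stack the vectors in the original text order into a word-vector matrix."""
    indices = [preset_dict[ch] for ch in text]
    return embedding_table[indices]       # shape: (len(text), EMBED_DIM)

print(text_to_matrix('打开空调').shape)   # (4, 128)
```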
本实施例中,通过根据训练获得的预设字典确定待分类文本中各文字或单词对应的词向量,词向量用于唯一标识预设字典中的文字或单词;将各文字或单词对应的词向量按照待分类文本中的顺序进行拼接,以获得待分类文本对应的词向量矩阵,由于预设字典中各文字具有唯一索引,将待分类文本转换为预设字典中的索引,再使用预设字典中的索引确定待分类文本对应的词向量,所以提高将待分类文本转换为词向量矩阵的速度,进而提高模型的训练和分类速度。
实施例三
图4是根据本公开第三实施例提供的文本分类方法流程示意图,如图4所示,本实施例提供的文本分类方法,在实施例二的基础上,对步骤202进行细化,则步骤202细化包 括步骤301至步骤302。
步骤301,将词向量矩阵分别输入至少两个不同的卷积网络。
本实施例中,至少两个不同的卷积网络是并行结构,即,输入各卷积网络的数据是相同的,各卷积网络输出的数据也去往相同的地方。具体地,将待分类文本对应的词向量矩阵分别输入各卷积网络。
步骤302,针对每个卷积网络,采用对应的至少一个卷积层对词向量矩阵进行特征提取,以获得词向量矩阵的特征矩阵。
本实施例中,对于每个卷积网络,其包括池化层和至少一个卷积层。若卷积网络包括一个卷积层和池化层,则可以采用该卷积层预设的卷积核尺寸、预设的步长以及预设的卷积核填补方式对词向量矩阵进行卷积计算。本实施例中,预设的卷积核尺寸可以根据实际应用场景中统计的语音交互指令中词语的长度进行设置,示例性地,中文词语长度通常在2至7个字,因此,预设的卷积核尺寸可以为1*2、2*1、2*2、1*3、3*1、3*3、1*4、4*1、4*4、1*5、5*1、5*5、6*6、7*7等。本实施例中,预设的卷积核填补方式可以为“SAME”模式,即,在卷积计算时,卷积核尺寸在词向量矩阵中没有数据的边界进行补0,以使卷积后的向量长度与原向量长度相同。此处,卷积计算可以为卷积神经网络中的卷积计算方法,本实施例在此不做赘述。改进卷积神经网络模型在训练时可以通过反向传播算法调整各卷积层的权重矩阵。在改进卷积神经网络模型第一次训练时,可以使用随机初始化的方法生成,例如使用标准差为0.02的截断正态分布(truncated_normal_initializer)进行权重矩阵的初始化。
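As a toy illustration of the 'SAME' padding behaviour described above, the borders of the input can be zero-padded so that a stride-1 convolution returns an output of the same length as the input; the signal and kernel values below are arbitrary stand-ins:

```python
import numpy as np

def conv1d_same(x, kernel):
    """Stride-1 1-D convolution in 'SAME' mode: zero-pad the borders where the
    kernel has no data so that the output length equals the input length."""
    k = len(kernel)
    pad_left, pad_right = (k - 1) // 2, k // 2
    x_padded = np.pad(x, (pad_left, pad_right))
    return np.array([np.dot(x_padded[i:i + k], kernel) for i in range(len(x))])

signal = np.arange(10, dtype=float)
print(conv1d_same(signal, np.ones(3)).shape)   # (10,), same length as the input
```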
步骤303,采用各卷积网络的池化层对特征矩阵进行降维,以获得子特征数据,子特征数据为向量形式,且子特征数据对应的向量与词向量矩阵中的向量具有相同的维度。
本实施例中,由于词向量矩阵中可能包括多个词向量,因此,在卷积计算之后得到的也可能是一个包括多个向量的向量矩阵。而由于卷积网络为不同的卷积网络,因此,使用不同的卷积网络提取的词向量矩阵的特征矩阵的行数和列数可能并不相同。本实施例中,采用卷积网络中的池化层对词向量矩阵的特征矩阵进行降维以使不同卷积网络能够输入相同形式的子特征数据,子特征数据可以为向量形式,且为便于后续将不同卷积网络提取的不同维度和深度的子特征数据进行融合,子特征数据对应的向量可以与词向量矩阵中的向量具有相同的维度。
继续根据上述示例进行举例说明,若各卷积网络的各卷积层预设的卷积核填补方式均为“SAME”模式,则卷积后的向量长度与原向量长度相同,卷积后所得的特征矩阵的行 数和列数与输入卷积网络的矩阵的行数和列数相同。若待分类文本对应的词向量矩阵由10个128维的词向量组成,为10*128(10行128列)的矩阵时。采用对应卷积网络的至少一个卷积层对词向量矩阵进行特征提取后,特征矩阵也为10*128的矩阵。此时,采用池化层对特征矩阵进行降维,将特征矩阵转换为1*128的矩阵,即128维的向量。具体地,可以将求取矩阵各列的平均值,并将各列的平均值作为降维后的向量中各维的值。
本实施例提供的文本分类方法,通过将词向量矩阵分别输入至少两个不同的卷积网络;针对每个卷积网络,采用对应的至少一个卷积层对词向量矩阵进行特征提取,以获得词向量矩阵的特征矩阵;采用各卷积网络的池化层对特征矩阵进行降维,以获得子特征数据,子特征数据为向量形式,且子特征数据对应的向量与词向量矩阵中的向量具有相同的维度。由于通过不同的卷积网络获取到待分类文本对应的词向量矩阵在不同维度和深度的特征矩阵,可以获取到待分类文本更多的特征数量,将特征矩阵降维成子特征数据,可以减少后续的计算量,所以,能够进一步提高文本分类的速度和准确度。
实施例四
图5是根据本公开第四实施例提供的文本分类方法流程示意图,如图5所示,本实施例提供的文本分类方法,在实施例三的基础上,对步骤302中采用至少一个卷积层对词向量矩阵进行特征提取进行细化,则步骤302细化包括步骤401至步骤403。
步骤401,确定卷积网络中卷积层的数量。
本实施例中,可以直接查询所使用的已训练的改进卷积神经网络模型中各不同的卷积网络中卷积层的数量。
步骤402,若确定卷积层的数量为一个,则将词向量矩阵确定为卷积层的输入矩阵,并采用卷积层对其输入矩阵执行第一操作。
步骤403,将卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。
其中,第一操作包括:以卷积层预设的步长、卷积核尺寸和卷积核填补方式对输入矩阵进行卷积计算,以获得第一矩阵;对第一矩阵进行层归一化处理,以获得第二矩阵;将第二矩阵输入预设激活函数,并采用预设激活函数输出第三矩阵;将输入矩阵与第三矩阵进行求和,以获得残差矩阵;将残差矩阵进行层归一化处理,以获得输出矩阵。
本实施例的第一操作中,卷积计算能够对输入矩阵进行特征提取,将原始输入信息(待分类文本对应的词向量矩阵)进行变换,将原始输入信息中能够代表其真实含义的关键信息、能够将其与其他信息进行区别的区别信息等以输出特征的形式表现。层归一化处理能够让第一矩阵的分布更加稳定,提高网络收敛速度,进而提高改进卷积神经网络模型在训练时的速度。激活函数可以使用gelu函数,激活函数能够使得反向传播算法对权重矩阵进行更好的优化。求和计算残差矩阵,能够防止模型过拟合、避免梯度消失。对残差矩阵进行层归一化处理能够让残差矩阵的参数分布更加稳定,提高网络收敛速度。
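A hedged Keras-style sketch of the first operation applied by one convolutional layer, assuming a batch of 10*128 word-vector matrices; the filter count of 128 is chosen only so that the residual sum with the input is shape-compatible and is an assumption of this sketch:

```python
import tensorflow as tf
from tensorflow.keras import layers

def first_operation(input_matrix, kernel_size, filters=128):
    """First operation of one convolutional layer: convolution -> layer
    normalization -> gelu activation -> residual sum with the input ->
    layer normalization."""
    x = layers.Conv1D(filters, kernel_size, strides=1, padding='same',
                      kernel_initializer=tf.keras.initializers.TruncatedNormal(
                          stddev=0.02))(input_matrix)       # first matrix
    x = layers.LayerNormalization()(x)                      # second matrix
    x = tf.nn.gelu(x)                                       # third matrix
    x = layers.Add()([input_matrix, x])                     # residual matrix
    return layers.LayerNormalization()(x)                   # output matrix

# One 10x128 word-vector matrix (batch size 1) through a kernel-size-6 layer
dummy = tf.random.normal([1, 10, 128])
print(first_operation(dummy, kernel_size=6).shape)          # (1, 10, 128)
```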
本实施例提供的文本分类方法,通过确定卷积网络中卷积层的数量;若确定卷积层的数量为一个,则将词向量矩阵确定为卷积层的输入矩阵,并采用卷积层对其输入矩阵执行第一操作;将卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵;第一操作包括:以卷积层预设的步长、卷积核尺寸和卷积核填补方式对输入矩阵进行卷积计算,以获得第一矩阵;对第一矩阵进行层归一化处理,以获得第二矩阵;将第二矩阵输入预设激活函数,并采用预设激活函数输出第三矩阵;将输入矩阵与第三矩阵进行求和,以获得残差矩阵;将残差矩阵进行层归一化处理,以获得输出矩阵。由于在卷积网络中将卷积的输入与输出计算残差,得到残差矩阵,并在卷积层和残差矩阵后进行层归一化处理,所以能够减少模型的过拟合问题,加速模型的训练和分类速度。
作为一种可选的实施方式,步骤302还包括步骤404至步骤407。
步骤404,若确定卷积层的数量为多个,且各卷积层以卷积核尺寸从大至小的顺序连接。
本实施例中,若确定卷积层的数量为多个,由于中文的上下文信息对当前词的语义理解至关重要,卷积核尺寸按照从大至小的顺序排列可以先使用尺寸较大的卷积核提取长文本特征,再使用尺寸较小的卷积核将长文本特征融合到短文本特征,进而可以将长距离上下文信息融合到局部特征中,保证上下文与当前词语的特征融合,使得抽象得到的目标特征数据更准确,进而提高分类的准确度。因此,在改进卷积神经网络模型中,卷积网络中卷积层的数量为多个时,各卷积层以卷积核尺寸从大至小的顺序连接。
步骤405,将词向量矩阵确定为卷积核尺寸最大的卷积层的输入矩阵。
步骤406,依次采用各卷积层对其输入矩阵执行第一操作,并将各卷积层执行第一操作得到的输出矩阵确定为下一卷积层的输入矩阵,直至采用卷积核尺寸最小的卷积层对其输入矩阵执行第一操作。
步骤407,将卷积核尺寸最小的卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。
本实施例中,各卷积层是相互连接的,将词向量矩阵输入卷积核尺寸最大的卷积层, 就能够从卷积核尺寸最小的卷积层得到词向量矩阵的特征矩阵。
本实施例中,中间的各卷积层(卷积核尺寸既不是尺寸最大也不是尺寸最小的卷积层),用于将上一卷积层进行特征提取后的输出矩阵作进一步的提取,并作为下一卷积层的输入矩阵。
本实施例中,尺寸最小的卷积层执行第一操作输出得到的矩阵为具有多个卷积层的卷积网络对词向量矩阵进行特征提取后得到的特征矩阵。依次采用各卷积层对输入矩阵执行第一操作,并将上一卷积层的输出矩阵确定为下一卷积层的输入矩阵,能够将待分类文本中长距离上下文信息融合到局部特征中,得到抽象的高层特征,即将待分类文本中的语义信息、语法信息等逐步融合至一个特征向量中。
作为一种可选的实施方式,已训练的改进卷积神经网络模型中,可以包括两个不同的卷积网络,其中一个卷积网络由卷积核尺寸分别为7、5、3的三个卷积层以及一个最大池化层依次连接组成,另一个卷积网络由卷积核尺寸分别为6、4、2的三个卷积层以及一个最大池化层依次连接组成。中文词语长度通常在2-7个字,因此选取[2,3,4,5,6,7]的卷积核尺寸能够获取上下文完整信息,并且,倒序排列的卷积核尺寸更适用于增加中文中上下文信息对当前词的影响。而太深层次的卷积网络会造成过拟合,所以将卷积核分成了2组[2,4,6]和[3,5,7],通过形成两个不同的卷积网络并行计算再拼接的方式将不同深度和宽度的特征融合到一起。再通过对卷积和残差做层归一化处理(layer normalization)防止模型的过拟合,使模型更快收敛,通过warm up提高训练速度,并且得到更好的准确率。同时,为了增加网络浅层(卷积核尺寸更大的卷积层)对深层(卷积核尺寸更小的卷积层)的影响,可以使用残差结构,将卷积的输入和输出做残差,并在每一个卷积层后以及残差叠加后使用层归一化处理,以减少过拟合问题。
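The passage above mentions warm up without specifying a schedule; the following is only one possible linear warm-up, with the base learning rate, the warm-up step count and the choice of Adam all assumed for illustration:

```python
import tensorflow as tf

class LinearWarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warm-up to a constant base learning rate; both values are assumed."""
    def __init__(self, base_lr=1e-3, warmup_steps=1000):
        self.base_lr = base_lr
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        return tf.minimum(self.base_lr * step / self.warmup_steps, self.base_lr)

optimizer = tf.keras.optimizers.Adam(learning_rate=LinearWarmUp())
```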
本实施例提供的文本分类方法,通过若确定卷积层的数量为多个,且各卷积层以卷积核尺寸从大至小的顺序连接;将词向量矩阵确定为卷积核尺寸最大的卷积层的输入矩阵;依次采用各卷积层对其输入矩阵执行第一操作,并将各卷积层执行第一操作得到的输出矩阵确定为下一卷积层的输入矩阵,直至采用卷积核尺寸最小的卷积层对其输入矩阵执行第一操作;将卷积核尺寸最小的卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。由于在卷积网络中将卷积的输入与输出计算残差,得到残差矩阵,并在每个卷积层和残差矩阵后进行层归一化处理,所以能够减少模型的过拟合问题,加速模型的训练和分类速度,同时,由于各卷积层以卷积核尺寸从大至小的顺序连接,依次使用各卷积层对词向量矩阵进行特征提取,能够先提取长文本特征,再将长文本特征融合到短文本特征, 所以,能够获得更准确的目标特征数据,进一步提高分类的准确度。
实施例五
本实施例提供的文本分类方法,在上述任意一个实施例的基础上,对步骤303进行细化,则步骤303细化包括步骤501。
步骤501,将特征矩阵中各向量在同一维度的最大值确定为子特征数据。
本实施例中,可以使用最大池化的方法对特征矩阵中各向量进行降维,将各向量在同一维度的最大值确定为子特征数据,以在降低特征的维度的同时,提取更好的、具有更强烈的语义信息的特征。继续根据上述示例进行举例说明,若特征矩阵为10*128的矩阵,可以求取矩阵各列的最大值,将各列的最大值作为降维后128维的向量中各维的值。
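A one-line numpy illustration of this max-pooling step, using a randomly filled 10*128 matrix as a stand-in for a real feature matrix produced by a convolutional network:

```python
import numpy as np

feature_matrix = np.random.rand(10, 128)   # stand-in for a 10x128 feature matrix

# Keep the maximum of each of the 128 feature dimensions across the 10 rows
sub_feature = feature_matrix.max(axis=0)
print(sub_feature.shape)                   # (128,), the sub-feature data
```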
本实施例提供的文本分类方法,通过将特征矩阵中各向量在同一维度的最大值确定为子特征数据,由于在降低特征的维度的同时,提取更好的、具有更强烈的语义信息的特征,所以能够获得更准确的子特征数据,进一步提高分类的准确度。
作为一种可选的实施方式,在上述任意一个实施例的基础上,对步骤203中根据目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别进行细化,则步骤203细化包括步骤2031。
步骤2031,将目标特征数据对应各意图类别中概率最大的类别确定为目标意图类别。
本实施例中,全连接层将目标特征数据对应至各目标意图类别,获得目标特征数据属于各意图类别的概率,以实现将待分类文本在语义层面的分类。可以理解的是,待分类文本对应所有可能的意图类别的概率之和为1,因此,可以将目标特征数据对应各意图类别中概率最大的类别确定为目标意图类别,以使得目标意图类别尽可能与待分类文本真实的意图类别相同。
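A minimal sketch of selecting the target intent category from the softmax probabilities; the category names and probability values below are purely illustrative assumptions:

```python
import numpy as np

intent_names = ['打开空调', '关闭空调', '调高温度']   # assumed intent categories
probabilities = np.array([0.12, 0.05, 0.83])        # softmax output, sums to 1

# The category with the largest probability is taken as the target intent
target_intent = intent_names[int(np.argmax(probabilities))]
print(target_intent)   # 调高温度
```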
实施例六
图6是根据本公开第六实施例提供的文本分类装置框图,如图6所示,本实施例提供的文本分类装置70包括接收模块71、获取模块72、确定模块73以及控制模块74。
接收模块71,用于接收用户发出的语音交互指令,并将语音交互指令转换为待分类文本。
获取模块72,用于采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取待分类文本的子特征数据,并将各子特征数据进行拼接,以获得目标特征数据,卷积网络包括池化层和至少一个卷积层。
确定模块73,用于将目标特征数据输入已训练的改进卷积神经网络模型的全连接层 中,根据目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别。
控制模块74,用于根据目标意图类别控制目标家居设备执行对应的操作。
本实施例提供的文本分类装置可以执行上述实施例一提供的文本分类方法,具体的实现方式与原理类似,在此不做赘述。
作为一种可选的实施方式,已训练的改进卷积神经网络模型还包括嵌入层,文本分类装置还包括第二获取模块75。第二获取模块75用于,采用已训练的改进卷积神经网络模型中的嵌入层对待分类文本进行分词处理并确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵。
作为一种可选的实施方式,第二获取模块75具体用于,根据训练获得的预设字典确定待分类文本中各文字或单词对应的词向量,词向量用于唯一标识预设字典中的文字或单词;将各文字或单词对应的词向量按照待分类文本中的顺序进行拼接,以获得待分类文本对应的词向量矩阵。
作为一种可选的实施方式,获取模块72具体用于,将词向量矩阵分别输入至少两个不同的卷积网络;针对每个卷积网络,采用对应的至少一个卷积层对词向量矩阵进行特征提取,以获得词向量矩阵的特征矩阵;采用各卷积网络的池化层对特征矩阵进行降维,以获得子特征数据,子特征数据为向量形式,且子特征数据对应的向量与词向量矩阵中的向量具有相同的维度。
作为一种可选的实施方式,获取模块72具体还用于,确定卷积网络中卷积层的数量;若确定卷积层的数量为一个,则将词向量矩阵确定为卷积层的输入矩阵,并采用卷积层对其输入矩阵执行第一操作;将卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。第一操作包括:以卷积层预设的步长、卷积核尺寸和卷积核填补方式对输入矩阵进行卷积计算,以获得第一矩阵;对第一矩阵进行层归一化处理,以获得第二矩阵;将第二矩阵输入预设激活函数,并采用预设激活函数输出第三矩阵;将输入矩阵与第三矩阵进行求和,以获得残差矩阵;将残差矩阵进行层归一化处理,以获得输出矩阵。
作为一种可选的实施方式,若确定卷积层的数量为多个,且各卷积层以卷积核尺寸从大至小的顺序连接;获取模块72具体还用于,将词向量矩阵确定为卷积核尺寸最大的卷积层的输入矩阵;依次采用各卷积层对其输入矩阵执行第一操作,并将各卷积层执行第一操作得到的输出矩阵确定为下一卷积层的输入矩阵,直至采用卷积核尺寸最小的卷积层对其输入矩阵执行第一操作;将卷积核尺寸最小的卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。
作为一种可选的实施方式,获取模块72具体还用于,将特征矩阵中各向量在同一维度的最大值确定为子特征数据。
作为一种可选的实施方式,获取模块72具体还用于,将目标特征数据对应各意图类别中概率最大的类别确定为目标意图类别。
本实施例提供的文本分类装置可以执行上述实施例二至六中任意一个提供的文本分类方法,具体的实现方式与原理类似,此处不再赘述。
实施例七
图7是根据本公开第七实施例提供的电子装置框图,如图7所示,本实施例提供的电子装置80包括电路互连的存储器81、处理器82和输入装置83。
存储器81中存储有计算机程序。
输入装置83用于接收用户发出的语音交互指令。
处理器82被设置为通过计算机程序执行如上述任意一个实施例提供的文本分类方法。
本公开还提供一种计算机可读的存储介质,计算机可读的存储介质包括存储的程序,程序运行时执行如上述任意一个实施例提供的文本分类方法。
存储器81可以是任何适当的存储介质,比如,阻变式存储器RRAM(Resistive Random Access Memory)、动态随机存取存储器DRAM(Dynamic Random Access Memory)、静态随机存取存储器SRAM(Static Random-Access Memory)、增强动态随机存取存储器EDRAM(Enhanced Dynamic Random Access Memory)、高带宽内存HBM(High-Bandwidth Memory)、混合存储立方HMC(Hybrid Memory Cube)等等。
处理器82可以是适当的硬件处理器,比如CPU(central processing unit)、GPU(graphics processing unit)、FPGA(Field Programmable Gate Array)、DSP(Digital Signal Processing)和ASIC(Application Specific Integrated Circuit)等等。
输入装置83可以是适当的麦克风、麦克风阵列、输入/输出(I/O)接口、通信组件等能够用于接收用户发送的语音交互指令的设备、组件或模块。
本公开还提供一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现如上述实施例一至六中任意一个提供的文本分类方法。
本公开还提供一种计算机程序,包括:该计算机程序被处理器执行时实现如上述实施例一至六中任意一个提供的文本分类方法。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本公开并不受所描述的动作顺序的限制,因为 依据本公开,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本公开所必须的。
进一步需要说明的是,虽然流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
应该理解,上述的装置实施例仅是示意性的,本公开的装置还可通过其它的方式实现。例如,上述实施例中模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。例如,多个模块可以结合,或者可以集成到另一个系统,或一些特征可以忽略或不执行。
另外,若无特别说明,在本公开各个实施例中的各功能模块可以集成在一个模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一起。上述集成的模块既可以采用硬件的形式实现,也可以采用软件程序模块的形式实现。集成的单元/模块如果以硬件的形式实现时,该硬件可以是数字电路,模拟电路等等。硬件结构的物理实现包括但不局限于晶体管,忆阻器等等。
集成的模块如果以软件程序模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本公开各个实施例方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。上述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述仅是本公开的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本公开原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本公开的保护范围。

Claims (20)

  1. 一种文本分类方法,包括:
    接收用户发出的语音交互指令,并将所述语音交互指令转换为待分类文本;
    采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取所述待分类文本的子特征数据,并将各所述子特征数据进行拼接,以获得目标特征数据,所述卷积网络包括池化层和至少一个卷积层;
    将目标特征数据输入所述已训练的改进卷积神经网络模型的全连接层中,根据所述目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别;
    根据所述目标意图类别控制目标家居设备执行对应的操作。
  2. 根据权利要求1所述的方法,其中,所述已训练的改进卷积神经网络模型还包括嵌入层;
    所述采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取所述待分类文本的子特征数据之前,还包括:
    采用已训练的改进卷积神经网络模型中的嵌入层对所述待分类文本进行分词处理并确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵。
  3. 根据权利要求2所述的方法,其中,所述确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵,包括:
    根据训练获得的预设字典确定待分类文本中各文字或单词对应的词向量,所述词向量用于唯一标识预设字典中的文字或单词;
    将各文字或单词对应的词向量按照待分类文本中的顺序进行拼接,以获得待分类文本对应的词向量矩阵。
  4. 根据权利要求2或3所述的方法,其中,所述采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取所述待分类文本的子特征数据,包括:
    将所述词向量矩阵分别输入至少两个不同的卷积网络;
    针对每个卷积网络,采用对应的至少一个卷积层对所述词向量矩阵进行特征提取,以获得词向量矩阵的特征矩阵;
    采用各卷积网络的池化层对所述特征矩阵进行降维,以获得所述子特征数据,所述子特征数据为向量形式,且所述子特征数据对应的向量与词向量矩阵中的向量具有相同的维度。
  5. 根据权利要求4所述的方法,其中,所述采用对应的至少一个卷积层对所述词向 量矩阵进行特征提取,以获得词向量矩阵的特征矩阵,包括:
    确定卷积网络中卷积层的数量;
    若确定卷积层的数量为一个,则将所述词向量矩阵确定为卷积层的输入矩阵,并采用卷积层对其输入矩阵执行第一操作;
    将卷积层执行所述第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵;
    所述第一操作包括:
    以卷积层预设的步长、卷积核尺寸和卷积核填补方式对输入矩阵进行卷积计算,以获得第一矩阵;
    对第一矩阵进行层归一化处理,以获得第二矩阵;
    将所述第二矩阵输入预设激活函数,并采用预设激活函数输出第三矩阵;
    将所述输入矩阵与第三矩阵进行求和,以获得残差矩阵;
    将残差矩阵进行层归一化处理,以获得输出矩阵。
  6. 根据权利要求5所述的方法,其中,还包括:
    若确定卷积层的数量为多个,且各卷积层以卷积核尺寸从大至小的顺序连接;
    将所述词向量矩阵确定为卷积核尺寸最大的卷积层的输入矩阵;
    依次采用各卷积层对其输入矩阵执行所述第一操作,并将各卷积层执行所述第一操作得到的输出矩阵确定为下一卷积层的输入矩阵,直至采用卷积核尺寸最小的卷积层对其输入矩阵执行所述第一操作;
    将卷积核尺寸最小的卷积层执行所述第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。
  7. 根据权利要求3至6中任一项所述的方法,其中,所述采用各卷积网络的池化层对所述特征矩阵进行降维,以获得所述子特征数据,包括:
    将特征矩阵中各向量在同一维度的最大值确定为所述子特征数据。
  8. 根据权利要求1至7中任一项所述的方法,其中,所述根据所述目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别,包括:
    将目标特征数据对应各意图类别中概率最大的类别确定为目标意图类别。
  9. 一种文本分类装置,包括:
    接收模块,用于接收用户发出的语音交互指令,并将所述语音交互指令转换为待分类文本;
    获取模块,用于采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分 别提取所述待分类文本的子特征数据,并将各所述子特征数据进行拼接,以获得目标特征数据,所述卷积网络包括池化层和至少一个卷积层;
    确定模块,用于将目标特征数据输入所述已训练的改进卷积神经网络模型的全连接层中,根据所述目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别;
    控制模块,用于根据所述目标意图类别控制目标家居设备执行对应的操作。
  10. 根据权利要求9所述的文本分类装置,其中,所述已训练的改进卷积神经网络模型还包括嵌入层,文本分类装置还包括第二获取模块;
    第二获取模块,用于采用已训练的改进卷积神经网络模型中的嵌入层对待分类文本进行分词处理并确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵。
  11. 根据权利要求10所述的文本分类装置,其中,所述第二获取模块在用于确定各文字或单词对应的词向量,以获得待分类文本对应的词向量矩阵时,具体用于,
    根据训练获得的预设字典确定待分类文本中各文字或单词对应的词向量,词向量用于唯一标识预设字典中的文字或单词;
    将各文字或单词对应的词向量按照待分类文本中的顺序进行拼接,以获得待分类文本对应的词向量矩阵。
  12. 根据权利要求10或11所述的文本分类装置,其中,所述获取模块在用于采用已训练的改进卷积神经网络模型中的至少两个不同的卷积网络分别提取所述待分类文本的子特征数据时,具体用于,
    将词向量矩阵分别输入至少两个不同的卷积网络;
    针对每个卷积网络,采用对应的至少一个卷积层对词向量矩阵进行特征提取,以获得词向量矩阵的特征矩阵;
    采用各卷积网络的池化层对特征矩阵进行降维,以获得子特征数据,子特征数据为向量形式,且子特征数据对应的向量与词向量矩阵中的向量具有相同的维度。
  13. 根据权利要求12所述的文本分类装置,其中,所述获取模块在用于采用对应的至少一个卷积层对所述词向量矩阵进行特征提取,以获得词向量矩阵的特征矩阵时,具体用于,
    确定卷积网络中卷积层的数量;
    若确定卷积层的数量为一个,则将所述词向量矩阵确定为卷积层的输入矩阵,并采用卷积层对其输入矩阵执行第一操作;
    将卷积层执行所述第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵;
    所述第一操作包括:
    以卷积层预设的步长、卷积核尺寸和卷积核填补方式对输入矩阵进行卷积计算,以获得第一矩阵;
    对第一矩阵进行层归一化处理,以获得第二矩阵;
    将所述第二矩阵输入预设激活函数,并采用预设激活函数输出第三矩阵;
    将所述输入矩阵与第三矩阵进行求和,以获得残差矩阵;
    将残差矩阵进行层归一化处理,以获得输出矩阵。
  14. 根据权利要求13所述的文本分类装置,其中,若确定卷积层的数量为多个,且各卷积层以卷积核尺寸从大至小的顺序连接;则所述获取模块具体还用于,
    将词向量矩阵确定为卷积核尺寸最大的卷积层的输入矩阵;
    依次采用各卷积层对其输入矩阵执行第一操作,并将各卷积层执行第一操作得到的输出矩阵确定为下一卷积层的输入矩阵,直至采用卷积核尺寸最小的卷积层对其输入矩阵执行第一操作;
    将卷积核尺寸最小的卷积层执行第一操作得到的输出矩阵确定为词向量矩阵的特征矩阵。
  15. 根据权利要求11至14中任一项所述的文本分类装置,其中,所述获取模块在用于采用各卷积网络的池化层对所述特征矩阵进行降维,以获得所述子特征数据时,具体还用于,
    将特征矩阵中各向量在同一维度的最大值确定为子特征数据。
  16. 根据权利要求9至15中任一项所述的文本分类装置,其中,所述获取模块在用于根据所述目标特征数据对待分类文本进行意图类别的分类,以确定对应的目标意图类别时,具体用于,将目标特征数据对应各意图类别中概率最大的类别确定为目标意图类别。
  17. 一种电子装置,包括存储器、处理器和输入装置,所述存储器中存储有计算机程序,所述输入装置用于接收用户发出的语音交互指令,所述处理器被设置为通过所述计算机程序执行权利要求1至8中任一项所述的方法。
  18. 一种计算机可读的存储介质,计算机可读的存储介质包括存储的程序,所述程序运行时执行如权利要求1至8中任一项所述的方法。
  19. 一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现权利要求1至8中任一项所述的方法。
  20. 一种计算机程序,包括:该计算机程序被处理器执行时实现权利要求1至8中任一项所述的方法。
PCT/CN2022/095743 2022-03-16 2022-05-27 文本分类方法、文本分类装置、存储介质及电子装置 WO2023173593A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210259093.4 2022-03-16
CN202210259093.4A CN114936280A (zh) 2022-03-16 2022-03-16 文本分类方法、文本分类装置、存储介质及电子装置

Publications (1)

Publication Number Publication Date
WO2023173593A1 true WO2023173593A1 (zh) 2023-09-21

Family

ID=82863376

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095743 WO2023173593A1 (zh) 2022-03-16 2022-05-27 文本分类方法、文本分类装置、存储介质及电子装置

Country Status (2)

Country Link
CN (1) CN114936280A (zh)
WO (1) WO2023173593A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115658886A (zh) * 2022-09-20 2023-01-31 广东技术师范大学 基于语义文本的智能肝癌分期方法、系统及介质
CN117708680B (zh) * 2024-02-06 2024-06-21 青岛海尔科技有限公司 一种用于提升分类模型准确度的方法及装置、存储介质、电子装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083700A (zh) * 2019-03-19 2019-08-02 北京中兴通网络科技股份有限公司 一种基于卷积神经网络的企业舆情情感分类方法及系统
CN112464674A (zh) * 2020-12-16 2021-03-09 四川长虹电器股份有限公司 一种字级别的文本意图识别方法
US20210134274A1 (en) * 2019-10-31 2021-05-06 Lg Electronics Inc. Device with convolutional neural network for acquiring multiple intent words, and method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116992032A (zh) * 2023-09-25 2023-11-03 之江实验室 基于模型自动量化的文本分类方法、系统和存储介质
CN116992032B (zh) * 2023-09-25 2024-01-09 之江实验室 基于模型自动量化的文本分类方法、系统和存储介质
CN118015419A (zh) * 2024-03-11 2024-05-10 安徽大学 基于小样本学习和多结构特征融合的有源干扰识别方法

Also Published As

Publication number Publication date
CN114936280A (zh) 2022-08-23

Similar Documents

Publication Publication Date Title
WO2023173593A1 (zh) 文本分类方法、文本分类装置、存储介质及电子装置
US20220147715A1 (en) Text processing method, model training method, and apparatus
CN107229684B (zh) 语句分类方法、系统、电子设备、冰箱及存储介质
CN108875074B (zh) 基于交叉注意力神经网络的答案选择方法、装置和电子设备
WO2021083239A1 (zh) 一种进行图数据查询的方法、装置、设备及存储介质
CN110083693B (zh) 机器人对话回复方法及装置
CN110427461A (zh) 智能问答信息处理方法、电子设备及计算机可读存储介质
JP7300435B2 (ja) 音声インタラクションするための方法、装置、電子機器、およびコンピュータ読み取り可能な記憶媒体
CN113392210A (zh) 文本分类方法、装置、电子设备及存储介质
WO2017193685A1 (zh) 社交网络中数据的处理方法和装置
CN109992788B (zh) 基于未登录词处理的深度文本匹配方法及装置
WO2022140900A1 (zh) 个人知识图谱构建方法、装置及相关设备
CN110390107B (zh) 基于人工智能的下文关系检测方法、装置及计算机设备
US11120214B2 (en) Corpus generating method and apparatus, and human-machine interaction processing method and apparatus
US20230297617A1 (en) Video retrieval method and apparatus, device, and storage medium
CN111161726A (zh) 一种智能语音交互方法、设备、介质及系统
JP2024512628A (ja) キャプション生成器を生成するための方法および装置、並びにキャプションを出力するための方法および装置
Song Sentiment analysis of Japanese text and vocabulary learning based on natural language processing and SVM
CN117113385B (zh) 一种应用于用户信息加密的数据提取方法及系统
CN113792594A (zh) 一种基于对比学习的视频中语言片段定位方法及装置
CN112735438A (zh) 一种在线声纹特征更新方法及设备、存储设备和建模设备
CN116975221A (zh) 文本阅读理解方法、装置、设备及存储介质
CN114547266B (zh) 信息生成模型的训练方法、生成信息的方法、装置和设备
CN111814469B (zh) 一种基于树型胶囊网络的关系抽取方法及装置
Jia et al. An optimized classification algorithm by neural network ensemble based on PLS and OLS

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22931618

Country of ref document: EP

Kind code of ref document: A1