CN109033088B - Neural network-based second language learning model - Google Patents


Info

Publication number
CN109033088B
Authority
CN
China
Prior art keywords
encoder
lstm
word
output
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811025138.1A
Other languages
Chinese (zh)
Other versions
CN109033088A (en)
Inventor
陆勇毅
秦龙
徐书尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Singsound Intelligent Technology Co ltd
Original Assignee
Beijing Singsound Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Singsound Intelligent Technology Co ltd filed Critical Beijing Singsound Intelligent Technology Co ltd
Priority to CN201811025138.1A priority Critical patent/CN109033088B/en
Publication of CN109033088A publication Critical patent/CN109033088A/en
Application granted granted Critical
Publication of CN109033088B publication Critical patent/CN109033088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a neural network-based second language learning model. The model comprises a context encoder, a linguistic feature encoder, a user information encoder and a question format encoder. The input features of the context encoder are words and letters; the input features of the linguistic feature encoder are the parts of speech and dependency labels of the corresponding words; the input features of the user information encoder are student ID information, learning duration and the student's nationality; and the input features of the question format encoder are the answer state, exercise type, answer time and answer mode. An adaptive learning system that can recommend learning materials according to students' actual needs has broad application prospects, can greatly improve students' learning efficiency and reduce teachers' burden.

Description

Neural network-based second language learning model
Technical Field
The invention relates to a neural network-based second language learning model.
Background
Second Language Acquisition Modeling (SLAM) is a task in the field of foreign-language learning: predicting whether a student will answer future exercises correctly based on the student's answer history. Research on SLAM is important for building intelligent adaptive learning systems for foreign-language learning.
Bayesian Knowledge Tracing (BKT) is a hidden Markov model that models students' knowledge. The model represents a student's mastery of a concept with a binary hidden state. BKT has been successfully applied to courses whose concepts and knowledge points are relatively few and can be predefined, such as mathematics and programming. In language learning, however, such as English learning, words themselves are important knowledge points, and the number of knowledge points is far larger than in subjects such as mathematics, so the binary hidden-state matrix formed by this method becomes very sparse. Modeling a student's language-learning process with this method therefore faces challenges.
Deep Knowledge Tracing (DKT) models the learning process with recurrent neural networks (Recurrent Neural Networks, RNN). In practice, however, a student's learning history is very long, and even an RNN or its variants LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) have difficulty remembering such a long history. Moreover, the conventional DKT model usually concatenates all features and feeds them into the RNN together; for language learning, flatly feeding word embeddings, linguistic features (part of speech, dependencies, etc.) and students' personal information into the network in this way is likely to make the model too dense to learn well.
BKT and DKT are common models for modeling students' learning histories, but when they are applied directly to SLAM in the field of foreign-language learning, the results are not ideal.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide an adaptive learning system that can recommend learning materials according to students' actual needs, which has broad application prospects, can greatly improve students' learning efficiency and reduce teachers' burden.
In order to achieve the above purpose, the present invention provides the following technical solution: a neural network-based second language learning model comprising a context encoder, a linguistic feature encoder, a user information encoder and a question format encoder. The input features of the context encoder are words and letters; the input features of the linguistic feature encoder are the parts of speech and dependency labels of the corresponding words; the input features of the user information encoder are student ID information, learning duration and the student's nationality; and the input features of the question format encoder are the answer state, exercise type, answer time and answer mode.
The invention is further provided with: the context encoder is composed of a word level encoder and a letter level encoder.
The invention is further provided with: the word level encoder is of a bidirectional LSTM structure.
The invention is further provided with: the structure of the letter level encoder is a hierarchical bidirectional LSTM structure.
The invention is further provided with: the linguistic feature encoder has an LSTM structure and takes the concatenated embedded representations of the part of speech and the dependency label as input.
The invention is further provided with: the user information encoder and the question format encoder are both fully connected neural network structures.
The invention is further provided with: the word-level encoder structure is expressed as follows. The word embedding of each word in $(w_1, w_2, \dots, w_N)$ is taken as input to a forward LSTM and a backward LSTM, and the outputs of the last layer of the forward and backward LSTMs are concatenated to obtain the output $g_t$ of the word-level encoder:

$\overrightarrow{g_t} = \overrightarrow{\mathrm{LSTM}}_{K_0}(w_t)$

$\overleftarrow{g_t} = \overleftarrow{\mathrm{LSTM}}_{K_0}(w_t)$

$g_t = [\overrightarrow{g_t}; \overleftarrow{g_t}]$

where $K_0$ denotes the number of LSTM layers;
the letter-level encoder structure is expressed as follows. The letter embeddings of each word are input into an LSTM to encode the word, where $K_1$ is the number of LSTM layers and $M$ is the number of letters in the word; the encoded output of each word is then passed through a Mean over Time layer to obtain $h_{w_t}$:

$h_{w_t} = \frac{1}{M}\sum_{i=1}^{M} h_{t,i}$

where $h_{t,i}$ is the output of the $K_1$-layer LSTM at the $i$-th letter of word $w_t$. Similar to the word-level encoder, $(h_{w_1}, h_{w_2}, \dots, h_{w_N})$ are input into a bidirectional LSTM, and the forward and backward outputs are concatenated:

$\overrightarrow{z_t} = \overrightarrow{\mathrm{LSTM}}_{K_2}(h_{w_t})$

$\overleftarrow{z_t} = \overleftarrow{\mathrm{LSTM}}_{K_2}(h_{w_t})$

$z_t = [\overrightarrow{z_t}; \overleftarrow{z_t}]$

where $K_2$ denotes the number of layers of the LSTM.

The final output of the context encoder is $O = (o_1, o_2, \dots, o_N)$, where

$o_t = [g_t; z_t]$.
The invention is further provided with: the linguistic feature encoder structure is expressed as follows. The embedded representations of the part of speech and the dependency label of each word are concatenated as input:

$x_t = [p_t; d_t]$

$l_t = \mathrm{LSTM}_{K_3}(x_t)$

$L = (l_1, l_2, \dots, l_N)$

where $K_3$ is the number of layers of the LSTM, $p_t$ and $d_t$ are the part-of-speech and dependency-label embeddings of word $w_t$, and $L$ is the output of the linguistic feature encoder.
The invention is further provided with: the user information encoder structure is expressed as follows:

$\mu_0 = [\mu; s; \mathit{days}]$

$\mu_j = \phi(W_u \cdot \mu_{j-1} + b_u), \quad j = 1, 2, \dots, K_4$

where $\mu$ is the embedded representation of the user, $s$ is the embedded representation of the user's nationality, $\mathit{days}$ is the user's learning duration, $\phi$ is the activation function, and $K_4$ is the number of layers of the neural network; $W_u$ and $b_u$ are parameters obtained by training.
The invention is further provided with: the question format encoder is expressed as:

$f_0 = [m; \mathit{sess}; c; t]$

$f_j = \phi(W_f \cdot f_{j-1} + b_f), \quad j = 1, 2, \dots, K_5$

where $m$ is the embedded representation of the exercise type, $\mathit{sess}$ is the embedded representation of the answer state, $c$ is the embedded representation of the answer mode, $t$ is the time spent answering, and $K_5$ is the number of layers of the question format encoder; $W_f$ and $b_f$ are parameters obtained by training.
The invention has the following advantages: an adaptive learning system that can recommend learning materials according to students' actual needs has broad application prospects, can greatly improve students' learning efficiency and reduce teachers' burden. A model that can predict from a student's learning history whether learning material is too difficult or too simple for that student is an important component of such an adaptive learning system.
Drawings
FIG. 1 is a diagram of a model structure of the present invention;
FIG. 2 is a hierarchical structure diagram of a letter-level encoder of the present invention;
FIG. 3 is a graph comparing AUC data of the present invention with baseline model data;
FIG. 4 is a graph comparing F1 data of the present invention with baseline model data.
Detailed Description
Referring to fig. 1 to 2, the neural network-based second language learning model of this embodiment comprises a context encoder, a linguistic feature encoder, a user information encoder and a question format encoder. The input features of the context encoder are words and letters; the input features of the linguistic feature encoder are the parts of speech and dependency labels of the corresponding words; the input features of the user information encoder are student ID information, learning duration and the student's nationality; and the input features of the question format encoder are the answer state, exercise type, answer time and answer mode (Table 1).
Table 1. Feature grouping

Context encoder: words, letters
Linguistic feature encoder: part of speech, dependency labels
User information encoder: student ID, learning duration, nationality
Question format encoder: answer state, exercise type, answer time, answer mode
The context encoder is composed of a word-level encoder and a letter-level encoder. The word-level encoder captures the semantic information of the context well; however, since learning new words is an important part of language learning, the letter-level encoder is used to capture the compositional information of words, which alleviates the OOV (Out of Vocabulary) problem to a certain extent.
As shown in fig. 2, the word-level encoder is constructed as a bi-directional LSTM structure.
The structure of the letter level encoder is a hierarchical bidirectional LSTM structure.
The linguistic feature encoder has an LSTM structure and takes the concatenated embedded representations of the part of speech and the dependency label as input.
The user information encoder and the question format encoder are both fully connected neural network structures.
The word-level encoder structure is expressed as follows. The word embedding of each word in $(w_1, w_2, \dots, w_N)$ is taken as input to a forward LSTM and a backward LSTM, and the outputs of the last layer of the forward and backward LSTMs are concatenated to obtain the output $g_t$ of the word-level encoder:

$\overrightarrow{g_t} = \overrightarrow{\mathrm{LSTM}}_{K_0}(w_t)$

$\overleftarrow{g_t} = \overleftarrow{\mathrm{LSTM}}_{K_0}(w_t)$

$g_t = [\overrightarrow{g_t}; \overleftarrow{g_t}]$

where $K_0$ denotes the number of LSTM layers;
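As an illustration, a minimal PyTorch sketch of such a word-level bidirectional LSTM encoder is given below; the class name, embedding and hidden dimensions, and the batching conventions are assumptions made for the sketch rather than details taken from this disclosure.

```python
import torch
import torch.nn as nn

class WordLevelEncoder(nn.Module):
    """Sketch of a K0-layer bidirectional LSTM over word embeddings;
    the forward and backward outputs are concatenated into g_t."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, k0=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=k0,
                              batch_first=True, bidirectional=True)

    def forward(self, word_ids):                 # word_ids: (batch, N)
        emb = self.embed(word_ids)               # (batch, N, emb_dim)
        g, _ = self.bilstm(emb)                  # (batch, N, 2 * hidden_dim)
        return g                                 # g[:, t, :] corresponds to g_t

# Hypothetical usage:
# enc = WordLevelEncoder(vocab_size=30000)
# g = enc(torch.randint(0, 30000, (8, 20)))     # batch of 8 sentences, 20 words each
```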
as shown in fig. 2, the letter-level encoder structure representation is input into one LSTM for each word's letter-embedded representation, encoding each word,
where K1 is the number of layers of the LSTM, M is the number of letters of the word,
then the code output of each word is passed through a Mean over Time layer to obtain h wt
Figure BDA0001788069380000064
Similar to the word level encoder, the word (h w1 ,h w2 ,...,h wN ) Input into a bi-directional LSTM, then concatenate the forward and backward outputs:
Figure BDA0001788069380000071
Figure BDA0001788069380000072
Figure BDA0001788069380000073
wherein K2 represents the number of layers of the LSTM layer.
The context encoder final output is:
O=(o 1 ,o 2 ,...,o N ),
Figure BDA0001788069380000074
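A companion sketch of the hierarchical letter-level encoder and of the concatenation that forms $o_t$ follows; padding every word to a fixed letter count M, the dimensions and the class names are again assumptions of the sketch.

```python
import torch
import torch.nn as nn

class LetterLevelEncoder(nn.Module):
    """Sketch of the hierarchical letter-level encoder: a K1-layer LSTM over the
    letters of each word, mean-over-time pooling per word, then a K2-layer
    bidirectional LSTM over the resulting word vectors."""

    def __init__(self, n_letters, letter_dim=30, char_hidden=64,
                 word_hidden=128, k1=1, k2=1):
        super().__init__()
        self.letter_embed = nn.Embedding(n_letters, letter_dim)
        self.char_lstm = nn.LSTM(letter_dim, char_hidden, num_layers=k1,
                                 batch_first=True)
        self.word_bilstm = nn.LSTM(char_hidden, word_hidden, num_layers=k2,
                                   batch_first=True, bidirectional=True)

    def forward(self, letter_ids):               # letter_ids: (batch, N, M)
        b, n, m = letter_ids.shape
        emb = self.letter_embed(letter_ids.reshape(b * n, m))  # (b*N, M, letter_dim)
        h, _ = self.char_lstm(emb)                             # (b*N, M, char_hidden)
        h_w = h.mean(dim=1).reshape(b, n, -1)                  # mean over time -> (b, N, char_hidden)
        z, _ = self.word_bilstm(h_w)                           # (b, N, 2 * word_hidden)
        return z

# The context encoder output concatenates word-level and letter-level codes:
# o = torch.cat([word_encoder(word_ids), letter_encoder(letter_ids)], dim=-1)
```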
The linguistic feature encoder structure is expressed as follows. The embedded representations of the part of speech and the dependency label of each word are concatenated as input:

$x_t = [p_t; d_t]$

$l_t = \mathrm{LSTM}_{K_3}(x_t)$

$L = (l_1, l_2, \dots, l_N)$

where $K_3$ is the number of layers of the LSTM, $p_t$ and $d_t$ are the part-of-speech and dependency-label embeddings of word $w_t$, and $L$ is the output of the linguistic feature encoder.
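The linguistic feature encoder can be sketched in the same style: part-of-speech and dependency-label embeddings are concatenated per word and fed to a K3-layer LSTM. The vocabulary sizes, dimensions and the use of a unidirectional LSTM here are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class LinguisticEncoder(nn.Module):
    """Sketch of the linguistic feature encoder: concatenated part-of-speech and
    dependency-label embeddings fed to a K3-layer LSTM."""

    def __init__(self, n_pos, n_dep, pos_dim=25, dep_dim=25, hidden_dim=64, k3=1):
        super().__init__()
        self.pos_embed = nn.Embedding(n_pos, pos_dim)
        self.dep_embed = nn.Embedding(n_dep, dep_dim)
        self.lstm = nn.LSTM(pos_dim + dep_dim, hidden_dim, num_layers=k3,
                            batch_first=True)

    def forward(self, pos_ids, dep_ids):         # both: (batch, N)
        x = torch.cat([self.pos_embed(pos_ids),
                       self.dep_embed(dep_ids)], dim=-1)  # (batch, N, pos_dim + dep_dim)
        l, _ = self.lstm(x)                               # (batch, N, hidden_dim)
        return l                                          # l[:, t, :] corresponds to l_t
```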
The user information encoder structure is expressed as follows:

$\mu_0 = [\mu; s; \mathit{days}]$

$\mu_j = \phi(W_u \cdot \mu_{j-1} + b_u), \quad j = 1, 2, \dots, K_4$

where $\mu$ is the embedded representation of the user, $s$ is the embedded representation of the user's nationality, $\mathit{days}$ is the user's learning duration, $\phi$ is the activation function, and $K_4$ is the number of layers of the neural network; $W_u$ and $b_u$ are parameters obtained by training.
The question format encoder is expressed as:

$f_0 = [m; \mathit{sess}; c; t]$

$f_j = \phi(W_f \cdot f_{j-1} + b_f), \quad j = 1, 2, \dots, K_5$

where $m$ is the embedded representation of the exercise type, $\mathit{sess}$ is the embedded representation of the answer state, $c$ is the embedded representation of the answer mode, $t$ is the time spent answering, and $K_5$ is the number of layers of the question format encoder; $W_f$ and $b_f$ are parameters obtained by training.
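Since the user information encoder and the question format encoder share the same fully connected shape, a single sketch can cover both. The activation function, the layer widths and the way scalar features (learning duration, answer time) are appended to the embedded features are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class FullyConnectedEncoder(nn.Module):
    """Sketch shared by the user information encoder and the question format
    encoder: a stack of fully connected layers over the concatenated inputs."""

    def __init__(self, in_dim, hidden_dim=64, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [nn.Linear(dims[j], dims[j + 1]) for j in range(num_layers)])

    def forward(self, x):                        # x: (batch, in_dim)
        for layer in self.layers:
            x = torch.tanh(layer(x))             # activation assumed, not specified here
        return x

# Hypothetical usage for the user information encoder:
# user_in = torch.cat([user_emb, nationality_emb, days.unsqueeze(-1)], dim=-1)
# mu = FullyConnectedEncoder(user_in.size(-1))(user_in)   # yields mu_{K4}
# The question format encoder applies the same structure to [m, sess, c, t].
```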
Decoder:

The outputs $(O, L, \mu_{K_4}, f_{K_5})$ of the respective encoders are input to the decoder. Assume the input word sequence is $(w_1, w_2, \dots, w_N)$, and let $p_t$ denote the prediction probability corresponding to $w_t$:
$v = \phi(W_v \cdot [\mu_{K_4}; f_{K_5}] + b_v)$

$\gamma_t = \phi(W_\gamma \cdot [o_t; l_t] + b_\gamma)$
$p_t = \sigma(W_p \cdot (v \odot \gamma_t) + b_p)$

where $W_v$, $b_v$, $W_\gamma$, $b_\gamma$, $W_p$ and $b_p$ are parameters obtained by training.
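A sketch of such a decoder follows. Reading $v$ as built from the user and format codes and $\gamma_t$ from the context and linguistic codes is an interpretation of the parameter names $W_v$, $W_\gamma$ and $W_p$ given above, so the code is illustrative rather than a definitive implementation of the patented decoder.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Sketch of the decoder: a per-word vector gamma_t built from the context and
    linguistic codes is element-wise multiplied by a per-student vector v built
    from the user and format codes, followed by a sigmoid output layer."""

    def __init__(self, ctx_dim, ling_dim, user_dim, fmt_dim, d_model=128):
        super().__init__()
        self.w_v = nn.Linear(user_dim + fmt_dim, d_model)
        self.w_gamma = nn.Linear(ctx_dim + ling_dim, d_model)
        self.w_p = nn.Linear(d_model, 1)

    def forward(self, o, l, mu, f):
        # o: (batch, N, ctx_dim), l: (batch, N, ling_dim)
        # mu: (batch, user_dim), f: (batch, fmt_dim)
        v = torch.tanh(self.w_v(torch.cat([mu, f], dim=-1)))         # (batch, d_model)
        gamma = torch.tanh(self.w_gamma(torch.cat([o, l], dim=-1)))  # (batch, N, d_model)
        p = torch.sigmoid(self.w_p(v.unsqueeze(1) * gamma))          # (batch, N, 1)
        return p.squeeze(-1)                                         # p[:, t] is p_t
```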
Loss function:
the loss function of the model is defined as follows:
[Equation image: definition of the model loss function]

where $d$ is a hyperparameter, $0 < d < 1$.
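The exact loss formula is not recoverable from the text here. One plausible reading is a binary cross-entropy over the per-word predictions in which $d$ weights one class and $1-d$ the other; the sketch below implements that reading and should be treated as an illustrative stand-in, not as the loss defined by this disclosure.

```python
import torch

def weighted_bce_loss(p, y, d=0.7):
    """Illustrative class-weighted binary cross-entropy: correct-answer terms are
    weighted by d and incorrect-answer terms by (1 - d). p, y: (batch, N)."""
    eps = 1e-8
    loss = -(d * y * torch.log(p + eps) + (1 - d) * (1 - y) * torch.log(1 - p + eps))
    return loss.mean()
```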
With the above technical scheme, as shown in fig. 3 and fig. 4, the features are grouped and each group is encoded with a neural network structure suited to its characteristics. The scheme achieves good results on the Duolingo SLAM datasets for all three languages (English, Spanish and French), with AUC and F1 far exceeding the baseline models.
1. All LSTM or BiLSTM structures in this scheme may be replaced by other (Bi)RNNs (Recurrent Neural Networks) and variants such as the (Bi)GRU.
2. Experiments show that the linguistic feature encoder has little influence on performance; the scheme may also remove the linguistic feature encoder or merge it into the context encoder.
3. In the context encoder, either the letter-level encoder or the word-level encoder alone can already achieve fairly good results, so one alternative is to use only one of them.
4. The letter-level encoder here uses a hierarchical RNN structure; however, a flattened structure may be used as an alternative.
5. In this scheme, a fully connected neural network structure is adopted for the user information encoder. In the study underlying this disclosure, modeling the user's answer history with an RNN was found to perform slightly worse than the disclosed scheme, but it is also a feasible alternative.
6. The question format encoder may also be replaced by other neural network structures.
The common features required by SLAM are grouped according to their characteristics into four sets: context-related, linguistic-feature-related, user-related, and features related to question type and answer environment. Each group is encoded with a neural network structure suited to the characteristics of its features. This keeps the feature groups relatively independent and lets the network architecture better fit the actual structure of the data, so the network converges to a better result more easily.
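Putting the pieces together, the grouping described above corresponds to four independent encoders feeding a single decoder. The sketch below wires the modules sketched earlier into one forward pass; the model name, dimensions and feature preprocessing remain assumptions.

```python
import torch
import torch.nn as nn

class SLAMModel(nn.Module):
    """Sketch of the overall model: four feature-group encoders and a decoder
    that predicts, per word, the probability of a correct answer."""

    def __init__(self, vocab_size, n_letters, n_pos, n_dep, user_in_dim, fmt_in_dim):
        super().__init__()
        self.word_enc = WordLevelEncoder(vocab_size)          # 2 * 128 = 256 dims
        self.letter_enc = LetterLevelEncoder(n_letters)       # 2 * 128 = 256 dims
        self.ling_enc = LinguisticEncoder(n_pos, n_dep)       # 64 dims
        self.user_enc = FullyConnectedEncoder(user_in_dim)    # 64 dims
        self.fmt_enc = FullyConnectedEncoder(fmt_in_dim)      # 64 dims
        self.decoder = Decoder(ctx_dim=256 + 256, ling_dim=64,
                               user_dim=64, fmt_dim=64)

    def forward(self, word_ids, letter_ids, pos_ids, dep_ids, user_feats, fmt_feats):
        o = torch.cat([self.word_enc(word_ids),
                       self.letter_enc(letter_ids)], dim=-1)  # context codes o_t
        l = self.ling_enc(pos_ids, dep_ids)                   # linguistic codes l_t
        mu = self.user_enc(user_feats)                        # user vector mu_{K4}
        f = self.fmt_enc(fmt_feats)                           # format vector f_{K5}
        return self.decoder(o, l, mu, f)                      # p_t for each word
```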
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above example; all technical solutions within the concept of the present invention fall within its protection scope. It should be noted that modifications and adaptations made by those skilled in the art without departing from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.

Claims (5)

1. An adaptive learning system, characterized in that: the adaptive learning system comprises a neural network-based second language learning model, wherein the neural network-based second language learning model comprises a context encoder, a linguistic feature encoder, a user information encoder, a question format encoder and a decoder; the input features of the context encoder are words and letters, the input features of the linguistic feature encoder are the parts of speech and dependency labels of the corresponding words, the input features of the user information encoder are student ID information, learning duration and the student's nationality, and the input features of the question format encoder are the answer state, exercise type, answer time and answer mode; the input of the decoder comprises the output of the context encoder, the output of the linguistic feature encoder, the output of the user information encoder and the output of the question format encoder, and the output of the decoder comprises the prediction probabilities of the words;
the context encoder comprises a word-level encoder and a letter-level encoder, the word-level encoder has a bidirectional LSTM structure, the letter-level encoder has a hierarchical bidirectional LSTM structure, the linguistic feature encoder has an LSTM structure taking the concatenated embedded representations of the parts of speech and dependency labels as input, and the user information encoder and the question format encoder both have fully connected neural network structures.
2. An adaptive learning system as claimed in claim 1, wherein: the structure of the word-level encoder is expressed as follows: the word embedding of each word in $(w_1, w_2, \dots, w_N)$ is taken as input to a forward LSTM and a backward LSTM, and the outputs of the last layer of the forward and backward LSTMs are concatenated to obtain the output $g_t$ of the word-level encoder:

$\overrightarrow{g_t} = \overrightarrow{\mathrm{LSTM}}_{K_0}(w_t)$

$\overleftarrow{g_t} = \overleftarrow{\mathrm{LSTM}}_{K_0}(w_t)$

$g_t = [\overrightarrow{g_t}; \overleftarrow{g_t}]$

wherein $K_0$ denotes the number of LSTM layers;

the structure of the letter-level encoder is expressed as follows: the letter embeddings of each word in $(w_1, w_2, \dots, w_N)$ are input into an LSTM, each word is encoded, and the encoded output of each word is passed through a Mean over Time layer to obtain $(h_{w_1}, h_{w_2}, \dots, h_{w_N})$, wherein

$h_{w_t} = \frac{1}{M}\sum_{i=1}^{M} h_{t,i}$

$K_1$ is the number of LSTM layers, $M$ is the number of letters of the word, and $h_{t,i}$ is the output of the $K_1$-layer LSTM at the $i$-th letter of word $w_t$; $(h_{w_1}, h_{w_2}, \dots, h_{w_N})$ are then input into a forward LSTM and a backward LSTM, and the outputs of the forward and backward LSTMs are concatenated to obtain the output of the letter-level encoder:

$\overrightarrow{z_t} = \overrightarrow{\mathrm{LSTM}}_{K_2}(h_{w_t})$

$\overleftarrow{z_t} = \overleftarrow{\mathrm{LSTM}}_{K_2}(h_{w_t})$

$z_t = [\overrightarrow{z_t}; \overleftarrow{z_t}]$

wherein $K_2$ denotes the number of layers of the LSTM;

the final output of the context encoder is $O = (o_1, o_2, \dots, o_N)$, wherein

$o_t = [g_t; z_t]$.
3. An adaptive learning system as claimed in claim 1, wherein: the linguistic feature encoder structure is expressed as concatenating the embedded representations of the part of speech and the dependency label as input:

$x_t = [p_t; d_t]$

$l_t = \mathrm{LSTM}_{K_3}(x_t)$

$L = (l_1, l_2, \dots, l_N)$

wherein $K_3$ is the number of layers of the LSTM, $p_t$ and $d_t$ are the part-of-speech and dependency-label embeddings of word $w_t$, and $L$ is the output of the linguistic feature encoder.
4. An adaptive learning system as claimed in claim 1, wherein: the user information encoder structure is expressed as:

$u_0 = [u; s; \mathit{days}]$

$u_j = \phi(W_u \cdot u_{j-1} + b_u), \quad j = 1, 2, \dots, K_4$

wherein $u$ is the embedded representation of the user, $s$ is the embedded representation of the user's nationality, $\mathit{days}$ is the learning duration of the user, $\phi$ is the activation function, and $K_4$ is the number of layers of the neural network; $W_u$ and $b_u$ are parameters obtained by training.
5. An adaptive learning system as claimed in claim 1, wherein: the question format encoder is expressed as:

$f_0 = [m; \mathit{sess}; c; t]$

$f_j = \phi(W_f \cdot f_{j-1} + b_f), \quad j = 1, 2, \dots, K_5$

wherein $m$ is the embedded representation of the question type, $\mathit{sess}$ is the embedded representation of the answer state, $c$ is the embedded representation of the answer mode, $t$ is the time spent answering, $\phi$ is the activation function, and $K_5$ is the number of layers of the question format encoder; $W_f$ and $b_f$ are parameters obtained by training.
CN201811025138.1A 2018-09-04 2018-09-04 Neural network-based second language learning model Active CN109033088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811025138.1A CN109033088B (en) 2018-09-04 2018-09-04 Neural network-based second language learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811025138.1A CN109033088B (en) 2018-09-04 2018-09-04 Neural network-based second language learning model

Publications (2)

Publication Number Publication Date
CN109033088A CN109033088A (en) 2018-12-18
CN109033088B true CN109033088B (en) 2023-05-30

Family

ID=64623239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811025138.1A Active CN109033088B (en) 2018-09-04 2018-09-04 Neural network-based second language learning model

Country Status (1)

Country Link
CN (1) CN109033088B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223553B (en) * 2019-05-20 2021-08-10 北京师范大学 Method and system for predicting answer information


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101203895A (en) * 2005-04-05 2008-06-18 Ai有限公司 Systems and methods for semantic knowledge assessment, instruction and acquisition
CN106126507A (en) * 2016-06-22 2016-11-16 哈尔滨工业大学深圳研究生院 A kind of based on character-coded degree of depth nerve interpretation method and system
CN107357789A (en) * 2017-07-14 2017-11-17 哈尔滨工业大学 Merge the neural machine translation method of multi-lingual coding information
CN107578106A (en) * 2017-09-18 2018-01-12 中国科学技术大学 A kind of neutral net natural language inference method for merging semanteme of word knowledge

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CLUF: a Neural Model for Second Language Acquisition Modeling; Shuyao Xu et al.; Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications; 2018-06-05; pp. 374-380 *

Also Published As

Publication number Publication date
CN109033088A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN113158665B (en) Method for improving dialog text generation based on text abstract generation and bidirectional corpus generation
CN112364660B (en) Corpus text processing method, corpus text processing device, computer equipment and storage medium
CN107408111A (en) End-to-end speech recognition
Vousden et al. Simplifying reading: Applying the simplicity principle to reading
CN112037773B (en) N-optimal spoken language semantic recognition method and device and electronic equipment
CN111460176A (en) Multi-document machine reading understanding method based on Hash learning
CN111382231B (en) Intention recognition system and method
CN112699688B (en) Text generation method and system with controllable discourse relation
CN107657313B (en) System and method for transfer learning of natural language processing task based on field adaptation
CN113987179A (en) Knowledge enhancement and backtracking loss-based conversational emotion recognition network model, construction method, electronic device and storage medium
CN111563146B (en) Difficulty controllable problem generation method based on reasoning
CN112541060A (en) End-to-end task type dialogue learning framework and method based on confrontation training
WO2023231513A1 (en) Conversation content generation method and apparatus, and storage medium and terminal
CN112182161A (en) Personalized dialogue generation method and system based on user dialogue history
CN113420557B (en) Chinese named entity recognition method, system, equipment and storage medium
CN113239174A (en) Hierarchical multi-round conversation generation method and device based on double-layer decoding
Li et al. Multi-level gated recurrent neural network for dialog act classification
CN116049387A (en) Short text classification method, device and medium based on graph convolution
CN115906816A (en) Text emotion analysis method of two-channel Attention model based on Bert
CN115294627A (en) Text-driven multi-modal emotion analysis method and device for learner
CN113743095B (en) Chinese problem generation unified pre-training method based on word lattice and relative position embedding
CN109033088B (en) Neural network-based second language learning model
Wan et al. Improved dynamic memory network for dialogue act classification with adversarial training
CN112257432A (en) Self-adaptive intention identification method and device and electronic equipment
Wang et al. An Interactive Adversarial Reward Learning-Based Spoken Language Understanding System.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 155, bungalow 17, No. 12, Jiancai Chengzhong Road, Xisanqi, Haidian District, Beijing 100096

Applicant after: BEIJING SINGSOUND INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: 100000 No. 38, 2f, block B, building 1, yard 2, Yongcheng North Road, Haidian District, Beijing

Applicant before: BEIJING SINGSOUND EDUCATION TECHNOLOGY CO.,LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant