CN115811630A - Education informatization method based on artificial intelligence - Google Patents

Education informatization method based on artificial intelligence

Info

Publication number
CN115811630A
CN115811630A
Authority
CN
China
Prior art keywords
text
model
improved
output
communication area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310083977.3A
Other languages
Chinese (zh)
Other versions
CN115811630B (en)
Inventor
康凤
黄浩坤
郑伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Aeronautic Polytechnic
Original Assignee
Chengdu Aeronautic Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Aeronautic Polytechnic filed Critical Chengdu Aeronautic Polytechnic
Priority to CN202310083977.3A
Publication of CN115811630A
Application granted
Publication of CN115811630B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses an artificial intelligence-based education informatization method, which comprises the following steps: training a deep learning model, wherein the deep learning model comprises an improved BERT model and a Bi-LSTM model; inputting text data published by a user in the interactive communication area of an online education platform into the trained deep learning model to obtain an output text; and displaying the obtained output text in the interactive communication area of the online education platform. Using a deep learning model for natural language processing, the invention converts the abusive language left by disruptive users in the interactive communication area of the online education platform into complimentary language and displays it there, thereby protecting teachers' enthusiasm for teaching and driving the disruptors away.

Description

Education informatization method based on artificial intelligence
Technical Field
The invention relates to the field of artificial intelligence, in particular to an educational informatization method based on artificial intelligence.
Background
On the one hand, a major goal of artificial intelligence is to let machines understand human language and interact with humans in line with human thought and behavior. Natural language processing has therefore become an indispensable core of artificial intelligence. Natural language processing is a mode of human-computer interaction whose purpose is to have the computer use a language model to convert the input language into semantic symbols and relations and then produce different outputs for different tasks. It has two core tasks: enabling the machine to understand language, and generating appropriate language based on that understanding. Realizing artificial intelligence requires enormous computing power, so for a long time artificial intelligence remained an attractive theory confined to textbooks. In recent years, however, with the miniaturization of chips, computing power has improved dramatically and even small chips provide substantial compute, so artificial intelligence and natural language processing have finally moved from theory into practice.
On the other hand, online education has become a trend, and lecturing and learning over the network are now the most common modes of teaching interaction, allowing teachers and students to teach and learn remotely without leaving home. Although online education and online learning improve teaching efficiency, some of the problems arising from them deserve attention. In online classrooms, disruptive users appear whose purpose is not to learn knowledge but to cause trouble; they often leave abusive messages in the communication or comment areas to disturb classroom order. Many teachers suffer from this, their enthusiasm for teaching drops, and classroom efficiency falls.
Disclosure of Invention
Aiming at the defects in the prior art, the education informatization method based on artificial intelligence provided by the invention solves the problem that abusive language in the interactive communication area of an online education platform lowers teachers' enthusiasm for teaching and reduces classroom efficiency.
In order to achieve the purpose of the invention, the invention adopts the following technical scheme: an artificial intelligence based education informatization method, comprising the following steps:
s1: training a deep learning model, wherein the deep learning model comprises an improved BERT model and a Bi-LSTM model;
s2: inputting text data published in an interactive communication area of an online education platform by a user into a trained deep learning model to obtain an output text;
s3: and displaying the obtained output text in an interactive communication area of the online education platform.
The beneficial effect of the above scheme is: through the technical scheme, the abusive language posted by disruptive users in the platform's interactive communication area is converted into complimentary language, healthy classroom discipline is maintained, and teachers' enthusiasm for teaching and classroom efficiency are improved.
Further, the following sub-steps are included in S1:
s1-1: acquiring historical text data of an interactive communication area of an online education platform and preprocessing the historical text data to obtain preprocessed text data;
s1-2: inputting the preprocessed text data into an improved Attention model to obtain a first text code;
s1-3: inputting the first text code into the improved BERT model, and pre-training the improved BERT model;
s1-4: fine-tuning the improved BERT model to obtain a trained improved BERT model;
s1-5: splicing the first text code and the output of the trained improved BERT model to obtain a second text code;
s1-6: and inputting the second text code into the Bi-LSTM model to finish the training of the Bi-LSTM model.
The beneficial effects of the further scheme are as follows: through the technical scheme, the historical text data is input into the Attention model after being preprocessed, and training of the improved BERT model and the Bi-LSTM model is completed after a series of processing.
Further, the preprocessing in S1-1 comprises the following substeps:
s1-1-1: carrying out regularization processing on the historical text data, wherein the regularization processing comprises word segmentation and punctuation mark removal;
s1-1-2: and performing text vectorization processing on the regularized text data to obtain word vectors for the segmented words, yielding the preprocessed text data.
The further beneficial effects are as follows: by the technical scheme, the historical text data are regularized and vectorized, and the preprocessing of the historical text data is completed.
Further, the S1-2 comprises the following sub-steps:
s1-2-1: calculating the norm of the current word vector of the text data and each word vector in the context, and comparing the norm with a preset threshold value;
s1-2-2: removing the word vectors with the norm of the current word vector larger than a preset threshold value from the context to obtain an improved context;
s1-2-3: inputting the current word vector and the improved context into an Attention model to obtain a first text encoding word vector corresponding to the current word vector, and obtaining a first text code according to the first text encoding word vector.
The beneficial effects of the further scheme are as follows: by the technical scheme, the word vectors with the norm larger than the threshold are removed, the improved context is obtained, the improved context and the current word vector are input into the Attention model, the first text code is obtained, and the pre-training of the improved BERT model is conveniently completed subsequently.
Further, the S1-3 comprises the following sub-steps:
s1-3-1: performing double random processing on the word vector, randomly obtaining a word vector R and any word vector K in the first text code, and replacing K with R to obtain the first text code after the double random processing;
s1-3-2: inputting the first text code after the double random processing into an improved BERT model, and taking out a vector Kc corresponding to a word vector K in an output sequence;
s1-3-3: performing linear transformation on the vector Kc to obtain a vector K1, and performing softmax transformation on the vector K1 to obtain an output vector Km with the maximum probability;
s1-3-4: and determining parameters of the improved BERT model by comparing the output vector Km with the word vector K, and finishing the pre-training of the improved BERT model.
The beneficial effects of the further scheme are as follows: through the technical scheme, the first text code is input into the improved BERT model, and the pre-training of the improved BERT model is completed through a series of processing.
Further, the following sub-steps are included in S1-4:
s1-4-1: labeling the preprocessed text, and marking each sentence in the preprocessed text with a sentence type label and a score label, wherein the sentence type label comprises profanity, neutrality and praise, the score label comprises 0-9 scores, and the score corresponding to the neutral label is 0 score;
s1-4-2: inputting the preprocessed text into the pre-trained improved BERT model, and outputting a corresponding sentence type label and a corresponding score label;
s1-4-3: and comparing the sentence type label and the score label output by the improved BERT model with the original sentence type label and score label to finish the fine-tuning of the improved BERT model and obtain the trained improved BERT model.
The beneficial effects of the above further scheme are: through the technical scheme, the preprocessed text is labeled and input into the pre-trained improved BERT model, completing the fine-tuning of the improved BERT model.
Further, the output of the Bi-LSTM model in S1-6 is specifically as follows:
(1) If the sentence type label of the second text code is praise or neutral, the output is the input text corresponding to the first text code of the second text code;
(2) If the sentence type label of the second text code is profanity and the score label is m, the output is a text whose sentence type label is praise and whose score label is m.
The beneficial effects of the further scheme are as follows: by the technical scheme, different input texts are obtained according to different sentence type labels.
Further, the text output in S2 is specifically:
(1) If the text published by the user in the interactive communication area of the online education platform is neutral or praise, the original text is obtained;
(2) If the text published by the user in the interactive communication area of the online education platform is profanity, a praise text is obtained according to the degree of profanity: the stronger the profanity, the stronger the praise.
The beneficial effects of the further scheme are as follows: through the technical scheme, different types of sentences published in the interactive communication area of the online education platform by the user are converted into different texts.
Further, the step of displaying the output text in the interactive communication area of the online education platform in S3 specifically includes:
(1) If the text published by the user in the online education platform interactive communication area is neutral or praise, displaying the text in the online education platform interactive communication area;
(2) If the text published by the user in the interactive communication area of the online education platform is profanity, the output text is emphasized, and the emphasized text is displayed in the interactive communication area of the online education platform, wherein the emphasis processing comprises text highlighting, text darkening or floating bullet-screen (danmaku) display.
The beneficial effects of the further scheme are as follows: through the technical scheme, different texts are output and displayed in the interactive communication area according to different types of sentences published in the interactive communication area of the online education platform by the user.
Drawings
FIG. 1 is a flow chart of a method for education informatization based on artificial intelligence.
FIG. 2 is a diagram of an improved Attention model architecture.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
As shown in fig. 1, a method for education informatization based on artificial intelligence comprises the following steps:
s1: training a deep learning model, wherein the deep learning model comprises an improved BERT model and a Bi-LSTM model;
s2: inputting text data published in an interactive communication area of an online education platform by a user into a trained deep learning model to obtain an output text;
s3: and displaying the obtained output text in an interactive communication area of the online education platform.
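For illustration only, the following Python sketch outlines how steps S1 to S3 could be orchestrated at run time; the helper names run_education_platform_filter, train_model and display are hypothetical stand-ins and are not part of the patent, which specifies only the three steps themselves.

from typing import Callable, List

def run_education_platform_filter(
        history: List[str],
        incoming_posts: List[str],
        train_model: Callable[[List[str]], Callable[[str], str]],
        display: Callable[[str], None]) -> None:
    """S1: train on historical posts; S2: convert each new post; S3: display the result."""
    model = train_model(history)              # S1: improved BERT + Bi-LSTM training
    for post in incoming_posts:               # S2: run each user post through the trained model
        output_text = model(post)
        display(output_text)                  # S3: show the output text in the communication area

# Trivial usage with identity stand-ins, purely to show the control flow:
if __name__ == "__main__":
    run_education_platform_filter(
        history=["example historical post"],
        incoming_posts=["the teacher explains clearly"],
        train_model=lambda hist: (lambda text: text),
        display=print)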
The S1 comprises the following steps:
s1-1: acquiring historical text data of an interactive communication area of an online education platform and preprocessing the historical text data to obtain preprocessed text data;
s1-2: inputting the preprocessed text data into an improved Attention model to obtain a first text code. As shown in fig. 2, the improved Attention model comprises a norm calculation unit and an ordinary Attention unit: the norm calculation unit first derives norms b1, b3, b5 and so on from the input vectors a1-an, and the Attention unit then produces the corresponding outputs c1, c3, c5 and so on, which constitute the first text code;
s1-3: inputting the first text code into the improved BERT model and pre-training the improved BERT model, wherein the Attention units adopted in the improved BERT model are consistent with the improved Attention model of FIG. 2; the consistency is structural only, since the parameters of the improved BERT model are obtained through training and therefore differ;
s1-4: fine-tuning the improved BERT model to obtain the trained improved BERT model, wherein during fine-tuning the initial parameters are no longer randomly generated but are set to the parameters obtained by the pre-trained improved BERT model in the pre-training process;
s1-5: splicing the first text code and the output of the trained improved BERT model to obtain a second text code;
s1-6: and inputting the second text code into the Bi-LSTM model to finish the training of the Bi-LSTM model.
The preprocessing in S1-1 comprises the following steps:
s1-1-1: carrying out regularization processing on the historical text data, wherein the regularization processing comprises word segmentation and punctuation mark removal;
s1-1-2: and performing text vectorization processing on the regularized text data to obtain word vectors for the segmented words, yielding the preprocessed text data.
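As a minimal sketch of this preprocessing, the following Python code assumes jieba for Chinese word segmentation and a toy hash-seeded random embedding in place of a trained word-vector model; both choices are illustrative assumptions rather than the patent's own tooling.

import re
import zlib
from typing import List

import jieba
import numpy as np

PUNCT_RE = re.compile(r"[^\w\u4e00-\u9fff]+")   # strips punctuation, keeps CJK and word characters

def regularize(sentence: str) -> List[str]:
    """S1-1-1: word segmentation plus punctuation-mark removal."""
    cleaned = PUNCT_RE.sub(" ", sentence)
    return [tok for tok in jieba.lcut(cleaned) if tok.strip()]

def vectorize(tokens: List[str], dim: int = 64) -> np.ndarray:
    """S1-1-2: map each segmented word to a word vector (here seeded from a CRC32 hash)."""
    rows = []
    for tok in tokens:
        rng = np.random.default_rng(zlib.crc32(tok.encode("utf-8")))   # deterministic per token
        rows.append(rng.standard_normal(dim).astype(np.float32))
    return np.stack(rows) if rows else np.zeros((0, dim), dtype=np.float32)

def preprocess(sentence: str) -> np.ndarray:
    """Full S1-1 preprocessing: regularization followed by vectorization."""
    return vectorize(regularize(sentence))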
S1-2 comprises the following steps:
s1-2-1: calculating the norm of the current word vector of the text data and each word vector in the context, and comparing the norm with a preset threshold value;
s1-2-2: removing the word vectors with the norm of the current word vector larger than a preset threshold value from the context to obtain an improved context;
s1-2-3: inputting the current word vector and the improved context into an Attention model to obtain a first text encoding word vector corresponding to the current word vector, and obtaining a first text code according to the first text encoding word vector.
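A minimal sketch of this norm-filtered Attention is given below; it interprets the norm in S1-2-1 as the L2 distance between the current word vector and each context vector, and uses plain single-head scaled dot-product attention, both of which are assumptions about details the text leaves open.

import torch
import torch.nn.functional as F

def improved_attention(current: torch.Tensor,     # (d,) current word vector
                       context: torch.Tensor,     # (n, d) context word vectors
                       threshold: float) -> torch.Tensor:
    """Return the first-text-code vector for the current word, shape (d,)."""
    # S1-2-1 / S1-2-2: drop context vectors whose distance to the current vector exceeds the threshold.
    dists = torch.norm(context - current, dim=-1)          # (n,)
    kept = context[dists <= threshold]                      # the "improved context"
    if kept.numel() == 0:                                   # nothing survives: fall back to the word itself
        kept = current.unsqueeze(0)
    # S1-2-3: ordinary scaled dot-product attention of the current word over the kept context.
    scores = kept @ current / current.shape[-1] ** 0.5      # (k,)
    weights = F.softmax(scores, dim=-1)
    return weights @ kept                                   # weighted sum over the kept vectors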
S1-3 comprises the following steps:
s1-3-1: performing double random processing on the word vector, randomly obtaining a word vector R and any word vector K in the first text code, and replacing K with R to obtain the first text code after the double random processing;
s1-3-2: inputting the first text code after the double random processing into an improved BERT model, and taking out a vector Kc corresponding to a word vector K in an output sequence;
s1-3-3: performing linear transformation on the vector Kc to obtain a vector K1, and performing softmax transformation on the vector K1 to obtain an output vector Km with the maximum probability;
s1-3-4: and determining parameters of the improved BERT model by comparing the output vector Km with the word vector K, and finishing the pre-training of the improved BERT model.
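The following sketch illustrates one pre-training step of S1-3. A stock nn.TransformerEncoder stands in for the improved BERT encoder, and the vocabulary size, dimensions and the use of cross-entropy (whose argmax corresponds to the maximum-probability vector Km) are illustrative assumptions.

import torch
import torch.nn as nn

class PretrainSketch(nn.Module):
    def __init__(self, vocab_size: int = 5000, dim: int = 64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)   # stand-in for the improved BERT encoder
        self.to_vocab = nn.Linear(dim, vocab_size)                  # linear transform Kc -> K1

    def pretrain_step(self, first_text_code: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        """first_text_code: (seq, dim) word vectors; token_ids: (seq,) their vocabulary ids."""
        seq_len, dim = first_text_code.shape
        k = int(torch.randint(seq_len, (1,)))             # S1-3-1: random position K ...
        corrupted = first_text_code.clone()
        corrupted[k] = torch.randn(dim)                   # ... replaced by a random vector R
        out = self.encoder(corrupted.unsqueeze(0))        # S1-3-2: encode (batch of one sequence)
        kc = out[0, k]                                    # vector Kc at position K
        k1 = self.to_vocab(kc)                            # S1-3-3: linear transform; the softmax is folded
        # into the cross-entropy below, whose argmax would be the maximum-probability vector Km
        return nn.functional.cross_entropy(k1.unsqueeze(0), token_ids[k].unsqueeze(0))  # S1-3-4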
S1-4 comprises the following steps:
s1-4-1: labeling the preprocessed text, and marking each sentence in the preprocessed text with a sentence type label and a score label, wherein the sentence type label comprises profanity, neutrality and praise, the score label comprises 0-9 scores, and the score corresponding to the neutral label is 0 score;
s1-4-2: inputting the preprocessed text into the pre-trained improved BERT model, and outputting a corresponding sentence type label and a corresponding score label;
s1-4-3: and comparing the sentence type label and the score label output by the improved BERT model with the original sentence type label and score label to finish the fine-tuning of the improved BERT model and obtain the trained improved BERT model.
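As a sketch of this fine-tuning stage, the code below places two heads on top of the pre-trained encoder, one for the sentence type label (profanity / neutral / praise) and one for the 0-9 score label; the mean pooling over positions and the head sizes are assumptions, and the encoder is the pre-trained stand-in from the previous sketch.

import torch
import torch.nn as nn

class FineTuneHeads(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int = 64):
        super().__init__()
        self.encoder = encoder                  # pre-trained stand-in, maps (1, seq, dim) -> (1, seq, dim)
        self.type_head = nn.Linear(dim, 3)      # profanity / neutral / praise
        self.score_head = nn.Linear(dim, 10)    # score labels 0-9

    def forward(self, first_text_code: torch.Tensor):
        """first_text_code: (seq, dim) -> (sentence-type logits, score logits)."""
        hidden = self.encoder(first_text_code.unsqueeze(0)).mean(dim=1)   # pooled, shape (1, dim)
        return self.type_head(hidden), self.score_head(hidden)

def finetune_loss(model: FineTuneHeads, code: torch.Tensor,
                  type_label: torch.Tensor, score_label: torch.Tensor) -> torch.Tensor:
    """S1-4-3: compare predicted labels with the annotated ones; labels are 0-dim long tensors."""
    type_logits, score_logits = model(code)
    return (nn.functional.cross_entropy(type_logits, type_label.unsqueeze(0)) +
            nn.functional.cross_entropy(score_logits, score_label.unsqueeze(0)))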
The output of the Bi-LSTM model in S1-6 is specifically as follows:
(1) If the sentence type label of the second text code is praise or neutral, the output is the input text corresponding to the first text code of the second text code;
(2) If the sentence type label of the second text code is profanity and the score label is m, the output is a text whose sentence type label is praise and whose score label is m.
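The selection rule above can be sketched as follows. In the patent the praise text is produced by the Bi-LSTM itself; the score-indexed table of praise sentences used here is only a hypothetical stand-in for that generation step.

from typing import Dict

# Hypothetical praise sentences keyed by score label m (illustrative, not from the patent).
PRAISE_BY_SCORE: Dict[int, str] = {
    5: "the teacher has taught well",
    7: "the teacher is really good",
    9: "the teacher explains wonderfully and patiently",
}

def bilstm_output_rule(input_text: str, sentence_type: str, score: int) -> str:
    """(1) praise/neutral: output the original input text;
    (2) profanity with score m: output a praise text whose score label is also m."""
    if sentence_type in ("praise", "neutral"):
        return input_text
    best = min(PRAISE_BY_SCORE, key=lambda s: abs(s - score))   # nearest available score
    return PRAISE_BY_SCORE[best]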
The output text in S2 is specifically:
(1) If the text published by the user in the interactive communication area of the online education platform is neutral or praise, the original text is obtained;
(2) If the text published by the user in the interactive communication area of the online education platform is profanity, a praise text is obtained according to the degree of profanity: the stronger the profanity, the stronger the praise.
And S3, displaying the output text in an interactive communication area of the online education platform specifically comprises the following steps:
(1) If the text published by the user in the online education platform interactive communication area is neutral or praise, displaying the text in the online education platform interactive communication area;
(2) If the text published by the user in the interactive communication area of the online education platform is profanity, the output text is emphasized, and the emphasized text is displayed in the interactive communication area of the online education platform, wherein the emphasis processing comprises text highlighting, text darkening or floating bullet-screen (danmaku) display.
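A minimal sketch of this display rule is given below; it returns the text together with a list of emphasis effects. The effect names and the idea of returning them as strings are assumptions, since the patent only names highlighting, darkening and a floating bullet screen (danmaku) as examples of emphasis.

from typing import List, Tuple

def display_rule(output_text: str, original_type: str) -> Tuple[str, List[str]]:
    """Return the text to show in the interactive communication area and its emphasis effects."""
    if original_type in ("neutral", "praise"):
        return output_text, []                                   # shown as-is
    # the original post was profanity, so the converted praise text is emphasized
    return output_text, ["highlight", "darken", "danmaku_float"]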
The present invention introduces an improved Attention mechanism, on the basis of which the improved BERT is used to identify whether the language entered by a user is profanity or praise, and finally the profanity is converted into praise by the LSTM. BERT has shown good performance in natural language processing; its general process comprises two steps, pre-training and fine-tuning, and BERT is used here to predict whether a text is profane or complimentary. Because Attention is an important component of BERT, the improved Attention is also adopted to reduce the computational cost, hence the name improved BERT. The LSTM adopts a Bi-LSTM architecture.
In one embodiment of the present invention, the user enters several text sentences, including "is this teacher an XX", "the teacher really speaks well", "teacher XX", "teacher, could the third part of question two be explained again?" and "the teacher has taught well". The first sentence is judged to be profanity with a degree of 7 points, the second sentence praise with a degree of 7 points, the third sentence profanity with a degree of 9 points, the fourth sentence neutral, and the fifth sentence praise with a degree of 5 points. When the second, fourth and fifth sentences are input, they are output directly in the interactive communication area, where other students and the teacher can see them. When the first sentence is input, a praise sentence with a degree of 7 points, for example "the teacher is really good", is output to the interactive communication area in place of "is this teacher an XX", so that what the teacher and the other students see is the praise. Similarly, a praise sentence with a degree of 9 points is displayed in the interactive communication area in place of "teacher XX" for the teacher and the other students to see.
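Assuming the hypothetical bilstm_output_rule() and display_rule() sketches above are in scope, the embodiment can be traced end to end as follows; the labels and scores are those of the example, while the function names remain assumptions.

examples = [
    ("is this teacher an XX", "profanity", 7),
    ("the teacher really speaks well", "praise", 7),
    ("teacher XX", "profanity", 9),
    ("teacher, could the third part of question two be explained again?", "neutral", 0),
    ("the teacher has taught well", "praise", 5),
]

for text, sentence_type, score in examples:
    shown = bilstm_output_rule(text, sentence_type, score)        # keep or convert the post
    rendered, effects = display_rule(shown, sentence_type)        # decide how to display it
    print(f"{text!r} -> shown as {rendered!r} with effects {effects}")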
The invention reduces the chance of cyber-violence in online education by converting the profanity posted by disruptive users into praise for display. This not only protects teachers' enthusiasm for education and teaching, but also counteracts the disruptors: their profanity no longer serves to insult the teacher and instead ends up praising the teacher, defeating their purpose and thereby driving them away.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit and scope of the invention.

Claims (9)

1. An artificial intelligence based educational informatization method, which is characterized by comprising the following steps:
s1: training a deep learning model, wherein the deep learning model comprises an improved BERT model and a Bi-LSTM model;
s2: inputting text data published in an interactive communication area of an online education platform by a user into a trained deep learning model to obtain an output text;
s3: and displaying the obtained output text in an interactive communication area of the online education platform.
2. The method for educational informatization based on artificial intelligence of claim 1, wherein the step S1 comprises the following sub-steps:
s1-1: obtaining historical text data of an interactive communication area of an online education platform and preprocessing the historical text data to obtain preprocessed text data;
s1-2: inputting the preprocessed text data into an improved Attention model to obtain a first text code;
s1-3: inputting the first text code into the improved BERT model, and pre-training the improved BERT model;
s1-4: fine-tuning the improved BERT model to obtain a trained improved BERT model;
s1-5: splicing the first text code and the output of the trained improved BERT model to obtain a second text code;
s1-6: and inputting the second text code into the Bi-LSTM model to finish the training of the Bi-LSTM model.
3. The artificial intelligence based education informatization method of claim 2, wherein the preprocessing in S1-1 includes the following substeps:
s1-1-1: carrying out regularization processing on the historical text data, wherein the regularization processing comprises word segmentation and punctuation mark removal;
s1-1-2: and performing text vectorization processing on the regularized text data to obtain word vectors for the segmented words, yielding the preprocessed text data.
4. The artificial intelligence based education informatization method of claim 3, wherein the S1-2 includes the following sub-steps:
s1-2-1: calculating the norm of the current word vector of the text data and each word vector in the context, and comparing the norm with a preset threshold value;
s1-2-2: removing the word vectors with the norm of the current word vector larger than a preset threshold value from the context to obtain an improved context;
s1-2-3: inputting the current word vector and the improved context into an Attention model to obtain a first text encoding word vector corresponding to the current word vector, and obtaining a first text code according to the first text encoding word vector.
5. The artificial intelligence based education informatization method of claim 4, wherein the S1-3 includes the following sub-steps:
s1-3-1: performing double random processing on the word vector, randomly obtaining a word vector R and any word vector K in the first text code, and replacing K with R to obtain the first text code after the double random processing;
s1-3-2: inputting the first text code after the double random processing into an improved BERT model, and taking out a vector Kc corresponding to a word vector K in an output sequence;
s1-3-3: performing linear transformation on the vector Kc to obtain a vector K1, and performing softmax transformation on the vector K1 to obtain an output vector Km with the maximum probability;
s1-3-4: and determining parameters of the improved BERT model by comparing the output vector Km with the word vector K, and finishing the pre-training of the improved BERT model.
6. The artificial intelligence based education informatization method of claim 5, wherein the S1-4 includes the following sub-steps:
s1-4-1: labeling the preprocessed text, and marking each sentence in the preprocessed text with a sentence type label and a score label, wherein the sentence type label comprises profanity, neutrality and praise, the score label comprises 0-9 scores, and the score corresponding to the neutral label is 0 score;
s1-4-2: inputting the preprocessed text into the pre-trained improved BERT model, and outputting corresponding sentence type labels and score labels;
s1-4-3: and comparing the sentence type label and the score label output by the improved BERT model with the original sentence type label and score label to finish the fine-tuning of the improved BERT model and obtain the trained improved BERT model.
7. The artificial intelligence based education informatization method of claim 6, wherein the output of the Bi-LSTM model in S1-6 is specifically:
(1) If the sentence type label of the second text code is praise or neutral, the output is the input text corresponding to the first text code of the second text code;
(2) If the sentence type label of the second text code is profanity and the score label is m, the output is a text whose sentence type label is praise and whose score label is m.
8. The artificial intelligence based education informatization method of claim 1, wherein the output text in S2 is specifically:
(1) If the text published by the user in the interactive communication area of the online education platform is neutral or praise, the original text is obtained;
(2) If the text published by the user in the interactive communication area of the online education platform is profanity, a praise text is obtained according to the degree of profanity: the stronger the profanity, the stronger the praise.
9. The artificial intelligence based education informatization method according to claim 1, wherein the displaying of the output text in the online education platform interactive communication area in S3 is specifically:
(1) If the text published by the user in the online education platform interactive communication area is neutral or praise, displaying the text in the online education platform interactive communication area;
(2) If the text published by the user in the interactive communication area of the online education platform is profanity, the output text is emphasized, and the emphasized text is displayed in the interactive communication area of the online education platform, wherein the emphasis processing comprises text highlighting, text darkening or floating bullet-screen (danmaku) display.
CN202310083977.3A 2023-02-09 2023-02-09 Education informatization method based on artificial intelligence Active CN115811630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310083977.3A CN115811630B (en) 2023-02-09 2023-02-09 Education informatization method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310083977.3A CN115811630B (en) 2023-02-09 2023-02-09 Education informatization method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN115811630A true CN115811630A (en) 2023-03-17
CN115811630B CN115811630B (en) 2023-05-02

Family

ID=85487689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310083977.3A Active CN115811630B (en) 2023-02-09 2023-02-09 Education informatization method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN115811630B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11170175B1 (en) * 2019-07-01 2021-11-09 Intuit, Inc. Generating replacement sentences for a particular sentiment
CN111209401A (en) * 2020-01-03 2020-05-29 西安电子科技大学 System and method for classifying and processing sentiment polarity of online public opinion text information
CN111241789A (en) * 2020-01-14 2020-06-05 平安科技(深圳)有限公司 Text generation method and device
US20210374361A1 (en) * 2020-06-02 2021-12-02 Oracle International Corporation Removing undesirable signals from language models using negative data
US20220114476A1 (en) * 2020-10-14 2022-04-14 Adobe Inc. Utilizing a joint-learning self-distillation framework for improving text sequential labeling machine-learning models
CN112256945A (en) * 2020-11-06 2021-01-22 四川大学 Social network Cantonese rumor detection method based on deep neural network
CN114595693A (en) * 2020-12-07 2022-06-07 国网辽宁省电力有限公司营销服务中心 Text emotion analysis method based on deep learning
CN113515942A (en) * 2020-12-24 2021-10-19 腾讯科技(深圳)有限公司 Text processing method and device, computer equipment and storage medium
CN113239700A (en) * 2021-04-27 2021-08-10 哈尔滨理工大学 Text semantic matching device, system, method and storage medium for improving BERT
US11386160B1 (en) * 2021-08-09 2022-07-12 Capital One Services, Llc Feedback control for automated messaging adjustments
CN115392259A (en) * 2022-10-27 2022-11-25 暨南大学 Microblog text sentiment analysis method and system based on confrontation training fusion BERT
CN115630653A (en) * 2022-11-02 2023-01-20 合肥学院 Network popular language emotion analysis method based on BERT and BilSTM

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
方英兰; 孙吉祥; 韩兵: "Research on a BERT-based text sentiment analysis method" *
杨奎河; 刘智鹏: "Short text sentiment analysis based on BERT-BiLSTM" *

Also Published As

Publication number Publication date
CN115811630B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
Agarwal et al. A review of tools and techniques for computer aided pronunciation training (CAPT) in English
CN109670168B (en) Short answer automatic scoring method, system and storage medium based on feature learning
CN113609859A (en) Special equipment Chinese named entity recognition method based on pre-training model
CN109947915B (en) Knowledge management system-based artificial intelligence expert system and construction method thereof
CN112101045B (en) Multi-mode semantic integrity recognition method and device and electronic equipment
CN111563146A (en) Inference-based difficulty controllable problem generation method
KR20220060780A (en) Knowledge based dialogue system and method for language learning
CN117121015A (en) Multimodal, less-hair learning using frozen language models
CN108509539B (en) Information processing method and electronic device
CN111382231A (en) Intention recognition system and method
CN112528883A (en) Teaching scene video description generation method based on backstepping network
CN114048301B (en) Satisfaction-based user simulation method and system
CN113011196B (en) Concept-enhanced representation and one-way attention-containing subjective question automatic scoring neural network model
CN113326367A (en) Task type dialogue method and system based on end-to-end text generation
CN117113937A (en) Electric power field reading and understanding method and system based on large-scale language model
CN115811630A (en) Education informatization method based on artificial intelligence
CN115221306B (en) Automatic response evaluation method and device
CN117216197A (en) Answer reasoning method, device, equipment and storage medium
CN107992482B (en) Protocol method and system for solving steps of mathematic subjective questions
CN115759102A (en) Chinese poetry wine culture named entity recognition method
CN112785039B (en) Prediction method and related device for answer score rate of test questions
KR20160131304A (en) Method and apparatus for easily memorizing the meaning of new words (foreign words and jargon or advertisement contents) using connective blanks and inference blanks
CN116151242B (en) Intelligent problem recommendation method, system and storage medium for programming learning scene
Goodman et al. Linguistics, Psycholinguistics, and the Teaching of Reading: An Annotated Bibliography.
Aslampoor et al. Effectiveness of English vocabulary learning strategies for learning second language learners

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant