CN111159403B - Intelligent classroom perception method and system - Google Patents


Info

Publication number
CN111159403B
Authority
CN
China
Prior art keywords
text
training
knowledge
knowledge point
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911381726.3A
Other languages
Chinese (zh)
Other versions
CN111159403A (en
Inventor
Zhang Yuliang (张玉良)
Yang Guanglong (杨广龙)
Zheng Jian (郑健)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Gaole Education Technology Co ltd
Original Assignee
Guangdong Gaole Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Gaole Education Technology Co., Ltd.
Priority to CN201911381726.3A
Publication of CN111159403A
Application granted
Publication of CN111159403B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/3331 - Query processing
    • G06F16/334 - Query execution
    • G06F16/3344 - Query execution using natural language analysis
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems

Abstract

The invention discloses an intelligent classroom perception method and system. A TF-IDF value is calculated for each word in a space vector model to measure the influence of each feature word on the text category, and features above a dimension threshold are selected as feature dimensions. A support vector machine is trained on the space vector matrix of the training samples obtained by the knowledge point training and perception module, producing a classifier that identifies knowledge-related intent in text expressions and outputs the knowledge point topic and intent. The system actively tracks real-time speech in the classroom, perceives and understands the knowledge points and intents mentioned in real time, retrieves the definitions, question banks, resources, and other materials of the matched knowledge points, and displays them on an education terminal as teaching assistance. By reducing the feature space dimensionality, the method consumes fewer computing resources, gives teachers and learners in classroom education rapid, timely, and accurate feedback, and improves the user experience of current intelligent classroom teaching.

Description

Intelligent classroom perception method and system
Technical Field
The disclosure belongs to the technical field of intelligent teaching assistance, and particularly relates to an intelligent classroom perception method and system, which are suitable for teaching and learning in classroom education.
Background
In recent years, the internet and artificial intelligence have developed continuously, and artificial intelligence cloud service platforms have risen rapidly, making it convenient to deploy intelligent applications; artificial intelligence is no longer out of reach. Its application in the field of education has also advanced: the continuous iteration of intelligent hardware and terminal products has brought real convenience and help to the informatization of the education industry, and the intelligentization of classroom teaching and of teaching assistance is gradually becoming a trend.
Disclosure of Invention
In order to solve the above problems, the technical scheme of an intelligent classroom perception method and system is provided. The system actively tracks real-time speech in the classroom, perceives and understands the knowledge points and intents mentioned in real time, retrieves the definitions, question banks, resources, and other materials of the matched knowledge points, and displays them on an education terminal as teaching assistance.
In order to achieve the above object, according to an aspect of the present disclosure, a method for intelligent classroom sensing is provided, the steps of the method depend on an intelligent classroom sensing system, and the intelligent classroom sensing system includes a knowledge base resource library module, a real-time voice access module, a knowledge point training and sensing module, and a response presenting terminal; the knowledge point training and sensing module comprises a knowledge point training submodule and a knowledge point sensing submodule;
The method specifically comprises the following steps:
S100: The knowledge point training submodule imports training sample text and preprocesses the training samples to obtain processed sample text;
S200: Extract features from the processed sample text based on N-Grams and build a feature dictionary; screen the features to reduce the dimensionality of the feature space; construct a word frequency matrix for the processed sample text as the space vector model;
S300: Calculate the TF-IDF value of each word through the space vector model to measure the influence of each feature word on the text category; set a dimension threshold and select the features above it as the feature dimensions for the text space vector representation of the processed sample text;
S400: Train a support vector machine on the matrix formed by the space vectors of the processed sample texts to obtain a classifier;
S500: The real-time voice access module collects speech, performs speech-to-text translation, and passes the translated text to the knowledge point perception submodule;
S600: The knowledge point perception submodule preprocesses the translated text to obtain processed translated text and extracts the subject words of the knowledge points;
S700: Express the processed translated text as a text vector using the features screened during support vector machine training;
S800: Identify the knowledge-related intent classification in the text expression through the classifier and output it as a knowledge point topic and intent;
S900: Output the knowledge point topic and intent through the terminal presentation module.
Further, in S100, preprocessing the training samples to obtain the processed sample text specifically comprises the following steps:
S101: Call the knowledge base resource library module through the Chinese word segmenter HanLP or FudanNLP, loading the knowledge point subject words in the knowledge base to assist segmentation;
S102: Delete the words that appear in the stop word list;
S103: Normalize synonyms and near-synonyms to one default word form;
S104: Uniformly represent the subject words of the knowledge points in the training text as unique identifier words.
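The S101-S104 preprocessing can be sketched as follows. This is a minimal illustration, not the patent's implementation: the stop-word list, synonym table, and subject-word list are hypothetical stand-ins for data the knowledge base would supply, and tokenization (done with HanLP or FudanNLP in the patent) is assumed to have already happened.

```python
# Hypothetical knowledge-base resources (stand-ins for the real lists).
STOP_WORDS = {"the", "a", "is"}                        # S102: stop word list
SYNONYMS = {"velocity": "speed", "rapidity": "speed"}  # S103: normalize to one default form
SUBJECT_WORDS = {"newton_second_law"}                  # S104: knowledge point subject words

def preprocess(tokens):
    """Apply S102-S104 to a pre-segmented token list."""
    out = []
    for tok in tokens:
        if tok in STOP_WORDS:          # S102: drop stop words
            continue
        tok = SYNONYMS.get(tok, tok)   # S103: map synonyms to the default word
        if tok in SUBJECT_WORDS:       # S104: mark subject words as unique identifiers
            tok = f"<KP:{tok}>"
        out.append(tok)
    return out

print(preprocess(["the", "velocity", "is", "newton_second_law"]))
# ['speed', '<KP:newton_second_law>']
```

The unique-identifier tagging in S104 is what later lets the perception submodule treat every surface form of a knowledge point as a single feature.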
Further, in S300, the influence of each feature word on the text category is measured by the frequency with which the feature word occurs in the text, i.e. its word frequency; a larger word frequency indicates a larger influence of the word on the text category.
Further, in S300, the dimension threshold determines the final dimensionality after text dimension reduction.
The feature dictionary puts words and their numeric ids in one-to-one correspondence.
Feature screening: because the initial feature dictionary is large, the dimensionality of a sample space vector typically reaches tens of thousands; feature selection reduces the dimensionality of the feature space and consumes fewer computing resources.
The text space vector of the processed sample text is a two-dimensional matrix: each row corresponds to a training sample, each column to a feature word, and each entry records how many times that feature word occurs in that sample.
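The feature dictionary and word frequency matrix of S200 can be sketched as follows, a minimal pure-Python illustration. The patent does not specify the N-Gram order, so unigrams plus bigrams are assumed here, and the tokens are hypothetical English stand-ins for segmented Chinese words:

```python
from collections import Counter

def ngrams(tokens, n_max=2):
    """Unigram + higher-order N-Gram features for one sample (order assumed)."""
    feats = list(tokens)  # unigrams
    for n in range(2, n_max + 1):
        feats += ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return feats

def build_matrix(samples):
    """Feature dictionary (word -> numeric id) and word frequency matrix."""
    vocab = {}
    for s in samples:
        for f in ngrams(s):
            vocab.setdefault(f, len(vocab))  # one-to-one word/id mapping
    matrix = []
    for s in samples:
        counts = Counter(ngrams(s))
        # one row per training sample, one column per feature word
        matrix.append([counts.get(f, 0) for f in vocab])
    return vocab, matrix

vocab, m = build_matrix([["force", "mass"], ["force", "acceleration"]])
print(vocab)  # {'force': 0, 'mass': 1, 'force_mass': 2, 'acceleration': 3, 'force_acceleration': 4}
```

Each row of `m` is the space vector of one sample; stacking the rows gives the matrix that S300 screens and S400 trains on.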
Further, in S600, the knowledge point perception submodule preprocesses the translated text to obtain the processed translated text and extracts the subject words of the knowledge points through the following steps:
S601: Call the knowledge base resource library module through the Chinese word segmenter HanLP or FudanNLP, loading the knowledge point subject words in the knowledge base to assist segmentation;
S602: Delete the words that appear in the stop word list;
S603: Normalize synonyms and near-synonyms to one default word form;
S604: Uniformly represent the subject words of the knowledge points in the translated text as unique identifier words.
Further, the training sample text is either test text entered in advance or a text file converted from voice input.
The invention also provides an intelligent classroom perception system, which comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run the units of the following system:
the training sample preprocessing unit is used for importing training sample text through the knowledge point training submodule and preprocessing the training samples to obtain processed sample text;
the word frequency matrix construction unit is used for extracting features from the processed sample text based on N-Grams and building a feature dictionary, screening the features to reduce the dimensionality of the feature space, and constructing a word frequency matrix for the processed sample text as the space vector model;
the sample vector expression unit is used for calculating the TF-IDF value of each word through the space vector model to measure the influence of each feature word on the text category, setting a dimension threshold, and selecting the features above the threshold as the feature dimensions for the text space vector representation of the processed sample text;
the classifier training unit is used for training a support vector machine on the matrix formed by the space vectors of the processed sample texts to obtain a classifier;
the voice translation unit is used for collecting speech through the real-time voice access module, performing speech-to-text translation, and passing the translated text to the knowledge point perception submodule;
the translated text preprocessing unit is used for preprocessing the translated text through the knowledge point perception submodule to obtain processed translated text and extracting the subject words of the knowledge points;
the text vector expression unit is used for expressing the processed translated text as a text vector using the features screened during support vector machine training;
the intent classification identification unit is used for identifying the knowledge-related intent classification in the text expression through the classifier and outputting it as a knowledge point topic and intent;
the terminal presentation unit is used for outputting the knowledge point topic and intent through the terminal presentation module.
Beneficial effects of the disclosure: the invention provides an intelligent classroom perception method and system that reduce the feature space dimensionality, consume fewer computing resources, give teachers and learners in classroom education rapid, timely, and accurate feedback, and improve the user experience of current intelligent classroom teaching.
Drawings
The foregoing and other features of the present disclosure will become more apparent from the following detailed description of embodiments, read in conjunction with the drawings, in which like reference characters designate the same or similar elements throughout the several views. The drawings described below are merely some examples of the present disclosure, and those skilled in the art may derive other drawings from them without inventive effort. In the drawings:
FIG. 1 is a system block diagram of an intelligent classroom awareness system;
FIG. 2 is an internal flow chart of the knowledge point training sub-module;
FIG. 3 is a flow chart of the real-time speech access module translating speech text;
FIG. 4 is an internal flow chart of the knowledge point perception sub-module.
Detailed Description
The conception, specific structure, and technical effects of the present disclosure are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, aspects, and effects of the present disclosure may be fully understood. It should be noted that the embodiments of the present application, and the features within them, may be combined with each other provided they do not conflict.
The present disclosure proposes a method of intelligent classroom awareness, the steps of which rely on an intelligent classroom awareness system,
as shown in fig. 1, fig. 1 is a system structure diagram of an intelligent classroom sensing system, wherein the intelligent classroom sensing system comprises a knowledge base resource library module, a real-time voice access module, a knowledge point training and sensing module, and a response presentation terminal; the knowledge point training and sensing module comprises a knowledge point training submodule and a knowledge point sensing submodule;
As shown in fig. 2, fig. 2 is an internal flow chart of the knowledge point training sub-module; its flow includes the following steps:
1. Import training sample text and preprocess the training samples: use a word segmentation tool such as the Chinese word segmenter HanLP or FudanNLP, loading the knowledge point subject vocabulary in the knowledge base to assist segmentation;
2. delete the meaningless words that appear in the stop word list;
3. normalize synonyms and near-synonyms to one default word form;
4. uniformly represent the subject terms of the knowledge points in the training text as unique identifier terms;
5. extract features from the text processed in steps 1-4 based on N-Grams and build a feature dictionary; in addition, screen the features to reduce the dimensionality of the feature space; finally, construct a space vector model (word frequency matrix) for the training samples:
The feature dictionary puts words and their numeric ids in one-to-one correspondence.
Feature selection: because the initial feature dictionary is large, the dimensionality of a sample space vector typically reaches tens of thousands; feature selection reduces the dimensionality of the feature space and consumes fewer computing resources.
The TF-IDF value of each word is calculated from the word frequency matrix to measure the influence of each candidate feature word on the text category; a dimension threshold is set, and the features whose values exceed the threshold are used as feature dimensions for the text space vector expression.
The training sample space vector is a two-dimensional matrix: each row corresponds to a training sample, each column to a feature word, and each entry records how many times that feature word occurs in that sample.
At this point, the text space vector representation of the training samples is complete;
6. train a Support Vector Machine (SVM) on the space vector matrix of the training samples to obtain a classifier;
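Steps 5-6 can be sketched as a minimal TF-IDF screening over the word frequency matrix, keeping the columns whose best TF-IDF score exceeds the dimension threshold. The smoothed form of the IDF term and the threshold value are illustrative assumptions, not the patent's exact formula; the reduced matrix would then be fed to an SVM trainer (for example scikit-learn's `SVC`) to obtain the classifier of step 6.

```python
import math

def tfidf_matrix(matrix):
    """TF-IDF score for every cell of a word frequency matrix (rows = samples)."""
    n_docs = len(matrix)
    n_feats = len(matrix[0])
    # document frequency of each feature column
    df = [sum(1 for row in matrix if row[j] > 0) for j in range(n_feats)]
    out = []
    for row in matrix:
        total = sum(row) or 1
        out.append([(row[j] / total) * math.log((1 + n_docs) / (1 + df[j]))
                    for j in range(n_feats)])
    return out

def select_features(matrix, threshold):
    """Keep feature ids whose best TF-IDF score exceeds the dimension threshold."""
    scores = tfidf_matrix(matrix)
    return [j for j in range(len(matrix[0]))
            if max(s[j] for s in scores) > threshold]

# Feature 0 occurs in every sample, so its IDF (and score) is 0 and it is dropped.
print(select_features([[2, 1, 0], [2, 0, 1]], 0.1))  # [1, 2]
```

Projecting each sample's row onto the selected columns yields the final text space vectors used for SVM training.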
As shown in FIG. 3, FIG. 3 is a flow chart of the real-time voice access module translating speech to text. The real-time voice access module comprises a hardware terminal for classroom sound and, at the software back end, a voice activity detection module, a recording module, and a speech translation (ASR) module. The hardware terminal collects real-time classroom speech and returns the real-time audio over the network to the software back end; the back end performs voice activity detection (VAD) on the real-time audio stream, starts recording once human speech is detected, cuts off the non-speech parts, and stops recording; the collected recording is then translated to text on an open cloud platform, yielding the text of the real-time speech in the classroom.
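The VAD gating described above can be sketched as an energy-threshold detector over audio frames. This is an illustration only: the frames, energies, and threshold are hypothetical, and a production system would run a real VAD (such as the WebRTC VAD) on the live stream before sending the kept audio to the cloud ASR service.

```python
def vad_gate(frames, threshold=0.5):
    """frames: list of (energy, samples) pairs; returns the samples kept as speech.

    Energy at or above the threshold is treated as human speech and recorded;
    frames below it are the non-speech parts that get cut off.
    """
    kept = []
    for energy, samples in frames:
        if energy >= threshold:  # speech detected -> keep recording
            kept.append(samples)
        # below threshold: non-speech, not recorded
    return kept

# Hypothetical stream: silence, two speech frames, silence.
stream = [(0.1, "sil"), (0.9, "hello"), (0.8, "class"), (0.2, "sil")]
print(vad_gate(stream))  # ['hello', 'class']
```

Only the kept frames would be concatenated into a recording and uploaded for speech translation, which is what keeps ASR cost and latency down.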
As shown in fig. 4, fig. 4 is an internal flow chart of the knowledge point perception sub-module, and the flow of the knowledge point perception sub-module includes the following steps:
1. Apply the text preprocessing of steps 1-4 above to the translated text in one pass, and extract the subject terms of the knowledge points;
2. express the translated text as a text vector using the features screened during training;
3. identify the knowledge-related intent classification in the text expression with the trained classifier, and output the knowledge point topic and intent;
This completes the knowledge point training and perception module;
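The perception flow above can be sketched end to end as follows. The vocabulary, screened feature ids, and the classifier stub are hypothetical stand-ins for the artifacts the training submodule would actually produce (the real classifier is the trained SVM):

```python
# Hypothetical training artifacts (stand-ins for the real ones).
VOCAB = {"speed": 0, "force": 1, "<KP:newton_second_law>": 2}
SCREENED = [1, 2]  # feature ids kept after TF-IDF screening

def vectorize(tokens):
    """Map preprocessed translated-text tokens onto the screened feature dimensions."""
    counts = [0] * len(VOCAB)
    for t in tokens:
        if t in VOCAB:
            counts[VOCAB[t]] += 1
    return [counts[j] for j in SCREENED]

def classify(vec):
    """Stand-in for the trained SVM: a hit on the subject-word dimension is
    read as an intent about that knowledge point."""
    return "newton_second_law/definition" if vec[1] > 0 else "no_knowledge_intent"

vec = vectorize(["force", "<KP:newton_second_law>"])
print(classify(vec))  # newton_second_law/definition
```

The output topic and intent are then what the response presenting terminal uses to look up and display the matching knowledge-base materials.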
the response presenting terminal is a specific presenting terminal of the knowledge resource library, and the terminal can be a page or a hardware application terminal and is used as an outlet of the interactive response.
The knowledge base resource base module is used for constructing a knowledge point framework in a layered mode and integrating the listed knowledge points and comprises a knowledge base, wherein the knowledge base comprises paraphrases of the knowledge points, question bases and resources related to the corresponding knowledge points, and the knowledge base at least comprises knowledge point subject words, stop word lists, synonyms and synonyms; and feeding back resource information and resources including files such as voice, video, courseware and the like to the request of the knowledge point training and sensing module.
The real-time voice access module, as shown in fig. 3, comprises a hardware terminal for classroom sound and, at the software back end, a voice activity detection module, a recording module, and a speech translation (ASR) module. The hardware terminal collects real-time classroom speech and returns the real-time audio over the network to the software back end; the back end performs voice activity detection (VAD) on the real-time audio stream, starts recording once human speech is detected, cuts off the non-speech parts, and stops recording; the collected recording is then translated to text on an open cloud platform, yielding the text of the real-time speech in the classroom;
The knowledge point training and perception module comprises two submodules, the knowledge point training submodule and the knowledge point perception submodule. The knowledge point training submodule generates an intent classifier for the knowledge points through supervised learning on the training samples; the knowledge point perception submodule extracts, from the translated text of the classroom sound, the knowledge point topics and the intents related to the knowledge points;
An embodiment of the present disclosure provides an intelligent classroom perception system, which includes: a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the intelligent classroom perception method described above when executing the computer program.
The system comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run the units of the following system:
the training sample preprocessing unit is used for importing training sample text through the knowledge point training submodule and preprocessing the training samples to obtain processed sample text;
the word frequency matrix construction unit is used for extracting features from the processed sample text based on N-Grams and building a feature dictionary, screening the features to reduce the dimensionality of the feature space, and constructing a word frequency matrix for the processed sample text as the space vector model;
the sample vector expression unit is used for calculating the TF-IDF value of each word through the space vector model to measure the influence of each feature word on the text category, setting a dimension threshold, and selecting the features above the threshold as the feature dimensions for the text space vector representation of the processed sample text;
the classifier training unit is used for training a support vector machine on the matrix formed by the space vectors of the processed sample texts to obtain a classifier;
the voice translation unit is used for collecting speech through the real-time voice access module, performing speech-to-text translation, and passing the translated text to the knowledge point perception submodule;
the translated text preprocessing unit is used for preprocessing the translated text through the knowledge point perception submodule to obtain processed translated text and extracting the subject words of the knowledge points;
the text vector expression unit is used for expressing the processed translated text as a text vector using the features screened during support vector machine training;
the intent classification identification unit is used for identifying the knowledge-related intent classification in the text expression through the classifier and outputting it as a knowledge point topic and intent;
the terminal presentation unit is used for outputting the knowledge point topic and intent through the terminal presentation module.
The intelligent classroom perception system can run on computing devices such as desktop computers, notebooks, palm computers, and cloud servers. The system it runs on may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that this is merely an example of an intelligent classroom perception system and does not limit it; the system may include more or fewer components than listed, combine certain components, or include different components. For example, the intelligent classroom perception system may also include input/output devices, network access devices, buses, and so on.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the intelligent classroom perception system, and various interfaces and lines connect the parts of the whole system.
The memory may be used to store the computer programs and/or modules; the processor implements the various functions of the intelligent classroom perception system by running or executing the computer programs and/or modules stored in the memory and by invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
While the present disclosure has been described in considerable detail with reference to a few illustrative embodiments, it is not intended to be limited to any such details or embodiments; it is to be construed, with reference to the appended claims, as covering the full intended scope of the disclosure in view of the prior art. Furthermore, the foregoing describes the disclosure in terms of embodiments foreseen by the inventor; insubstantial modifications of the disclosure not presently foreseen may nonetheless represent equivalents thereto.

Claims (9)

1. A method for intelligent classroom perception is characterized in that the method depends on an intelligent classroom perception system, and the intelligent classroom perception system comprises a knowledge base resource library module, a real-time voice access module, a knowledge point training and perception module and a response presenting terminal; the knowledge point training and sensing module comprises a knowledge point training submodule and a knowledge point sensing submodule;
the method specifically comprises the following steps:
S100: the knowledge point training submodule imports training sample text and preprocesses the training samples to obtain processed sample text;
S200: extract features from the processed sample text based on N-Grams and build a feature dictionary; screen the features to reduce the dimensionality of the feature space; construct a word frequency matrix for the processed sample text as the space vector model; the feature dictionary puts words and their numeric ids in one-to-one correspondence;
S300: calculate the TF-IDF value of each word through the space vector model to measure the influence of each feature word on the text category; set a dimension threshold and select the features above it as the feature dimensions for the text space vector representation of the processed sample text; the text space vector is a two-dimensional matrix in which each row corresponds to a training sample, each column to a feature word, and each entry records how many times that feature word occurs in that sample;
S400: train a support vector machine on the matrix formed by the space vectors of the processed sample texts to obtain a classifier;
S500: the real-time voice access module collects speech, performs speech-to-text translation, and passes the translated text to the knowledge point perception submodule;
S600: the knowledge point perception submodule preprocesses the translated text to obtain processed translated text and extracts the subject words of the knowledge points;
S700: express the processed translated text as a text vector using the features screened during support vector machine training;
S800: identify the knowledge-related intent classification in the text expression through the classifier and output it as a knowledge point topic and intent;
S900: output the knowledge point topic and intent through the terminal presentation module.
2. The method for intelligent classroom perception according to claim 1, wherein in S100, preprocessing the training samples to obtain the processed sample text specifically comprises the steps of:
S101: calling the knowledge base resource library module through the Chinese word segmenter HanLP or FudanNLP, loading the knowledge point subject words in the knowledge base to assist segmentation;
S102: deleting the words that appear in the stop word list;
S103: normalizing synonyms and near-synonyms to one default word form;
S104: uniformly representing the subject words of the knowledge points in the training text as unique identifier words.
3. The method for intelligent classroom perception according to claim 1, wherein in S300 the dimension threshold determines the final dimensionality after text dimension reduction.
4. The method as claimed in claim 1, wherein the text space vector of the processed sample text is a two-dimensional matrix in which each row of the word frequency matrix corresponds to a training sample, each column to a feature word, and each entry records how many times that feature word occurs in that sample.
5. The method of claim 1, wherein in S600 the knowledge point perception sub-module preprocesses the translated text to obtain the processed translated text and extracts the subject words of the knowledge points through the following steps:
S601: calling the knowledge base resource library module through the Chinese word segmenter HanLP or FudanNLP, loading the knowledge point subject words in the knowledge base to assist segmentation;
S602: deleting the words that appear in the stop word list;
S603: normalizing synonyms and near-synonyms to one default word form;
S604: uniformly representing the subject words of the knowledge points in the translated text as unique identifier words.
6. The method according to claim 1, wherein the knowledge base resource library module is used for hierarchically constructing the knowledge point framework and integrating the listed knowledge points, and comprises a knowledge base containing the definition of each knowledge point and the question banks and resources associated with it; the knowledge base includes at least the knowledge point subject words, stop word lists, synonyms, and near-synonyms; the module feeds back resource information, including sound, video, and courseware files, in response to requests from the knowledge point training and perception module.
7. The method for intelligent classroom perception according to claim 1, wherein the real-time voice access module comprises a hardware terminal for capturing classroom sound and, at the software back end, a human voice detection module, a recording module, and a voice translation module; the hardware terminal collects real-time classroom audio and transmits it over the network to the software back end, which performs voice detection on the real-time audio stream, starts recording once human voice is detected, stops recording and cuts off the non-voice portion when the voice ends, and submits the captured recording to an open cloud platform for speech translation, yielding the text of what was said in the classroom in real time.
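The voice-detection behaviour of claim 7 (start recording when human voice appears, cut the non-voice part, stop when it ends) can be sketched as a frame-energy gate; a production system would use a real voice activity detector, and `detect_speech_segments` with its RMS threshold is an illustrative stand-in, not the patented method.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (list of samples)."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def detect_speech_segments(frames, threshold=0.1):
    """Return (start, end) frame indices of voiced regions; non-voice frames are cut."""
    segments, start = [], None
    for i, frame in enumerate(frames):
        voiced = rms(frame) >= threshold
        if voiced and start is None:
            start = i                    # voice detected -> start recording
        elif not voiced and start is not None:
            segments.append((start, i))  # voice ended -> stop recording
            start = None
    if start is not None:
        segments.append((start, len(frames)))
    return segments
```

Each returned segment would then be recorded and submitted to the cloud speech-translation service.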
8. The method for intelligent classroom perception according to claim 1, wherein the knowledge point training and perception module comprises two submodules, a knowledge point training submodule and a knowledge point perception submodule; the knowledge point training submodule generates an intention classifier for the knowledge points through supervised learning on training samples; the knowledge point perception submodule extracts, from the classroom speech translation text, the knowledge point subject and the intention related to the knowledge points.
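The training submodule's TF-IDF weighting and dimension-threshold feature screening can be sketched as follows; the exact formula (relative term frequency times log inverse document frequency) and the function names are illustrative assumptions, since the claims do not fix a TF-IDF variant.

```python
import math
from collections import Counter

def tfidf_matrix(samples):
    """samples: list of token lists. Returns (vocab, TF-IDF rows)."""
    df = Counter()                         # document frequency of each word
    for s in samples:
        df.update(set(s))
    vocab = {w: i for i, w in enumerate(sorted(df))}
    n, rows = len(samples), []
    for s in samples:
        row = [0.0] * len(vocab)
        for w, c in Counter(s).items():
            # words occurring in every sample get idf = 0, i.e. no discriminative weight
            row[vocab[w]] = (c / len(s)) * math.log(n / df[w])
        rows.append(row)
    return vocab, rows

def select_features(rows, threshold):
    """Keep feature columns whose peak TF-IDF exceeds the dimension threshold."""
    return [j for j in range(len(rows[0])) if max(r[j] for r in rows) > threshold]
```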
9. A system for intelligent classroom perception, the system comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to operate the following system units:
the training sample preprocessing unit is used for importing a training sample text through the knowledge point training submodule and preprocessing the training sample to obtain a processed sample text;
the word frequency matrix construction unit is used for extracting the features of the processed sample text based on the N-Gram and establishing a feature dictionary; carrying out feature screening to reduce the dimensionality of the feature space; and constructing a word frequency matrix for the processed sample text as a space vector model; the feature dictionary establishes a one-to-one correspondence between each word and its serial number id;
the sample vector expression unit is used for calculating the TF-IDF value of each word through the space vector model, measuring the degree of influence of each feature word on the text category, setting a dimension threshold, and selecting the features above the dimension threshold as the feature dimensions for the text space vector expression of the processed sample text;
the classifier training unit is used for training a matrix formed by the space vectors of the processed sample texts through a support vector machine to obtain a classifier;
the voice translation unit is used for acquiring voice by the real-time voice access module, translating the voice to obtain a translated text and transmitting the translated text to the knowledge point sensing submodule;
the translated text preprocessing unit is used for preprocessing the translated text by the knowledge point perception submodule to obtain a processed translated text and extracting subject words of the knowledge points;
the text vector expression unit is used for performing text vector expression on the processed translated text according to the features screened during support vector machine training; the text space vector is a two-dimensional matrix in which each row corresponds to one sample, each column corresponds to one feature word, and each entry records the number of occurrences of that feature word;
the intention classification identification unit is used for identifying, through the classifier, the intention classification relating to knowledge in the text expression and outputting it as the knowledge point subject and intention;
and the terminal presentation unit is used for outputting the knowledge point subject and intention through the terminal presentation module.
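The claims train the intention classifier with a support vector machine; as a dependency-free sketch of the train-then-perceive flow, a perceptron stands in below as a minimal linear classifier (it is not an SVM, which would additionally maximize the margin between classes), and the function names are illustrative.

```python
def train_linear_classifier(X, y, epochs=20, lr=0.1):
    """Perceptron stand-in for the patent's SVM: learn w, b so sign(w.x + b) = y (+1/-1)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            # update only on misclassified (or boundary) samples
            if t * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b

def predict(w, b, x):
    """Perception step: classify one text space vector into an intention class."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

In the claimed pipeline, `X` would be the TF-IDF text space vectors of the training samples and `predict` would run on the vectorized classroom translation text.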
CN201911381726.3A 2019-12-27 2019-12-27 Intelligent classroom perception method and system Active CN111159403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911381726.3A CN111159403B (en) 2019-12-27 2019-12-27 Intelligent classroom perception method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911381726.3A CN111159403B (en) 2019-12-27 2019-12-27 Intelligent classroom perception method and system

Publications (2)

Publication Number Publication Date
CN111159403A CN111159403A (en) 2020-05-15
CN111159403B true CN111159403B (en) 2022-07-29

Family

ID=70558689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911381726.3A Active CN111159403B (en) 2019-12-27 2019-12-27 Intelligent classroom perception method and system

Country Status (1)

Country Link
CN (1) CN111159403B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113840109B (en) * 2021-09-23 2022-11-08 杭州海宴科技有限公司 Classroom audio and video intelligent note taking method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105389307A (en) * 2015-12-02 2016-03-09 上海智臻智能网络科技股份有限公司 Statement intention category identification method and apparatus
CN108877336A (en) * 2018-03-26 2018-11-23 深圳市波心幻海科技有限公司 Teaching method, cloud service platform and tutoring system based on augmented reality
CN110069627A (en) * 2017-11-20 2019-07-30 中国移动通信集团上海有限公司 Classification method, device, electronic equipment and the storage medium of short text

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR100755677B1 (en) * 2005-11-02 2007-09-05 삼성전자주식회사 Apparatus and method for dialogue speech recognition using topic detection
CN106601237B (en) * 2016-12-29 2020-02-07 上海智臻智能网络科技股份有限公司 Interactive voice response system and voice recognition method thereof


Also Published As

Publication number Publication date
CN111159403A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN108491433B (en) Chat response method, electronic device and storage medium
CN109446430B (en) Product recommendation method and device, computer equipment and readable storage medium
US10803850B2 (en) Voice generation with predetermined emotion type
CN106658129B (en) Terminal control method and device based on emotion and terminal
CN110168535B (en) Information processing method and terminal, computer storage medium
CN111046133A (en) Question-answering method, question-answering equipment, storage medium and device based on atlas knowledge base
CN111753060A (en) Information retrieval method, device, equipment and computer readable storage medium
JP2017534941A (en) Orphan utterance detection system and method
CN106407393B (en) information processing method and device for intelligent equipment
US11238050B2 (en) Method and apparatus for determining response for user input data, and medium
CN110580516B (en) Interaction method and device based on intelligent robot
KR20200087977A (en) Multimodal ducument summary system and method
CN110377708B (en) Multi-scene conversation switching method and device
CN113254613A (en) Dialogue question-answering method, device, equipment and storage medium
CN110647613A (en) Courseware construction method, courseware construction device, courseware construction server and storage medium
US11822589B2 (en) Method and system for performing summarization of text
CN111126084A (en) Data processing method and device, electronic equipment and storage medium
CN114722837A (en) Multi-turn dialog intention recognition method and device and computer readable storage medium
CN111159403B (en) Intelligent classroom perception method and system
CN111444321A (en) Question answering method, device, electronic equipment and storage medium
CN111898363B (en) Compression method, device, computer equipment and storage medium for long and difficult text sentence
CN113268593A (en) Intention classification and model training method and device, terminal and storage medium
CN112989843A (en) Intention recognition method and device, computing equipment and storage medium
CN112989003B (en) Intention recognition method, device, processing equipment and medium
CN111767710B (en) Indonesia emotion classification method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant