CN110598041A - FIACS real-time analysis method and device - Google Patents

FIACS real-time analysis method and device

Info

Publication number
CN110598041A
CN110598041A
Authority
CN
China
Prior art keywords
audio
text
category
codes
text message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910843037.3A
Other languages
Chinese (zh)
Inventor
刘军民
蒋万强
周云翔
龙诗娥
刘北方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Nubi Internet Technology Co Ltd
Original Assignee
Guangzhou Nubi Internet Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Nubi Internet Technology Co Ltd filed Critical Guangzhou Nubi Internet Technology Co Ltd
Priority to CN201910843037.3A
Publication of CN110598041A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/65Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education

Abstract

The invention discloses a FIACS real-time analysis method and device. The method first acquires sound-pickup signals in a teaching classroom; extracts every audio clip from the pickup signals and converts each clip into a piece of text information by speech recognition, one audio clip corresponding to one piece of text information; matches each piece of text information against a preset text analysis method to obtain its category code, thereby identifying the category code of each audio clip one by one; and splits each audio clip into a plurality of first time segments of the same time interval, associating each split first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within a period. With this technical scheme, time segments are labelled automatically without relying on manual observation, effectively improving working efficiency.

Description

FIACS real-time analysis method and device
Technical Field
The invention relates to the technical field of teaching, and in particular to a FIACS real-time analysis method and device.
Background
The Flanders Interaction Analysis System (FIACS for short) is an observation system for quantitatively describing and analyzing the teaching interaction behaviors of teachers and students in a classroom. FIACS can effectively help teachers understand the current classroom mode and thus guide them to improve their teaching behaviors.
The existing FIACS generally completes the encoding of classroom interaction categories by manual observation and recording. In the manual method, the teaching-classroom audio is first divided, from beginning to end in time order, into consecutive 3-second time segments; a researcher then records the category code of each time segment in turn; next, the proportion of each category code in the whole classroom audio is calculated; and finally, the mode and style of the teaching classroom are analyzed from these proportions. However, manual observation is inefficient; moreover, it cannot feed back the current classroom mode in real time during teaching, which makes it difficult for teachers to adjust the classroom mode promptly.
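The manual tallying described above — one category code per 3-second segment, then the per-category proportion over the whole lesson — can be sketched as follows (the codes in the example are invented for illustration):

```python
from collections import Counter

def category_proportions(codes):
    """Given one category code per 3-second segment, return the
    fraction of class time spent in each category."""
    counts = Counter(codes)
    total = len(codes)
    return {code: count / total for code, count in counts.items()}

# Illustrative codes: 4 = teacher question, 5 = lecture,
# 8 = student response, 10 = silence
codes = [5, 5, 5, 4, 8, 8, 10, 5, 4, 8]
props = category_proportions(codes)  # e.g. lecture occupies 40% here
```

The proportions are exactly what the researcher would compute by hand in the last two manual steps.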
Disclosure of Invention
The embodiments of the invention provide a FIACS real-time analysis method and device, which realize automatic labelling of time segments without relying on manual observation, thereby effectively improving working efficiency.
An embodiment of the invention provides a FIACS real-time analysis method, comprising the following steps:
acquiring sound-pickup signals in a teaching classroom;
extracting all audio clips from the pickup signals, and converting the audio clips into a plurality of pieces of text information by speech recognition, wherein one audio clip corresponds to one piece of text information;
matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one;
and splitting each audio clip into a plurality of first time segments of the same time interval, and associating each split first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within the period.
As a preferred scheme, matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one, is specifically:
collecting all vocabulary related to the FIACS category codes to form a static word list, the words in the static word list being classified by category code;
segmenting each piece of text information to obtain a text set in one-to-one correspondence with it, each text set comprising a plurality of phrases;
matching the phrases in each text set against the words in the static word list to obtain the category code corresponding to each text set, thereby matching the category code corresponding to each piece of text information;
and identifying the category code corresponding to each audio clip one by one according to the category codes of the text information.
As another preferred scheme, matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one, is specifically:
collecting a corpus, the corpus comprising a plurality of sentences;
combining each sentence with a keyword-based FIACS analysis method to obtain the category code corresponding to each sentence one by one;
taking each sentence and its corresponding code as input, and training a deep learning model to obtain a classifier model;
and inputting each piece of text information into the classifier model to obtain its category code, thereby identifying the category code corresponding to each audio clip one by one.
Preferably, the sound-pickup signal is audio data classified by voiceprint recognition;
the audio data classified by voiceprint recognition comprises teacher audio data, student audio data, or silence audio data.
Preferably, the category codes fall into three classes: teacher language, student language, and no effective language;
the teacher language includes: accepting feeling, praising or encouraging, accepting ideas, asking questions, lecturing, giving directions, and criticizing; the student language includes: response and initiation; the no-effective-language class is silence.
Correspondingly, an embodiment of the invention further provides a FIACS real-time analysis device, comprising:
an audio acquisition module, for acquiring sound-pickup signals in a teaching classroom;
an audio conversion module, for extracting all audio clips from the pickup signals and converting them into a plurality of pieces of text information by speech recognition, wherein one audio clip corresponds to one piece of text information;
a text classification module, for matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one;
and an audio splitting module, for splitting each audio clip into a plurality of first time segments of the same time interval and associating each split first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within the period.
Preferably, the text classification module comprises a first collecting unit, a first word-segmentation unit, a first matching unit, and a first identifying unit;
the first collecting unit is configured to collect all vocabulary related to the FIACS category codes to form a static word list, the words in the static word list being classified by category code;
the first word-segmentation unit is configured to segment each piece of text information to obtain a text set in one-to-one correspondence with it, each text set comprising a plurality of phrases;
the first matching unit is configured to match the phrases in each text set against the words in the static word list to obtain the category code corresponding to each text set, thereby matching the category code corresponding to each piece of text information;
the first identifying unit is configured to identify the category code corresponding to each audio clip one by one according to the category codes of the text information.
Preferably, the text classification module comprises a second collecting unit, a second analyzing unit, a second training unit, and a second identifying unit;
the second collecting unit is configured to collect a corpus, the corpus comprising a plurality of sentences;
the second analyzing unit is configured to combine each sentence with a keyword-based FIACS analysis method to obtain the category code corresponding to each sentence one by one;
the second training unit is configured to train a deep learning model with each sentence and its corresponding code as input, to obtain a classifier model;
the second identifying unit is configured to input each piece of text information into the classifier model to obtain its category code, thereby identifying the category code corresponding to each audio clip one by one.
The embodiments of the invention have the following beneficial effects:
The FIACS real-time analysis method and device first acquire sound-pickup signals in a teaching classroom; extract every audio clip from the pickup signals and convert each clip into a piece of text information by speech recognition, one audio clip corresponding to one piece of text information; match each piece of text information against a preset text analysis method to obtain its category code, thereby identifying the category code of each audio clip one by one; and split each audio clip into a plurality of first time segments of the same time interval, associating each split first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within the period. Compared with the prior manual observation-and-recording method, the technical scheme of the invention does not require manually observing and recording each time segment of a teaching classroom; instead, it converts audio to text information in real time and matches the text with the corresponding category code, achieving automatic labelling of time segments and effectively improving working efficiency.
Drawings
Fig. 1 is a schematic flow chart of the first embodiment of the FIACS real-time analysis method provided by the invention;
fig. 2 is a schematic structural diagram of the second embodiment of the FIACS real-time analysis device provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment:
referring to fig. 1, a schematic flow chart of an embodiment of a FlACS real-time analysis method provided by the present invention is shown. As shown in fig. 1, the construction method includes steps 101 to 104, and each step is as follows:
step 101: and acquiring pickup signals in a teaching classroom.
In the present embodiment, the sound pickup signal is audio data classified by a voiceprint recognition technique; the audio data classified by the voiceprint recognition technology comprises teacher audio data, student audio data or silence audio data. The audio data classified by the voiceprint recognition technology can effectively prevent teacher audio data, student audio data and silence audio data from being divided into the same time segment, so that the class coding of each time segment is more accurate.
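True voiceprint classification requires speaker-embedding models trained on enrolled voices and is beyond a short sketch. The example below illustrates only the simplest part of the split — separating silence audio from speech audio by frame energy; the threshold and sample values are invented, and teacher/student discrimination is explicitly not attempted here:

```python
def rms(frame):
    """Root-mean-square energy of one audio frame (a list of samples
    normalized to the range [-1, 1])."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def label_frames(frames, silence_threshold=0.01):
    """Tag each frame 'silence' or 'speech' by energy. Distinguishing
    teacher from student speech would additionally need a voiceprint
    (speaker-embedding) model, which this sketch does not include."""
    return ['silence' if rms(f) < silence_threshold else 'speech'
            for f in frames]

quiet = [0.0005, -0.0003, 0.0002, -0.0004]   # near-silent frame
loud = [0.2, -0.3, 0.25, -0.15]              # speech-level frame
labels = label_frames([quiet, loud])
```

The silence frames found this way correspond to the "silence audio data" class, which later maps to category code 10.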
Step 102: extracting all audio clips from the pickup signals, and converting the audio clips into a plurality of pieces of text information by speech recognition, wherein one audio clip corresponds to one piece of text information.
Step 103: matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one.
In one implementation of this embodiment, step 103 is specifically: collecting all vocabulary related to the FIACS category codes to form a static word list, the words in the static word list being classified by category code; segmenting each piece of text information to obtain a text set in one-to-one correspondence with it, each text set comprising a plurality of phrases; matching the phrases in each text set against the words in the static word list to obtain the category code corresponding to each text set, thereby matching the category code corresponding to each piece of text information; and identifying the category code corresponding to each audio clip one by one according to the category codes of the text information.
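The static-word-list matching can be sketched as follows. Real classroom text would be Chinese and segmented with a proper word segmenter; here whitespace splitting and a tiny English word list stand in, and every vocabulary entry is invented for illustration:

```python
# Static word list: vocabulary grouped by FIACS category code
# (illustrative entries; a production list would be far larger).
STATIC_WORD_LIST = {
    4: {"why", "how", "what", "explain"},           # teacher asks a question
    5: {"today", "chapter", "formula", "theorem"},  # lecturing
    8: {"answer", "because", "result"},             # student response
}

def classify_text(text, word_list=STATIC_WORD_LIST):
    """Match segmented phrases against the static word list and return
    the category code with the most hits (None if nothing matches)."""
    words = text.lower().split()   # stand-in for a real word segmenter
    scores = {code: sum(w in vocab for w in words)
              for code, vocab in word_list.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

code = classify_text("Why does this formula hold, explain how")
```

Each piece of text information gets one code this way, and the code is then carried over to the audio clip the text came from.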
In another implementation of this embodiment, step 103 is specifically: collecting a corpus, the corpus comprising a plurality of sentences; combining each sentence with a keyword-based FIACS analysis method or manual labelling to obtain the category code corresponding to each sentence one by one; taking each sentence and its corresponding code as input and training a deep learning model to obtain a classifier model; and inputting each piece of text information into the classifier model to obtain its category code, thereby identifying the category code corresponding to each audio clip one by one. The deep learning model is a neural network with an RNN, Transformer, LSTM, BERT, or CNN structure.
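Training a real RNN/LSTM/Transformer classifier is beyond a short sketch, so the example below substitutes a tiny bag-of-words centroid classifier to show the same train-then-predict flow; the sentences, codes, and the classifier itself are illustrative stand-ins, not the patented model:

```python
from collections import Counter, defaultdict

def train(sentences, codes):
    """'Train' a classifier by accumulating one bag-of-words centroid
    per category code. A real embodiment would fit an RNN/LSTM/
    Transformer here; the interface (sentences + codes in, model out)
    is the same."""
    centroids = defaultdict(Counter)
    for sent, code in zip(sentences, codes):
        centroids[code].update(sent.lower().split())
    return centroids

def predict(model, sentence):
    """Return the category code whose centroid shares the most word
    occurrences with the input sentence."""
    words = set(sentence.lower().split())
    return max(model, key=lambda c: sum(model[c][w] for w in words))

# Invented training corpus: each sentence labelled with a FIACS code.
sents = ["why is the sky blue", "open your textbook to chapter three",
         "because light scatters", "what is the answer"]
codes = [4, 6, 8, 4]
model = train(sents, codes)
pred = predict(model, "why is that the answer")
```

At inference time each piece of text information is fed to the model exactly as in the last step of the embodiment, and the predicted code is attached to the corresponding audio clip.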
In this embodiment, the category codes fall into three classes: teacher language, student language, and no effective language. The teacher language includes: accepting feeling, praising or encouraging, accepting ideas, asking questions, lecturing, giving directions, and criticizing; the student language includes: response and initiation; the no-effective-language class is silence.
Here, accepting feeling refers to acknowledging a student's emotion; praising or encouraging refers to language that praises or encourages students; accepting ideas refers to language that accepts a student's thoughts; asking questions refers to the teacher putting a question to a student; lecturing refers to the teacher presenting course content; giving directions refers to the teacher directing students and giving commands that students are expected to obey; criticizing refers to speech that criticizes students; response refers to a student responding to what the teacher says; initiation refers to a student actively opening a conversation to express his or her own ideas; and no effective language is silence.
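The category scheme above can be captured as a lookup table. The numbering below follows the standard ten Flanders categories (1–7 teacher talk, 8–9 student talk, 10 silence); this numbering is an assumption for illustration, since the text only assigns explicit numbers in its later matrix example:

```python
# The ten FIACS categories as described above, keyed by category code.
FIACS_CATEGORIES = {
    1: ("teacher", "accepts student feeling"),
    2: ("teacher", "praises or encourages"),
    3: ("teacher", "accepts student ideas"),
    4: ("teacher", "asks questions"),
    5: ("teacher", "lectures course content"),
    6: ("teacher", "gives directions"),
    7: ("teacher", "criticizes"),
    8: ("student", "response to the teacher"),
    9: ("student", "initiates communication"),
    10: ("none", "silence (no effective language)"),
}

def speaker_of(code):
    """Return which of the three top-level classes a code belongs to:
    'teacher', 'student', or 'none' (silence)."""
    return FIACS_CATEGORIES[code][0]
```

The three top-level classes in the table are exactly the teacher-language, student-language, and no-effective-language classes of the embodiment.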
Step 104: each audio is split into a plurality of first time segments with the same time interval, each split first time segment is associated with the corresponding audio category code, a plurality of second time segments with category codes are obtained, and therefore the system can calculate the style of a teaching classroom according to all the second time segments in a period.
In this embodiment, each audio clip is split into first time segments of 3 seconds; a remainder shorter than 3 seconds is either counted as a complete 3-second segment or discarded, according to the actual situation. Each split first time segment is then associated with the category code of its audio clip, giving a plurality of second time segments carrying category codes. All second time segments are arranged in time order and recorded into a matrix of 10 rows and 10 columns as follows. According to the filling rule, a code 10 is added before and after the classroom coding time sequence, and the category codes are then extracted in order, each code being used twice: once paired with the preceding code and once with the following code. In each pair, the first number gives the row and the second number gives the column. For example, if the category codes recorded in the classroom coding time sequence are 4, 4, 4, 10, 10, 10, 5, 5, 5, ..., the consecutive codes are first combined into pairs, e.g. (4, 4), (4, 4), (4, 10), (10, 10), (10, 10), (10, 5), (5, 5), ...; then identical pairs are counted, and the total frequency of each pair is filled into the 10 × 10 matrix. The rows and columns of the FIACS interaction analysis matrix carry the same meaning, defined by the category codes of the coding system; each cell of the matrix records the frequency of one pair of codes, i.e. of one classroom behavior transition. The classroom teaching situation is then inferred from the proportional relations among the classroom behavior frequencies in the matrix and from the distribution of the classroom behaviors within it.
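Following the pad-pair-and-tally rule described above, the 10 × 10 interaction matrix can be built with a few lines (a minimal sketch; the example sequence is the one from the description):

```python
def fiacs_matrix(codes):
    """Build the 10x10 FIACS interaction matrix: pad the code sequence
    with a 10 at each end, pair each code with its successor (so each
    code is used twice), and count each (row, column) pair. Rows and
    columns are indexed by category codes 1..10."""
    padded = [10] + list(codes) + [10]
    matrix = [[0] * 10 for _ in range(10)]
    for a, b in zip(padded, padded[1:]):
        matrix[a - 1][b - 1] += 1
    return matrix

# Sequence from the description: 4, 4, 4, 10, 10, 10, 5, 5, 5
m = fiacs_matrix([4, 4, 4, 10, 10, 10, 5, 5, 5])
```

Cell (4, 4) of the matrix, for example, holds the count of the pair (4, 4), i.e. two here, matching the pairing example in the text.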
In this embodiment, the style of the teaching classroom is calculated from the frequency of occurrence of each category-code pair using the formulas below, which yield a set of standard indicators; the style of the teaching classroom is then determined from the proportions of these indicators. The standard indicators include: teacher utterance percentage, student utterance percentage, silence percentage, teacher indirect influence ratio, teacher direct influence ratio, teacher response ratio, teacher question ratio, student initiation ratio, teacher instant response ratio, teacher instant question ratio, content cross region ratio, steady-state ratio, and student steady-state ratio.
In this embodiment, let cell(i, j) denote the element in row i and column j of the FIACS matrix, C_j = Σ_i cell(i, j) the sum of column j, R_i = Σ_j cell(i, j) the sum of row i, and N the sum of all cells. As the formula images of the original publication did not reproduce in this text, the expressions below follow the conventional Flanders definitions:
teacher utterance percentage = (C_1 + C_2 + ... + C_7) / N × 100%;
student utterance percentage = (C_8 + C_9) / N × 100%;
silence percentage = C_10 / N × 100%;
teacher indirect influence ratio = (C_1 + C_2 + C_3 + C_4) / (C_5 + C_6 + C_7);
teacher direct influence ratio = (C_5 + C_6 + C_7) / (C_1 + C_2 + C_3 + C_4);
teacher response ratio = (C_1 + C_2 + C_3) / (C_1 + C_2 + C_3 + C_6 + C_7) × 100%;
teacher question ratio = C_4 / (C_4 + C_5) × 100%;
student initiation ratio = C_9 / (C_8 + C_9) × 100%;
teacher instant response ratio = Σ_{i∈{8,9}} [cell(i,1) + cell(i,2) + cell(i,3)] / Σ_{i∈{8,9}} [cell(i,1) + cell(i,2) + cell(i,3) + cell(i,6) + cell(i,7)] × 100%;
teacher instant question ratio = Σ_{i∈{8,9}} cell(i,4) / Σ_{i∈{8,9}} [cell(i,4) + cell(i,5)] × 100%;
content cross region ratio = Σ_{i,j∈{4,5}} cell(i,j) / N × 100%;
steady-state ratio = Σ_i cell(i,i) / N × 100%;
student steady-state ratio = [cell(8,8) + cell(9,9)] / (C_8 + C_9) × 100%.
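With the matrix built as above (rows and columns indexed by category code 1–10), several of the standard indicators reduce to column and diagonal sums. A sketch using the conventional Flanders definitions, with invented matrix entries:

```python
def col_sum(m, j):
    """Sum of column j of the interaction matrix (j is the 1-based
    category code)."""
    return sum(row[j - 1] for row in m)

def fiacs_indicators(m):
    """A few standard Flanders indicators from a 10x10 interaction
    matrix: N is the grand total of all cells; categories 1-7 are
    teacher talk, 8-9 student talk, 10 silence."""
    N = sum(map(sum, m))
    teacher = sum(col_sum(m, j) for j in range(1, 8))
    student = col_sum(m, 8) + col_sum(m, 9)
    silence = col_sum(m, 10)
    diagonal = sum(m[i][i] for i in range(10))  # steady-state cells
    return {
        "teacher_talk_pct": 100 * teacher / N,
        "student_talk_pct": 100 * student / N,
        "silence_pct": 100 * silence / N,
        "steady_state_pct": 100 * diagonal / N,
    }

# Invented matrix: 6 lecture->lecture, 2 response->response,
# 2 silence->silence transitions.
m = [[0] * 10 for _ in range(10)]
m[4][4] = 6   # (5, 5): lecture -> lecture
m[7][7] = 2   # (8, 8): response -> response
m[9][9] = 2   # (10, 10): silence -> silence
ind = fiacs_indicators(m)
```

The remaining indicators (response, question, instant, and cross-region ratios) follow the same pattern with different column and cell selections.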
As can be seen from the above, the FIACS real-time analysis method provided by the embodiment of the invention first acquires sound-pickup signals in a teaching classroom; extracts all audio clips from the pickup signals and converts them into pieces of text information by speech recognition, one audio clip corresponding to one piece of text information; matches each piece of text information against a preset text analysis method to obtain its category code, thereby identifying the category code of each audio clip one by one; and splits each audio clip into a plurality of first time segments of the same time interval, associating each split first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within the period. Compared with the prior manual observation-and-recording method, the technical scheme of the invention does not require manually observing and recording each time segment of a teaching classroom; instead, it converts audio to text information in real time and matches the text with the corresponding category code, achieving automatic labelling of time segments and effectively improving working efficiency.
Second embodiment:
fig. 2 is a schematic structural diagram of a second embodiment of a FlACS real-time analysis apparatus according to the present invention. The device includes: the system comprises an audio acquisition module 201, an audio conversion module 202, a text classification module 203 and an audio splitting module 204.
The audio acquisition module 201 is configured to acquire sound-pickup signals in a teaching classroom;
the audio conversion module 202 is configured to extract all audio clips from the pickup signals and convert them into a plurality of pieces of text information by speech recognition, wherein one audio clip corresponds to one piece of text information;
the text classification module 203 is configured to match each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one;
the audio splitting module 204 is configured to split each audio clip into a plurality of first time segments of the same time interval and associate each split first time segment with the category code of its audio clip, obtaining a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within the period.
In this embodiment, the text classification module 203 includes a first collecting unit, a first word-segmentation unit, a first matching unit, and a first identifying unit. The first collecting unit is configured to collect all vocabulary related to the FIACS category codes to form a static word list, the words in the static word list being classified by category code; the first word-segmentation unit is configured to segment each piece of text information to obtain a text set in one-to-one correspondence with it, each text set comprising a plurality of phrases; the first matching unit is configured to match the phrases in each text set against the words in the static word list to obtain the category code corresponding to each text set, thereby matching the category code corresponding to each piece of text information; and the first identifying unit is configured to identify the category code corresponding to each audio clip one by one according to the category codes of the text information.
In this embodiment, the text classification module 203 includes a second collecting unit, a second analyzing unit, a second training unit, and a second identifying unit. The second collecting unit is configured to collect a corpus, the corpus comprising a plurality of sentences; the second analyzing unit is configured to combine each sentence with a keyword-based FIACS analysis method or manual labelling to obtain the category code corresponding to each sentence one by one; the second training unit is configured to train a deep learning model with each sentence and its corresponding code as input, to obtain a classifier model; and the second identifying unit is configured to input each piece of text information into the classifier model to obtain its category code, thereby identifying the category code corresponding to each audio clip one by one.
For a more detailed working principle and flow of this embodiment, reference may be made, without limitation, to the FIACS real-time analysis method of the first embodiment.
As can be seen from the above, the FIACS real-time analysis device provided by the embodiment of the invention does not require manually observing and recording each time segment of a teaching classroom; instead, it converts audio to text information in real time and matches the text with the corresponding category code, achieving automatic labelling of time segments and effectively improving working efficiency.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (8)

1. A FIACS real-time analysis method, characterized by comprising the following steps:
acquiring sound-pickup signals in a teaching classroom;
extracting all audio clips from the pickup signals, and converting the audio clips into a plurality of pieces of text information by speech recognition, wherein one audio clip corresponds to one piece of text information;
matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip one by one;
and splitting each audio clip into a plurality of first time segments of the same time interval, and associating each split first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the style of the teaching classroom from all the second time segments within the period.
2. The FIACS real-time analysis method according to claim 1, wherein matching each piece of text information with a corresponding category code by combining it with a preset text analysis method, thereby identifying the category code corresponding to each audio clip, is specifically:
collecting all vocabulary related to the FIACS category codes to form a static word list, the words in the static word list being classified by category code;
segmenting each piece of text information to obtain a text set in one-to-one correspondence with it, each text set comprising a plurality of phrases;
matching the phrases in each text set against the words in the static word list to obtain the category code corresponding to each text set, thereby matching the category code corresponding to each piece of text information;
and identifying the category code corresponding to each audio clip one by one according to the category codes of the text information.
3. The FlACS real-time analysis method according to claim 1, wherein matching each text message against the preset text analysis method to assign category codes, thereby identifying the category code of each audio clip, specifically comprises:
collecting a corpus, wherein the corpus comprises a plurality of sentences;
applying a keyword-based FlACS analysis method to each sentence to obtain the category code corresponding to each sentence;
training a deep learning model with each sentence and its category code as inputs to obtain a classifier model;
and inputting each text message into the classifier model to obtain its category code, thereby identifying the category code of each audio clip.
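Claim 3 labels a corpus with a keyword-based FlACS analysis and then trains a deep learning model on the (sentence, category code) pairs. As a dependency-free stand-in for the deep learning model, the sketch below fits a bag-of-words naive Bayes classifier to show the same train-then-classify flow; the tiny corpus and category codes are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_classifier(corpus):
    """corpus: list of (sentence, category_code). Returns classify(sentence)."""
    word_counts = defaultdict(Counter)  # code -> word frequency table
    code_counts = Counter()             # code -> number of training sentences
    vocab = set()
    for sentence, code in corpus:
        words = sentence.lower().split()
        word_counts[code].update(words)
        code_counts[code] += 1
        vocab.update(words)
    n_sentences = sum(code_counts.values())

    def classify(sentence):
        words = sentence.lower().split()
        best_code, best_lp = None, -math.inf
        for code in code_counts:
            lp = math.log(code_counts[code] / n_sentences)  # class prior
            total = sum(word_counts[code].values())
            for w in words:
                # Laplace smoothing keeps unseen words from zeroing a class
                lp += math.log((word_counts[code][w] + 1) / (total + len(vocab)))
            if lp > best_lp:
                best_code, best_lp = code, lp
        return best_code

    return classify
```

Swapping in an actual neural text classifier changes only the model behind `train_classifier`; the labeled-corpus-in, classifier-out interface the claim describes is unchanged.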
4. The FlACS real-time analysis method according to any one of claims 1 to 3, wherein the sound-pickup signals are audio data classified by voiceprint recognition;
the audio data classified by voiceprint recognition comprise teacher audio data, student audio data, or silence audio data.
5. The FlACS real-time analysis method according to any one of claims 1 to 3, wherein the category codes fall into three categories: teacher language, student language, and no valid language;
the teacher language comprises: expressing emotion, encouraging praise, adopting opinions, asking questions, teaching, instructing, and judging; the student language comprises: response and communication; the no-valid-language category is silence.
6. A FlACS real-time analysis device, characterized by comprising:
an audio acquisition module, configured to acquire sound-pickup signals in a teaching classroom;
an audio conversion module, configured to extract every audio clip from the sound-pickup signals and convert each audio clip into a text message by speech recognition, wherein each audio clip corresponds to one text message;
a text classification module, configured to match each text message against a preset text analysis method to assign it a corresponding category code, thereby identifying the category code of each audio clip;
and an audio splitting module, configured to split each audio clip into a plurality of first time segments of equal duration and associate each first time segment with the category code of its audio clip to obtain a plurality of second time segments carrying category codes, so that the system can calculate the teaching style of the classroom from all second time segments within the period.
7. The FlACS real-time analysis device according to claim 6, wherein the text classification module comprises a first collection unit, a first word segmentation unit, a first matching unit, and a first identification unit;
the first collection unit is configured to collect all vocabulary related to the FlACS category codes to form a static word list, wherein the entries in the static word list are classified by category code;
the first word segmentation unit is configured to perform word segmentation on each text message to obtain a text set corresponding to each text message, wherein each text set comprises a plurality of phrases;
the first matching unit is configured to match the phrases in each text set against the entries in the static word list to obtain the category code of each text set, thereby assigning a category code to each text message;
the first identification unit is configured to identify the category code of each audio clip according to the category code of its text message.
8. The FlACS real-time analysis device according to claim 6, wherein the text classification module comprises a second collection unit, a second analysis unit, a second training unit, and a second identification unit;
the second collection unit is configured to collect a corpus, wherein the corpus comprises a plurality of sentences;
the second analysis unit is configured to apply a keyword-based FlACS analysis method to each sentence to obtain the category code corresponding to each sentence;
the second training unit is configured to train a deep learning model with each sentence and its category code as inputs to obtain a classifier model;
the second identification unit is configured to input each text message into the classifier model and obtain its category code, thereby identifying the category code of each audio clip.
CN201910843037.3A 2019-09-06 2019-09-06 FlACS real-time analysis method and device Pending CN110598041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910843037.3A CN110598041A (en) 2019-09-06 2019-09-06 FlACS real-time analysis method and device

Publications (1)

Publication Number Publication Date
CN110598041A true CN110598041A (en) 2019-12-20

Family

ID=68858067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910843037.3A Pending CN110598041A (en) 2019-09-06 2019-09-06 FlACS real-time analysis method and device

Country Status (1)

Country Link
CN (1) CN110598041A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274401A (en) * 2020-01-20 2020-06-12 华中师范大学 Classroom utterance classification method and device based on multi-feature fusion
CN112669885A (en) * 2020-12-31 2021-04-16 咪咕文化科技有限公司 Audio editing method, electronic equipment and storage medium
CN113051426A (en) * 2021-03-18 2021-06-29 深圳市声扬科技有限公司 Audio information classification method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958433A (en) * 2017-12-11 2018-04-24 吉林大学 A kind of online education man-machine interaction method and system based on artificial intelligence
WO2018204696A1 (en) * 2017-05-03 2018-11-08 Tableau Software, Inc. Systems and methods of applying pragmatics principles for interaction with visual analytics
CN108874861A (en) * 2018-04-19 2018-11-23 华南师范大学 A kind of teaching big data Visualized Analysis System and method
CN109447863A (en) * 2018-10-23 2019-03-08 广州努比互联网科技有限公司 A kind of 4MAT real-time analysis method and system
JP2019061050A (en) * 2017-09-26 2019-04-18 カシオ計算機株式会社 Interaction apparatus, interaction method, and program
CN110136703A (en) * 2019-03-25 2019-08-16 视联动力信息技术股份有限公司 A kind of fuzzy answer method and view networked system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙晓梅 (Sun Xiaomei): "A Study of Teacher-Student Verbal Interaction in English Classes Based on Flanders Interaction Analysis", China Master's Theses Full-text Database, Social Sciences II *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20191220