CN112364152A - Response type learning assistance method, system and equipment - Google Patents

Response type learning assistance method, system and equipment

Info

Publication number
CN112364152A
CN112364152A (application CN202011241632.9A)
Authority
CN
China
Prior art keywords
learning
user
word
model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011241632.9A
Other languages
Chinese (zh)
Inventor
任皓 (Ren Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shushui Intelligent Technology Co ltd
Original Assignee
Shanghai Shushui Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shanghai Shushui Intelligent Technology Co ltd
Priority to CN202011241632.9A
Publication of CN112364152A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/335 - Filtering based on additional data, e.g. user or group profiles
    • G06F16/337 - Profile generation, learning or modification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31 - Indexing; Data structures therefor; Storage structures
    • G06F16/316 - Indexing structures
    • G06F16/319 - Inverted lists
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 - Clustering; Classification
    • G06F16/353 - Clustering; Classification into predefined classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application discloses a responsive learning assistance method, system and device. The method comprises: acquiring a user's multi-dimensional data on a learning object on a terminal device; performing label calculation on the multi-dimensional data to obtain multi-dimensional label values, performing word calculation based on the label values, and performing a single training round on the word calculation results to obtain a word model; performing a single training round on the word model's output for the words fed back by the user to obtain a sentence model; acquiring learning materials from a repository and extracting objects to be pushed from them; and calculating, with the sentence model, the learning difficulty value of each object to be pushed for the user and pushing the objects to the user according to their learning difficulty values. In this way, the user's personalized information can be obtained for modeling without interfering with the user, and learning materials can be recommended to the user dynamically, automatically and in real time.

Description

Response type learning assistance method, system and equipment
Technical Field
The present application relates to the field of computers, and in particular, to a method, system, and device for responsive learning assistance.
Background
When users learn a language independently with conventional assistance products, the product acquires user information by collecting the user's subjective answers through question-and-answer interaction and by objective answer tests, both of which interfere with the user's normal learning process. The user information consists of concrete scores and knowledge points; it cannot be observed directly and must be explicitly fed back by the user. After the server-side user knowledge model is established, the process of feeding information back to the user is separate from each individual learning session, so the learning materials are not updated in time. Because the evaluation of learning results is not quantitative, the server cannot learn the characteristics of the learner or of the examination model, and cannot accumulate the data an AI teacher requires. When self-compiled exercises are used for learning and training, the distribution of linguistic feature elements in the learning materials is inconsistent with the element distribution of real examinations. With articles organized directly from raw material, the learning curve easily becomes too steep, so the learner gives up, or too gentle, so the learner loses interest. Automatic slow-playback technology reduces the playback speed of the whole sentence but also distorts the pronunciation of individual words, so listening training cannot achieve its intended effect.
Disclosure of Invention
An object of the present application is to provide a responsive learning assistance method and apparatus, which solve the prior-art problems that acquiring feedback on the user's learning interferes with the user's normal learning process, that the information is not acquired directly, and that the learning materials are not updated in time.
According to one aspect of the present application, there is provided a method of responsive learning assistance, the method comprising:
acquiring a user's multi-dimensional data on a learning object on a terminal device;
performing label calculation on the multi-dimensional data to obtain multi-dimensional label values, performing word calculation based on the multi-dimensional label values, and performing a single training round on the word calculation results to obtain a word model;
performing a single training round on the word model's output for the words fed back by the user to obtain a sentence model;
acquiring learning materials from a repository, and extracting objects to be pushed from the learning materials;
and calculating, with the sentence model, the learning difficulty value of each object to be pushed for the user, and pushing the objects to be pushed to the user according to the learning difficulty values.
Further, the method comprises:
acquiring the user's learning feedback on the received push objects, calculating the user's learning speed index, standard-reaching feedback count index and average learning time index from the learning feedback, and training a teaching training model on these indexes.
Further, the multi-dimensional label values include a word dimension value, a grammar dimension value, a pronunciation dimension value, a hearing dimension value, and a thinking dimension value.
Further, performing word calculation based on the multi-dimensional label values and performing a single training round on the word calculation results to obtain a word model includes:
taking the words fed back by the user and the calculation result of the two-dimensional vector composed of the pronunciation dimension value and the hearing dimension value as the word training set;
training a first preset neural network model with the word training set to obtain a word model to be corrected;
and correcting the word model to be corrected with a first loss function to obtain the word model.
Further, obtaining a sentence model by a single training round on the word model's output for the words fed back by the user includes:
obtaining the word model's output for the words fed back by the user, and determining the user's current thinking dimension and grammar dimension;
taking the sentence objects fed back by the user and the calculation result of the two-dimensional vector composed of the current thinking dimension and the grammar dimension as the sentence training set;
training a second preset neural network model with the sentence training set to obtain a sentence model to be corrected;
and correcting the sentence model to be corrected with a second loss function to obtain the sentence model.
Further, the method comprises:
determining a word matrix corresponding to each word, and taking the word matrix as the input layer of the first preset neural network model;
determining the convolutional layer and the pooling layer of the first preset neural network model;
and taking the word's difficulty value for the user as the output layer of the first preset neural network model.
Further, the method comprises:
determining, for each sentence, a difficulty value matrix formed from the difficulty values of its words, a grammar dimension matrix and a thinking dimension matrix;
and determining a matrix to be input from the difficulty value matrix, the grammar dimension matrix and the thinking dimension matrix, and taking the matrix to be input as the input layer of a second preset neural network model.
Further, pushing the object to be pushed to the user according to the learning difficulty value includes:
arranging all objects to be pushed in ascending order of learning difficulty value to generate an inverted index;
and sequentially pushing the objects to be pushed to the user according to the inverted index.
According to another aspect of the present application, there is also provided a system for responsive learning assistance, the system comprising: a learning interaction module, an individual model training module, a learning material generation module and a material pushing module,
the learning interaction module is used for acquiring a user's multi-dimensional data on a learning object on a terminal device;
the individual model training module is used for performing label calculation on the multi-dimensional data to obtain multi-dimensional label values, performing word calculation based on the label values, performing a single training round on the word calculation results to obtain a word model, and performing a single training round on the word model's output for the words fed back by the user to obtain a sentence model;
the learning material generation module is used for acquiring learning materials from a repository, extracting objects to be pushed from the learning materials, and calculating, with the sentence model, the learning difficulty value of each object to be pushed for the user;
and the material pushing module is used for pushing the objects to be pushed to the user according to the learning difficulty values.
Further, the system comprises: a teaching training module for acquiring the user's learning feedback on the received push objects, calculating the user's learning speed index, standard-reaching feedback count index and average learning time index from the learning feedback, and training a teaching training model on these indexes.
Further, the system comprises: a process driving module for triggering the operations of the individual model training module, the learning material generation module and the material pushing module based on a data stream formed from data items generated from the user's multi-dimensional data on the learning object on the terminal device.
According to yet another aspect of the present application, there is also provided an apparatus for responsive learning assistance, the apparatus comprising:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the method as previously described.
According to yet another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the method as described above.
Compared with the prior art, the present application acquires a user's multi-dimensional data on a learning object on a terminal device; performs label calculation on the multi-dimensional data to obtain multi-dimensional label values, performs word calculation based on the label values, and performs a single training round on the word calculation results to obtain a word model; performs a single training round on the word model's output for the words fed back by the user to obtain a sentence model; acquires learning materials from a repository and extracts objects to be pushed from them; and calculates, with the sentence model, the learning difficulty value of each object to be pushed for the user and pushes the objects to the user according to their learning difficulty values. In this way, the user's personalized information can be obtained for modeling without interfering with the user, and learning materials can be recommended to the user dynamically, automatically and in real time.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method of responsive learning assistance provided in accordance with an aspect of the subject application;
FIG. 2 is a scene diagram illustrating learning interactions of a user in a specific application scenario of the present application;
FIG. 3 illustrates a schematic structural diagram of a responsive learning-aided system provided in accordance with another aspect of the subject application;
fig. 4 shows a schematic diagram of a framework of a responsive personalized learning assistance system in an embodiment of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The Memory may include volatile Memory in a computer readable medium, Random Access Memory (RAM), and/or nonvolatile Memory such as Read Only Memory (ROM) or flash Memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-Change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 shows a schematic flow diagram of a method of responsive learning assistance according to an aspect of the present application; the method comprises steps S11 to S15.
In step S11, a user's multi-dimensional data on a learning object on a terminal device is acquired. Here, the multi-dimensional data are the user's feedback on the learning object and comprise the user's data and feedback information on language thinking, grammatical structure, word pronunciation and word meaning; the learning object is an object for learning one or more languages, for example English. When the user studies a learning object on the terminal device, sliding gestures and taps on regions or on words can be used as needed, so that an English sentence of the learning object is played aloud, replayed slowly with enlarged gaps between words, and shown with its English word face and Chinese meaning. In this way, data such as the user's feedback, waiting durations and operation counts on the four dimensions of thinking, grammatical structure, word pronunciation and word meaning are obtained without interfering with the user's learning. As shown in Fig. 2, a scene diagram of user interaction in a specific application scenario, learning material composed of sentences is displayed on the screen of the terminal device and played automatically: after the user slides a sentence upward into place, its English audio is played automatically, and the dwell time T before the next upward slide is taken as the user's English-thinking dimension value; when the user taps the gap region, the display mode changes and the sentence is replayed slowly, and this operation is collected to represent the user's grammatical-structure dimension value for the sentence; when the user taps to enlarge the word gaps, the English word face is displayed, and this operation is collected to represent the user's pronunciation dimension value for the word; and when the user taps an English word, its Chinese meaning is displayed, and this tap is collected to represent the user's word-meaning dimension value for that word.
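For illustration only, the sketch below shows one way the per-sentence feedback record [T, G, P, M] described above could be represented on the client side; the class and field names (SentenceFeedback, dwell_time_t and so on) are hypothetical and are not defined by the application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WordFeedback:
    """Per-word feedback collected without interrupting the user."""
    word_id: int       # ID of the word in the word bank
    p_clicks: int = 0  # taps that revealed the English word face (pronunciation dimension P)
    m_clicks: int = 0  # taps that revealed the Chinese meaning (word-meaning dimension M)

@dataclass
class SentenceFeedback:
    """Feedback record [T, G, P, M] for one sentence of the learning object."""
    sentence_id: int
    dwell_time_t: float           # seconds before the next upward slide (thinking dimension T)
    slow_replay_g: int = 0        # taps on the gap region triggering slow replay (grammar dimension G)
    words: List[WordFeedback] = field(default_factory=list)

# Example: one sentence the user dwelt on for 6.5 s, replayed slowly once,
# and for which the meaning of word 67 was looked up.
record = SentenceFeedback(
    sentence_id=42,
    dwell_time_t=6.5,
    slow_replay_g=1,
    words=[WordFeedback(word_id=67, m_clicks=1)],
)
print(record)
```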
In step S12, label calculation is performed on the multi-dimensional data to obtain multi-dimensional label values, word calculation is performed based on the label values, and a single training round is performed on the word calculation results to obtain a word model. Label calculation uses the collected user feedback: after the feedback is returned to the server, the corresponding original sentence is retrieved for calculation, and the individual model, which comprises a word model and a sentence model, is trained in single rounds with five labels: word, grammar, pronunciation, hearing and thinking. The multi-dimensional label values include a word dimension value W, a grammar dimension value G, a pronunciation dimension value P, a hearing dimension value M and a thinking dimension value T. Specifically, the user feedback result [T, G, P, M] represents real feedback data on the user's mastery of the sentence and of each word; the sentence object, for example [23, 45, 24, 67, 99, 208], is obtained from the database, where each ID identifies a word or punctuation mark in the word bank. An individual model is created for the user, and inputting a sentence into the individual model yields the user's mastery of that sentence, which is determined by the word dimension W, grammar dimension G, pronunciation dimension P, hearing dimension M and thinking dimension T. The user's mastery of each word is calculated from the multi-dimensional label values, and the neural network model is trained once on the calculation results to obtain the word model, which yields each word's difficulty value for the user.
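The application does not give concrete formulas for the label calculation, so the following sketch is purely illustrative: it maps one sentence's raw feedback [T, G, P, M] to normalized label values in [0, 1], with the cap values and the normalization itself chosen as assumptions.

```python
def label_values(t_seconds: float, g_clicks: int, p_clicks: int, m_clicks: int,
                 t_cap: float = 30.0, click_cap: int = 5) -> dict:
    """Map raw feedback [T, G, P, M] to normalized label values (illustrative only)."""
    clip = lambda x: max(0.0, min(1.0, x))
    return {
        "thinking": clip(t_seconds / t_cap),    # longer dwell time -> higher thinking load
        "grammar": clip(g_clicks / click_cap),  # more slow replays -> weaker grammar grasp
        "pronunciation": clip(p_clicks / click_cap),
        "meaning": clip(m_clicks / click_cap),
    }

print(label_values(t_seconds=6.5, g_clicks=1, p_clicks=0, m_clicks=1))
```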
In step S13, a sentence model is obtained by a single training round on the word model's output for the words fed back by the user. Each sentence is composed of words: the word model is used to obtain a difficulty matrix for the words fed back by the user, and the sentence neural network model is trained once on this difficulty matrix and the user's feedback to obtain the sentence model.
In step S14, learning materials are acquired from the repository and objects to be pushed are extracted from them. All learning materials are taken out of the material preparation repository and split into units of natural sentences, and the objects to be pushed, such as the extracted sentences or the words within them, are extracted.
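A minimal sketch of the splitting step, assuming the repository returns plain text; the regex-based splitter and the function name are illustrative rather than the application's actual implementation.

```python
import re
from typing import List

def split_into_sentences(material: str) -> List[str]:
    """Split a learning material into natural-sentence units (objects to be pushed)."""
    # Split on sentence-final punctuation followed by whitespace; the punctuation is kept.
    parts = re.split(r"(?<=[.!?])\s+", material.strip())
    return [p for p in parts if p]

text = "Learning never exhausts the mind. It only ignites it! Does practice help?"
for sentence in split_into_sentences(text):
    print(sentence)
```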
In step S15, the sentence model is used to calculate the learning difficulty value of each object to be pushed for the user, and the objects to be pushed are pushed to the user according to their learning difficulty values. To provide the user with a smooth learning experience, the sentence model computes each object's learning difficulty for the user, so that the objects can be pushed in order of difficulty.
In an embodiment of the present application, the method includes: acquiring the user's learning feedback on the received push objects, calculating the user's learning speed index, standard-reaching feedback count index and average learning time index from the learning feedback, and training a teaching training model on these indexes. Here, a teaching training model, i.e. an AI teacher model, can be trained. Taking English learning as an example, the user's learning feedback on the received sentences is obtained from the database, and the user's records on three indexes are calculated from it: the learning speed index is the degree of change of the individual model's calculated value for the learning material divided by the number of training rounds; the standard-reaching feedback count index is the number of training rounds the individual model needs for its calculated value for the learning material to reach a given value; and the average learning time is the duration of the user's learning. A neural network model is trained on a large number of user learning trajectories, where a trajectory consists of the easy-to-difficult data pushed to the user and the real learning results (the three indexes). The AI teacher model is used to calculate the value of a sentence, i.e. a measure of how much the learner's ability improves after learning it; the value of a sentence represents its weight and importance in reaching the learning goal. The concrete calculation process is as follows: the AI teacher model is trained on the actually recorded learning trajectories of all learners with an optimization objective built from the indexes, namely improving the learning speed, shortening the average learning time and reducing the number of feedbacks; during training, a loss function calculates the model's loss value, which is used to optimize the model parameters; and the optimization uses first-order optimizers, including gradient descent (GD), stochastic gradient descent (SGD), batch gradient descent (BGD), Adam and the like, which compute first derivatives with respect to the parameters, the values of these derivatives being the fine-tuning adjustments applied to the model parameters.
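To make the three indexes concrete, the sketch below computes them from hypothetical records, under the stated readings of the text: learning speed as the change of the individual model's calculated value per training round, the standard-reaching count as the number of rounds needed to reach a threshold, and the average learning time as the mean session duration. The record format, the threshold and the function names are assumptions.

```python
from typing import List, Optional

def learning_speed_index(model_values: List[float]) -> float:
    """Change of the individual model's calculated value per training round."""
    rounds = len(model_values) - 1
    return (model_values[-1] - model_values[0]) / rounds if rounds > 0 else 0.0

def standard_reaching_count(model_values: List[float], threshold: float) -> Optional[int]:
    """Number of training rounds needed for the calculated value to reach the threshold."""
    for i, v in enumerate(model_values):
        if v >= threshold:
            return i
    return None  # threshold not reached yet

def average_learning_time(session_durations: List[float]) -> float:
    """Average duration (seconds) of the user's learning sessions."""
    return sum(session_durations) / len(session_durations) if session_durations else 0.0

values = [0.20, 0.35, 0.48, 0.61, 0.72]   # individual model's value for the material, per round
sessions = [300.0, 420.0, 360.0]          # per-session learning time in seconds
print(learning_speed_index(values))                    # 0.13 per round
print(standard_reaching_count(values, threshold=0.6))  # 3
print(average_learning_time(sessions))                 # 360.0
```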
In an embodiment of the present application, in step S12, a word training set is formed from the words fed back by the user and the calculation result of the two-dimensional vector composed of the pronunciation dimension value and the hearing dimension value; a first preset neural network model is trained with the word training set to obtain a word model to be corrected; and the word model to be corrected is corrected with a first loss function to obtain the word model. The word fed back by the user and the calculation result of the two-dimensional vector [P, M] representing familiarity are taken as the label (the real result) to form the training set; the first preset neural network model is trained with this set, and the first loss function corrects the model parameters during training, yielding the word model. The first loss function describes the deviation between the word model's estimate of the user's mastery of a word and the user's real mastery; it may be cross entropy, hinge loss, squared error, absolute error, exponential error or a similar algorithm. After the word model has been trained to a certain degree, the calculated loss tends to 0, indicating that the model matches the user's actual knowledge well; since the user's knowledge keeps changing, the model must be trained continuously. To reduce the amount of calculation, a training round may be performed after feedback results for a batch of words (on the order of 500) have been accumulated.
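A rough PyTorch sketch of a single training round corrected by a loss function, as described above; the one-hot word encoding, the placeholder network and the use of cross entropy as the first loss function are assumptions chosen for illustration.

```python
import string
import torch
import torch.nn as nn

MAX_LEN = 16  # pad/truncate words to a fixed length for this placeholder model

def encode_word(word: str) -> torch.Tensor:
    """Encode a word as a 26 x MAX_LEN 0/1 matrix (rows = letters a-z)."""
    m = torch.zeros(26, MAX_LEN)
    for j, ch in enumerate(word.lower()[:MAX_LEN]):
        if ch in string.ascii_lowercase:
            m[ord(ch) - ord("a"), j] = 1.0
    return m

# Placeholder "first preset neural network model": flatten + linear to 10 difficulty classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(26 * MAX_LEN, 10))
loss_fn = nn.CrossEntropyLoss()   # stands in for the "first loss function"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_once(words, labels):
    """One training round on an accumulated batch of (word, difficulty label from [P, M]) pairs."""
    x = torch.stack([encode_word(w) for w in words])
    y = torch.tensor(labels)              # difficulty classes 0..9 derived from [P, M]
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)           # deviation between predicted and real mastery
    loss.backward()                       # gradients give the fine-tuning values
    optimizer.step()
    return loss.item()

print(train_once(["apple", "banana", "difficulty"], [2, 3, 7]))
```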
In view of the above embodiment, the method includes: determining a word matrix corresponding to each word, and taking the word matrix as the input layer of the first preset neural network model; determining the convolutional layer and the pooling layer of the first preset neural network model; and taking the word's difficulty value for the user as the output layer of the first preset neural network model. Here, the structure of the first preset neural network model used to train the word model satisfies the following conditions: each word is composed of letters and can be encoded as a 26 × N matrix of 0s and 1s (N being the number of letters in the word), which serves as the input layer of the neural network; the output layer of the neural network has 10 values, 0 through 9, representing the word's difficulty value for the user; and the inner layers of the neural network are convolutional layers and pooling layers.
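One possible realization of the described structure (a 26 × N letter matrix as input, convolutional and pooling layers inside, and a 10-way difficulty output) is sketched below in PyTorch; the channel count, kernel size and the use of adaptive pooling to handle the variable word length N are assumptions not fixed by the application.

```python
import torch
import torch.nn as nn

class WordDifficultyCNN(nn.Module):
    """Maps a 26 x N letter matrix to 10 difficulty scores (0-9) for the user."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=26, out_channels=32, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)   # pooling layer; collapses the variable length N
        self.out = nn.Linear(32, 10)          # output layer: difficulty values 0..9

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 26, N) 0/1 matrices, one column per letter of the word
        h = torch.relu(self.conv(x))
        h = self.pool(h).squeeze(-1)
        return self.out(h)

model = WordDifficultyCNN()
word = torch.zeros(1, 26, 5)   # a 5-letter word, one-hot per column
logits = model(word)
print(logits.argmax(dim=1))    # predicted difficulty class for the user
```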
In an embodiment of the present application, in step S13, the word model's output for the words fed back by the user is obtained, and the user's current thinking dimension and grammar dimension are determined; the sentence objects fed back by the user and the calculation result of the two-dimensional vector composed of the current thinking dimension and grammar dimension are taken as the sentence training set; a second preset neural network model is trained with the sentence training set to obtain a sentence model to be corrected; and the sentence model to be corrected is corrected with a second loss function to obtain the sentence model. Here, the sentence objects fed back by the user and the calculation result of the two-dimensional vector [T, G] representing the user's familiarity are taken as the labels (the real results) to build the training set. The second preset neural network model is trained with this set and corrected by the second loss function during training; the second loss function may be the same as or different from the first, and its result is used to correct the model parameters in reverse, yielding a sentence model that represents the user's actual knowledge.
In view of the above embodiment, the method includes: determining, for each sentence, a difficulty value matrix formed from the difficulty values of its words, a grammar dimension matrix and a thinking dimension matrix; and determining a matrix to be input from the difficulty value matrix, the grammar dimension matrix and the thinking dimension matrix, and taking the matrix to be input as the input layer of the second preset neural network model. Here, each sentence is encoded as a 12 × N matrix of 0s and 1s built from the 10 × N difficulty matrix of its words (N being the number of words), the 10 × 1 grammar structure matrix G and the 10 × 1 thinking matrix T, and this matrix is used as the input layer of the second preset neural network model. The difficulty matrix of each word is obtained from the output of the word model; the grammar structure matrix G is a two-dimensional space vector distance calculated from the number of simple clauses in the sentence and the average word length after normalization; and the thinking matrix T is obtained from the proportion of representational words in the sentence. The output layer of the second preset neural network model has 10 values, 0 through 9, and the inner layers of the network are convolutional layers and pooling layers. To reduce the amount of calculation, feedback results for 100-500 sentences may be accumulated before each training round.
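The 12 × N input could be assembled as in the sketch below, under the assumption (not stated explicitly in the text) that the per-word difficulty distributions form rows 1-10 and that the grammar value G and thinking value T are each replicated across the N columns as rows 11 and 12.

```python
import numpy as np

def sentence_input_matrix(word_difficulties: np.ndarray, g_value: float, t_value: float) -> np.ndarray:
    """Build the 12 x N input matrix for the second preset neural network model.

    word_difficulties: 10 x N matrix with one difficulty distribution (from the word model) per word.
    g_value: normalized grammar-structure value of the sentence (assumed replicated over N columns).
    t_value: thinking value, e.g. the proportion of representational words (assumed replicated).
    """
    n_words = word_difficulties.shape[1]
    g_row = np.full((1, n_words), g_value)
    t_row = np.full((1, n_words), t_value)
    return np.vstack([word_difficulties, g_row, t_row])   # shape (12, N)

# Example: a 6-word sentence with a one-hot difficulty class per word.
rng = np.random.default_rng(0)
n = 6
difficulties = np.zeros((10, n))
difficulties[rng.integers(0, 10, size=n), np.arange(n)] = 1.0
x = sentence_input_matrix(difficulties, g_value=0.4, t_value=0.25)
print(x.shape)   # (12, 6)
```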
In an embodiment of the present application, in step S15, all objects to be pushed are arranged in ascending order of learning difficulty value to generate an inverted index, and the objects to be pushed are then pushed to the user in sequence according to the inverted index. Here, the objects to be pushed are learning objects such as sentences or words. Taking sentences as an example, the learning difficulty of every sentence for the user is calculated with the sentence model and stored in the database, and all records are sorted from low to high learning difficulty, which yields the inverted index. The parameters of the sentence model change as the user continues to use the system; after such a change the sentences must be recalculated and the inverted index rebuilt. Once the index has been built, the learning materials are pushed to the user in sequence from easy to difficult.
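A minimal sketch of the ascending-difficulty inverted index and the sequential pushing; an in-memory dictionary stands in for the database table, and the function names are illustrative.

```python
from typing import Dict, Iterator, List, Tuple

def build_inverted_index(difficulty_by_sentence: Dict[int, float]) -> List[Tuple[int, float]]:
    """Arrange all objects to be pushed in ascending order of learning difficulty."""
    return sorted(difficulty_by_sentence.items(), key=lambda item: item[1])

def push_in_order(index: List[Tuple[int, float]]) -> Iterator[int]:
    """Yield sentence IDs from easy to difficult, ready to be pushed to the client."""
    for sentence_id, _difficulty in index:
        yield sentence_id

difficulties = {101: 0.72, 102: 0.15, 103: 0.43}   # sentence_id -> difficulty from the sentence model
index = build_inverted_index(difficulties)
print(list(push_in_order(index)))   # [102, 103, 101]
```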
Fig. 3 shows a schematic structural diagram of a responsive learning assistance system according to another aspect of the present application. The system comprises a learning interaction module 11, an individual model training module 12, a learning material generation module 13 and a material pushing module 14. The learning interaction module 11 is used for acquiring a user's multi-dimensional data on a learning object on a terminal device; the individual model training module 12 is configured to perform label calculation on the multi-dimensional data to obtain multi-dimensional label values, perform word calculation based on the label values, perform a single training round on the word calculation results to obtain a word model, and perform a single training round on the word model's output for the words fed back by the user to obtain a sentence model; the learning material generation module 13 is configured to acquire learning materials from the repository, extract objects to be pushed from them, and calculate, with the sentence model, the learning difficulty value of each object to be pushed for the user; and the material pushing module 14 is configured to push the objects to be pushed to the user according to their learning difficulty values. It should be noted that the content executed by the learning interaction module 11, the individual model training module 12, the learning material generation module 13 and the material pushing module 14 is the same as or corresponds to the content of steps S11 to S15 above and, for brevity, is not repeated here. It should also be noted that the material pushing module pushes updated learning content to the client for the user to study: when the inverted index of the learning materials is updated, a sentence of higher difficulty among the currently difficult sentences needs to be pushed to the client to replace the originally pushed content, thereby providing the user with a smooth learning experience.
In one embodiment of the present application, the system comprises a teaching training module for acquiring the user's learning feedback on the received push objects, calculating the user's learning speed index, standard-reaching feedback count index and average learning time index from the learning feedback, and training a teaching training model on these indexes. Here, as shown in the system framework diagram of Fig. 4, the teaching training module is the AI teacher training module; it calculates the value of a sentence and thereby provides teaching assistance to the user. The concrete process is as follows: the AI teacher model is trained on the actually recorded learning trajectories of all learners with an optimization objective built from the indexes, namely improving the learning speed, shortening the average learning time and reducing the number of feedbacks; during training, a loss function calculates the model's loss value, which is used to optimize the model parameters; and the optimization uses first-order optimizers, including gradient descent (GD), stochastic gradient descent (SGD), batch gradient descent (BGD), Adam and the like, which compute first derivatives with respect to the parameters, the values of these derivatives being the fine-tuning adjustments applied to the model parameters. The individual model yields the user's difficulty ordering of words and sentences, and the AI teacher model yields the value of a word or sentence for a given examination; when materials are pushed to the user, the recommendation value of a word or sentence for a given examination and a given student can be calculated as the ratio of the two, i.e. value to difficulty.
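The value-to-difficulty screening mentioned above could look roughly like the following sketch; the record format and the selection of the top-k candidates are assumptions added for illustration.

```python
from typing import Dict, List

def recommendation_values(value_by_id: Dict[int, float],
                          difficulty_by_id: Dict[int, float]) -> Dict[int, float]:
    """Recommendation value = sentence value (AI teacher model) / learning difficulty (individual model)."""
    return {
        obj_id: value_by_id[obj_id] / max(difficulty_by_id[obj_id], 1e-6)
        for obj_id in value_by_id
        if obj_id in difficulty_by_id
    }

def top_candidates(scores: Dict[int, float], k: int = 3) -> List[int]:
    """Pick the k objects with the highest recommendation value for pushing."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

values = {101: 0.9, 102: 0.4, 103: 0.8}          # value for the target examination
difficulties = {101: 0.72, 102: 0.15, 103: 0.43}  # difficulty predicted by the individual model
scores = recommendation_values(values, difficulties)
print(top_candidates(scores, k=2))
```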
In connection with the above embodiment, the system comprises a process driving module for triggering the operations of the individual model training module, the learning material generation module and the material pushing module based on a data stream formed from data items generated from the user's multi-dimensional data on the learning object on the terminal device. Continuing with Fig. 4, the system aggregates the operation data returned by the user into a stream composed of data items, and the process driving module triggers the operations of the individual model calculation module, the material preparation module, the learning material generation module and the material pushing module. The material preparation module uses the content of language materials (for example English materials) as source material, splits it into units of natural sentences, and stores the split material for the other modules. The operation data are the data [T, G, P, M] generated by the user's feedback on each sentence, and the return of this data stream is the trigger point for the operation of the whole system, driving the execution of the system's state changes.
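A toy illustration of the process-driving idea: each returned [T, G, P, M] data item is placed on a stream (a queue here), and its arrival triggers the downstream modules in order. The callback names are hypothetical, and a real deployment would run these modules as separate services rather than in-process functions.

```python
import queue
from typing import Callable, List, Tuple

FeedbackItem = Tuple[int, float, int, int, int]   # (sentence_id, T, G, P, M)

def process_driver(stream: "queue.Queue[FeedbackItem]",
                   modules: List[Callable[[FeedbackItem], None]]) -> None:
    """Drain the data stream; every arriving item triggers the modules in order."""
    while not stream.empty():
        item = stream.get()
        for trigger in modules:
            trigger(item)   # individual model training -> material generation -> pushing

def train_individual_model(item: FeedbackItem) -> None:
    print(f"train individual model on {item}")

def regenerate_materials(item: FeedbackItem) -> None:
    print(f"recompute difficulties / rebuild inverted index after {item}")

def push_materials(item: FeedbackItem) -> None:
    print(f"push updated materials triggered by sentence {item[0]}")

stream: "queue.Queue[FeedbackItem]" = queue.Queue()
stream.put((42, 6.5, 1, 0, 1))   # one [T, G, P, M] record for sentence 42
process_driver(stream, [train_individual_model, regenerate_materials, push_materials])
```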
With the above method and system, the training data and labels needed to train the individual models that represent a user's language ability, for example in English learning, can be collected without interfering with the user's listening, speaking, reading and writing practice: feedback from the learning process such as the response time after listening to a whole sentence, whether the word gaps were enlarged and the sentence listened to again, whether the English word face was checked and understood, and whether the Chinese meaning was checked and understood. The objects to be pushed from the learning materials are screened by the ratio of a sentence's value to the learning difficulty predicted by the individual model, where the learning difficulty metric measures elements such as words, grammar, sound and thinking. In this way, the problem of dynamically and automatically recommending learning materials to learners in real time and the problem of training the AI teacher are both solved.
In addition, a computer readable medium is provided, on which computer readable instructions are stored, the computer readable instructions being executable by a processor to implement the method of responsive learning assistance.
In an embodiment of the present application, there is also provided a device for responsive learning assistance, the device including:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the method as previously described.
For example, the computer readable instructions, when executed, cause the one or more processors to:
acquiring a user's multi-dimensional data on a learning object on a terminal device;
performing label calculation on the multi-dimensional data to obtain multi-dimensional label values, performing word calculation based on the multi-dimensional label values, and performing a single training round on the word calculation results to obtain a word model;
performing a single training round on the word model's output for the words fed back by the user to obtain a sentence model;
acquiring learning materials from a repository, and extracting objects to be pushed from the learning materials;
and calculating, with the sentence model, the learning difficulty value of each object to be pushed for the user, and pushing the objects to be pushed to the user according to the learning difficulty values.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (13)

1. A method of responsive learning assistance, the method comprising:
acquiring a user's multi-dimensional data on a learning object on a terminal device;
performing label calculation on the multi-dimensional data to obtain multi-dimensional label values, performing word calculation based on the multi-dimensional label values, and performing a single training round on the word calculation results to obtain a word model;
performing a single training round on the word model's output for the words fed back by the user to obtain a sentence model;
acquiring learning materials from a repository, and extracting objects to be pushed from the learning materials;
and calculating, with the sentence model, the learning difficulty value of each object to be pushed for the user, and pushing the objects to be pushed to the user according to the learning difficulty values.
2. The method according to claim 1, characterized in that it comprises:
acquiring the user's learning feedback on the received push objects, calculating the user's learning speed index, standard-reaching feedback count index and average learning time index from the learning feedback, and training a teaching training model on the indexes.
3. The method of claim 1, wherein the multi-dimensional label values include a word dimension value, a grammar dimension value, a pronunciation dimension value, a hearing dimension value, and a thinking dimension value.
4. The method of claim 3, wherein performing word calculation based on the multi-dimensional label values and performing a single training round on the word calculation results to obtain a word model comprises:
taking the words fed back by the user and the calculation result of the two-dimensional vector composed of the pronunciation dimension value and the hearing dimension value as the word training set;
training a first preset neural network model with the word training set to obtain a word model to be corrected;
and correcting the word model to be corrected with a first loss function to obtain the word model.
5. The method according to claim 3 or 4, wherein obtaining a sentence model by a single training round on the word model's output for the words fed back by the user comprises:
obtaining the word model's output for the words fed back by the user, and determining the user's current thinking dimension and grammar dimension;
taking the sentence objects fed back by the user and the calculation result of the two-dimensional vector composed of the current thinking dimension and the grammar dimension as the sentence training set;
training a second preset neural network model with the sentence training set to obtain a sentence model to be corrected;
and correcting the sentence model to be corrected with a second loss function to obtain the sentence model.
6. The method of claim 4, wherein the method comprises:
determining a word matrix corresponding to each word, and taking the word matrix as the input layer of the first preset neural network model;
determining the convolutional layer and the pooling layer of the first preset neural network model;
and taking the word's difficulty value for the user as the output layer of the first preset neural network model.
7. The method of claim 5, wherein the method comprises:
determining, for each sentence, a difficulty value matrix formed from the difficulty values of its words, a grammar dimension matrix and a thinking dimension matrix;
and determining a matrix to be input from the difficulty value matrix, the grammar dimension matrix and the thinking dimension matrix, and taking the matrix to be input as the input layer of a second preset neural network model.
8. The method according to claim 1, wherein pushing the object to be pushed to the user according to the learning difficulty value comprises:
arranging all objects to be pushed in ascending order of learning difficulty value to generate an inverted index;
and sequentially pushing the objects to be pushed to the user according to the inverted index.
9. A system for responsive learning assistance, the system comprising: a learning interaction module, an individual model training module, a learning material generation module and a material pushing module,
the learning interaction module is used for acquiring a user's multi-dimensional data on a learning object on a terminal device;
the individual model training module is used for performing label calculation on the multi-dimensional data to obtain multi-dimensional label values, performing word calculation based on the label values, performing a single training round on the word calculation results to obtain a word model, and performing a single training round on the word model's output for the words fed back by the user to obtain a sentence model;
the learning material generation module is used for acquiring learning materials from a repository, extracting objects to be pushed from the learning materials, and calculating, with the sentence model, the learning difficulty value of each object to be pushed for the user;
and the material pushing module is used for pushing the objects to be pushed to the user according to the learning difficulty values.
10. The system of claim 9, wherein the system comprises: a teaching training module for acquiring the user's learning feedback on the received push objects, calculating the user's learning speed index, standard-reaching feedback count index and average learning time index from the learning feedback, and training a teaching training model on these indexes.
11. The system of claim 10, wherein the system comprises: a process driving module for triggering the operations of the individual model training module, the learning material generation module and the material pushing module based on a data stream formed from data items generated from the user's multi-dimensional data on the learning object on the terminal device.
12. An apparatus for responsive learning assistance, the apparatus comprising:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the method of any of claims 1 to 8.
13. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 8.
Application CN202011241632.9A, priority date 2020-11-09, filing date 2020-11-09. Title: Response type learning assistance method, system and equipment. Status: Pending. Publication: CN112364152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011241632.9A CN112364152A (en) 2020-11-09 2020-11-09 Response type learning assistance method, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011241632.9A CN112364152A (en) 2020-11-09 2020-11-09 Response type learning assistance method, system and equipment

Publications (1)

Publication Number Publication Date
CN112364152A true CN112364152A (en) 2021-02-12

Family

ID=74510218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011241632.9A Pending CN112364152A (en) 2020-11-09 2020-11-09 Response type learning assistance method, system and equipment

Country Status (1)

Country Link
CN (1) CN112364152A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573985A (en) * 2016-03-04 2016-05-11 北京理工大学 Sentence expression method based on Chinese sentence meaning structural model and topic model
CN106897950A (en) * 2017-01-16 2017-06-27 北京师范大学 One kind is based on word cognitive state Model suitability learning system and method
KR101896973B1 (en) * 2018-01-26 2018-09-10 가천대학교 산학협력단 Natural Laguage Generating System Using Machine Learning Moodel, Method and Computer-readable Medium Thereof
CN110189238A (en) * 2019-05-22 2019-08-30 网易有道信息技术(北京)有限公司江苏分公司 Method, apparatus, medium and the electronic equipment of assisted learning
CN110276456A (en) * 2019-06-20 2019-09-24 山东大学 A kind of machine learning model auxiliary construction method, system, equipment and medium
CN110473438A (en) * 2019-07-30 2019-11-19 北京捷足先登教育科技有限公司 A kind of word assistant learning system and method based on quantitative analysis
CN110473435A (en) * 2019-07-30 2019-11-19 北京捷足先登教育科技有限公司 A kind of the word assistant learning system and method for the quantification with learning cycle
CN111126552A (en) * 2019-12-26 2020-05-08 深圳前海黑顿科技有限公司 Intelligent learning content pushing method and system
CN111241397A (en) * 2020-01-09 2020-06-05 南京贝湾信息科技有限公司 Content recommendation method and device and computing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
沈庶英: "基于聚合统整的汉语在线学习资源认知模型建设" [Building a cognitive model of online Chinese learning resources based on aggregation and integration], 中国远程教育 (Distance Education in China), no. 04
茹韶燕; 邓金钉; 张志威; 钟洁仪; 陈国雄; 谭志坚: "基于深度学习的英语单词学习系统应用研究" [Applied research on an English word learning system based on deep learning], 电脑编程技巧与维护 (Computer Programming Skills & Maintenance), no. 02

Similar Documents

Publication Publication Date Title
US8774705B2 (en) Learning support system and learning support method
US20160293036A1 (en) System and method for adaptive assessment and training
US20080126319A1 (en) Automated short free-text scoring method and system
WO2022170985A1 (en) Exercise selection method and apparatus, and computer device and storage medium
US11417339B1 (en) Detection of plagiarized spoken responses using machine learning
CN111126552B (en) Intelligent learning content pushing method and system
Brown Going back or going forward? Tensions in the formulation of a new national curriculum in mathematics
CN107688583A (en) The method and apparatus for creating the training data for natural language processing device
CN112184503A (en) Children multinomial ability scoring method and system for preschool education quality evaluation
CN114254122A (en) Test question generation method and device, electronic equipment and readable storage medium
WO2020074067A1 (en) Automatic language proficiency level determination
Zhang et al. How Students Search Video Captions to Learn: An Analysis of Search Terms and Behavioral Timing Data.
CN112507792A (en) Online video key frame positioning method, positioning system, equipment and storage medium
CN112364152A (en) Response type learning assistance method, system and equipment
Zhang et al. Improving lexical access and acquisition through reading the news: case studies of senior high school students in China.
CN111967255A (en) Internet-based automatic language test paper evaluation method and storage medium
Kharwal et al. Spaced Repetition Based Adaptive E-Learning Framework
Gao Implementation of Business English (BE) Teaching Assistant System (TAS) Based on B/S Mode
CN117150151B (en) Wrong question analysis and test question recommendation system and method based on large language model
Lee Investigation of visualization literacy: A visualization sensemaking model, a visualization literacy assessment test, and the effects of cognitive characteristics
CN117672027B (en) VR teaching method, device, equipment and medium
CN113704610B (en) Learning style portrait generation method and system based on learning growth data
KR102569339B1 (en) Speaking test system
Basheer et al. Mapping Arabic Text Studying Material With The Most Suitable Student Grade
CN117711404A (en) Method, device, equipment and storage medium for evaluating oral-language review questions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination