CN113094404A - Big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system - Google Patents

Big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system

Info

Publication number
CN113094404A
CN113094404A (application CN202110435723.4A)
Authority
CN
China
Prior art keywords
memory
learning
word
review
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110435723.4A
Other languages
Chinese (zh)
Other versions
CN113094404B (en)
Inventor
杨玉德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Shunshi Education Technology Co ltd
Original Assignee
Shandong Shunshi Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Shunshi Education Technology Co ltd filed Critical Shandong Shunshi Education Technology Co ltd
Priority to CN202110435723.4A priority Critical patent/CN113094404B/en
Publication of CN113094404A publication Critical patent/CN113094404A/en
Application granted granted Critical
Publication of CN113094404B publication Critical patent/CN113094404B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance


Abstract

The invention provides a big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system. In the scheme, a user logs in with an account password to obtain the current learning times and learns words, and review prompt times, test questions, correct test answers and test answering time limits are generated for the words. After the user completes a review task, a memory index and a current memory strength are generated for each word. A memory stock extraction function is generated from the memory index, the memory strength and the current memory stock, and the golden memory time point in each memory cycle is calculated. Historical test data are extracted, a target training function is obtained by training on the historical data, and the golden memory times of all users are determined. Finally, a composite memory-strength score is generated for each word. By driving word memorization through multiple core parameters such as learning times, error times and memory strength, the scheme gives each student a reasonable review time and improves students' memory efficiency.

Description

Big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system
Technical Field
The invention relates to the technical field of foreign language memorization, and in particular to a big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system.
Background
Memory is the basis of learning and plays an important role for everyone. In particular, in present-day adolescent education, effectively using and improving students' memory capacity matters greatly for academic performance. Foreign language learning is a key link in education, and it demands more of memory than most other subjects: large numbers of words, phrases and even sentences must be memorized by rote. Word memorization has therefore become a major obstacle in foreign language learning.
Existing memorization methods mostly rely on established theory: a student's learning state is judged from preset experience, so each student's actual learning state cannot be known, and targeted learning and review times and targeted test questions cannot be given. During learning, the degree to which each word has been mastered can only be guessed from experience, and whether a word is learned is judged subjectively. As a result, well-known words are relearned while poorly-known words are forgotten, which lowers learning efficiency on the one hand and, on the other, leads to missed reviews and a greater chance of forgetting.
Disclosure of Invention
In view of the above problems, the invention provides a big data acquisition multi-core parameter adaptive time-sharing memory driving method and system, which drive word memorization in a targeted manner through multiple core parameters such as learning times, error times and memory strength, give each student a reasonable review time, and improve students' memory efficiency.
In a first aspect of the embodiments of the present invention, a big data acquisition multi-core parameter adaptive time-sharing memory driving method is provided.
In one or more embodiments, preferably, the big data acquisition multi-core parameter adaptive time-sharing memory driving method includes:
logging in through an account password to obtain the current learning times, performing word learning when the current learning times is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command;
generating a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer and a test answer time limit of the word according to the first updated learning state command, the first word learning range and the first word learning peak value;
logging in through an account password to obtain the current learning times, performing word review and learning when the current learning times is not 0, and recording the current word learning range and the current word learning peak value after receiving a second learning state updating command;
sending a review prompt command to the user according to the first review prompt time, the second review prompt time and the third review prompt time, and generating a memory index of each word and the current memory intensity of each word after the user finishes a review task;
generating a memory stock extraction function according to the memory index, the memory intensity and the current memory stock, and calculating the time point of the golden memory in each memory period;
extracting historical test data of corresponding users, extracting characteristics according to the historical data of a single user, obtaining a target training function through historical data training, and determining golden memory time of all the users;
and obtaining each user's current learning times, error times, whether the current answer is correct, memory strength and memory index, and comprehensively generating a composite memory-strength score for each word of each user.
In one or more embodiments, preferably, the obtaining, by logging in an account password, a current learning time, where the current learning time is 0, performing word learning, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command specifically includes:
logging in through an account password to obtain the current learning times, and starting a learning command when the current learning times is 0;
after receiving the learning command, determining a preset learning type according to user registration information, selecting a word bank needing learning, and initializing a learning task sequence;
initializing the current learning times and the error times, wherein the initialized current learning times and the initialized error times are both 0 times;
extracting a first learning task according to the learning task sequence, wherein the first learning task comprises a first vocabulary to be memorized and a first total learning duration;
executing the first learning task, stopping learning when the first total learning duration is reached, and sending a first learning state updating command;
and recording the word learning range and the word learning peak value after receiving the first learning state updating command.
In one or more embodiments, preferably, the generating a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer, and a test question answering time limit for a word according to the first updated learning state command, the first word learning range, and the first word learning peak value specifically includes:
after a first learning state updating command is received, adding 1 to the current learning times, storing the current learning times, and starting timing of first review time;
setting a review word library according to the first word learning range, and automatically generating the test question according to the word library;
generating the correct test answer and the test answer time limit according to the test question;
generating a current word memory inventory quantity extracting function in the form of a first calculation formula according to the first word learning peak value and the first word learning range;
calculating, from the current word memory stock extraction function, the time at which 80% of the word stock remains, as the first review prompt time;
calculating, from the current word memory stock extraction function, the time at which 60% of the word stock remains, as the second review prompt time;
calculating, from the current word memory stock extraction function, the time at which 40% of the word stock remains, as the third review prompt time;
the first calculation formula is:
Figure BDA0003032945640000041
wherein, a0Is the initial memory decay index, t is the review interval, y0The inventory is memorized for the current word.
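A minimal sketch of the review-prompt-time computation, assuming the memory stock extraction function takes the Ebbinghaus-style exponential form y(t) = y0·e^(−a0·t) suggested by the variable definitions (initial decay index, review interval, current stock). The numeric value of a0 below is hypothetical.

```python
import math

def memory_stock(y0, a0, t):
    """Remaining word-memory stock after interval t (assumed exponential decay)."""
    return y0 * math.exp(-a0 * t)

def review_time(fraction, a0):
    """Interval at which the remaining stock falls to `fraction` of the peak:
    fraction = exp(-a0 * t)  =>  t = -ln(fraction) / a0."""
    return -math.log(fraction) / a0

# First/second/third review prompt times at 80%, 60% and 40% remaining stock,
# with a hypothetical initial decay index a0 (per hour):
a0 = 0.1
prompt_times = [review_time(f, a0) for f in (0.8, 0.6, 0.4)]
```

Because the decay is monotone, the three prompts are naturally ordered: the 40%-remaining prompt always comes after the 60% and 80% prompts.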
In one or more embodiments, preferably, the obtaining of the current learning times through account password login, when the current learning times is not 0, performing word review and learning, and after receiving a second learning state updating command, recording a current word learning range and a current word learning peak value specifically includes:
logging in through an account password to obtain the current learning times, and starting a review command when the current learning times is not 0;
after receiving a review command, acquiring the current test question, reviewing according to the test correct answer and the test answer time limit, updating the error times after reviewing is finished, and sending a review learning command;
after receiving the review learning command, determining a preset learning type according to user registration information, and selecting a word bank needing to be learned;
acquiring the current learning times and the error times, and extracting a current learning task according to the learning task sequence, wherein the current learning task comprises a second vocabulary to be memorized and a second total learning duration;
executing the current learning task, stopping learning when the second total learning duration is reached, and sending a second learning state updating command;
and recording the current word learning range and the current word learning peak value after receiving the second learning state updating command.
In one or more embodiments, preferably, the sending a review prompt command to the user according to the first review prompt time, the second review prompt time, and the third review prompt time, and after the user completes the review task, generating a memory index of each word and a current memory strength of each word specifically includes:
automatically judging whether the first review prompt time arrives, and sending a first short message prompt to a user after judging that the first review prompt time arrives, wherein the first short message prompt comprises a test link;
automatically judging whether the second review prompt time arrives, and sending a second short message prompt to the user after judging that the second review prompt time arrives, wherein the second short message prompt comprises a test link and a lost memory estimation;
automatically judging whether the third review prompt time is reached, and sending a third short message prompt to the user after judging that the third review prompt time is reached, wherein the third short message prompt comprises a test link, a lost memory estimation and a possible all forgetting risk prompt;
and after the user finishes the test according to the login of the test link, generating the memory index and the memory strength of each word.
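The three-tier prompt escalation described above (test link only, then link plus lost-memory estimate, then link plus estimate plus total-forgetting risk warning) can be sketched as follows; the data structure, field names and placeholder link are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewPrompt:
    test_link: str                 # every prompt carries a test link
    loss_estimate: bool = False    # second and third prompts add a lost-memory estimate
    forget_risk: bool = False      # third prompt adds a total-forgetting risk warning

def prompt_due(now: float, t1: float, t2: float, t3: float) -> Optional[ReviewPrompt]:
    """Return the most escalated prompt whose review time has arrived, or None."""
    link = "https://example.invalid/test"  # placeholder test link
    if now >= t3:
        return ReviewPrompt(link, loss_estimate=True, forget_risk=True)
    if now >= t2:
        return ReviewPrompt(link, loss_estimate=True)
    if now >= t1:
        return ReviewPrompt(link)
    return None  # no review prompt due yet
```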
In one or more embodiments, preferably, the generating a memory inventory quantity extraction function according to the memory index, the memory strength, and the current memory inventory quantity, and calculating a time point of the golden memory in each memory cycle specifically includes:
automatically acquiring, once per day, the memory index, the memory strength and the current memory stock of every user whose current learning times is less than 10, and storing them as temporary memory data;
obtaining a decay index by using a second calculation formula according to the temporary memory data of all users;
calculating a memory stock extraction function for each user according to the decay index and a third calculation formula;
calculating the golden memory time point in each memory cycle by using a fourth calculation formula according to each user's current memory stock extraction function;
the second calculation formula:
Figure BDA0003032945640000051
wherein, a1iThe attenuation index, k, for the ith user1iIs the memory index, k, of the ith user2iThe memory strength, k, of the ith user3iThe current memory stock amount of the i-th user, a0The initial memory decay index is obtained, min is the minimum value in the temporary memory data of all users, and the temporary memory data is the sum of the memory index, the memory intensity and the current memory stock;
the third calculation formula:
Figure BDA0003032945640000052
wherein, a1iThe decay index for the ith user, t the review time interval, yiMemorizing stock quantity for the current word of the ith user;
the fourth calculation formula:
Figure BDA0003032945640000061
wherein, TiIs the time point of golden memory of the ith user in each memory cycle, Yi1A predetermined storage amount for the ith user, a1iThe decay index for the ith user, the memory cycle comprising a transient memory, a short term memory cycle, a long term memory cycle.
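A sketch of the per-user decay index and golden memory time under the same exponential-decay assumption. The rule combining memory index, memory strength and current stock into the decay index is an assumption: the patent text names only the three inputs and a minimum taken over all users' sums, so the ratio below is one plausible form, not the patent's exact formula.

```python
import math

def decay_index(k1, k2, k3, a0, s_min):
    """Per-user decay index from memory index k1, memory strength k2 and
    current memory stock k3: a stronger memory (larger sum) decays more
    slowly. The combining rule is an assumption (see lead-in)."""
    return a0 * s_min / (k1 + k2 + k3)

def golden_memory_time(y_i, Y_i1, a1_i):
    """Time at which the stock y_i * exp(-a1_i * T) falls to the preset
    amount Y_i1, i.e. T = ln(y_i / Y_i1) / a1_i."""
    return math.log(y_i / Y_i1) / a1_i
```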
In one or more embodiments, preferably, the extracting historical test data of corresponding users, performing feature extraction according to historical data of a single user, obtaining a target training function through historical data training, and determining golden memory times of all users specifically includes:
acquiring historical data of all users, classifying the historical data and generating single-user historical data;
extracting features according to historical data of a single user to generate training times, review intervals and error rate after training;
setting the training model in an initial state, wherein the training model in the initial state is in a fifth calculation formula form;
inputting historical data of a single user into the training model according to the training times sequence;
calculating the value of the optimal objective function by using a sixth calculation formula;
obtaining a target parameter value when the optimal target function is minimum by using a seventh calculation formula;
generating the target training function according to the current target parameter value;
judging a current memory cycle according to the training times and the review interval, wherein the memory cycle comprises an instantaneous memory, a short-term memory cycle and a long-term memory cycle;
performing normalization according to the golden-memory-point review periods preset for the instantaneous memory, the short-term memory cycle and the long-term memory cycle, and determining the expected word loss ratio;
calculating a corresponding review interval by using the target training function according to the expected word loss ratio, wherein the corresponding review interval is used as the next golden memory time;
calculating the golden memory time of all users, and generating the corresponding test questions in the golden memory time;
the fifth calculation formula is:
Figure BDA0003032945640000071
wherein f isn(tk) For the training model, AkIs the k-th training parameter value, tkTest interval for kth review, Tk() For the kth part of the target training function, k and n are both positive integersThe number k can be in the range of 1 to n, and n is the number of the existing data groups of the user;
the sixth calculation formula is:
Figure BDA0003032945640000072
wherein, L (y)i,fn(ti) For the optimal objective function, argmin for the function that corresponds to the training parameter value when the minimum value of the optimal objective function is obtained, AkIs the k-th training parameter value, ykError rate after k training, tkThe test interval of the kth review is defined, k and n are positive integers, k can be in a range from 1 to n, and n is the number of data groups existing in the user;
the seventh calculation formula is:
L(y_k, ŷ_k) = (y_k − ŷ_k)^2, where ŷ_k = f_n(t_k)
wherein L(y_i, f_n(t_i)) is the optimal objective function, y is the post-training error rate, ŷ_k = f_n(t_k) is the predicted post-training error rate, y_k is the error rate after the kth training, t_k is the test interval of the kth review, k and n are positive integers with k in the range 1 to n, and n is the number of data groups the user already has.
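Assuming, as the variable definitions suggest, that the target training function is an additive model f_n(t) = Σ A_k·T_k(t) fitted by minimizing squared error between recorded and predicted post-training error rates, the fitting step can be sketched with two assumed basis functions (a constant and an exponential) and synthetic history data; the basis choice and the data are illustrative only.

```python
import math

def fit_two_basis(ts, ys, b0, b1):
    """Least-squares fit of f(t) = A1*b0(t) + A2*b1(t), minimizing
    sum_k (y_k - f(t_k))^2 via the 2x2 normal equations."""
    g11 = sum(b0(t) ** 2 for t in ts)
    g12 = sum(b0(t) * b1(t) for t in ts)
    g22 = sum(b1(t) ** 2 for t in ts)
    r1 = sum(y * b0(t) for t, y in zip(ts, ys))
    r2 = sum(y * b1(t) for t, y in zip(ts, ys))
    det = g11 * g22 - g12 * g12  # nonzero for independent bases
    return (r1 * g22 - r2 * g12) / det, (g11 * r2 - g12 * r1) / det

# Synthetic single-user history: post-training error rate falls toward a
# floor as the review interval shrinks (data generated exactly from the model).
ts = [0.5, 1.0, 2.0, 4.0]                    # review test intervals
ys = [0.1 + 0.5 * math.exp(-t) for t in ts]  # post-training error rates
A1, A2 = fit_two_basis(ts, ys, lambda t: 1.0, lambda t: math.exp(-t))
```

With data generated exactly from the model, the normal equations recover the generating coefficients (0.1 and 0.5) up to rounding, which makes the fit easy to sanity-check.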
In one or more embodiments, preferably, the obtaining of each user's current learning times, error times, whether the current answer is correct, memory strength and memory index, and comprehensively generating a composite memory-strength score for each word of each user, specifically includes:
acquiring the current learning times, the error times, whether the current answer is correct, the memory strength and the memory index;
comprehensively evaluating the online memory effect according to an eighth calculation formula;
outputting the composite memory strength score for each word;
the eighth calculation formula is:
Figure BDA0003032945640000075
wherein, b1jThe current number of learning for the jth word, b2jThe number of errors for the jth word, b3jWhether the current time of the jth word is right, b4jThe memory strength for the jth word, b5jSaid memory index, P, for the jth wordjScoring the composite memory strength for the jth word, QjQMAX is the largest of the memory strength scores of all words, which is the memory strength score of the jth word.
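A sketch of the composite-score step: a raw per-word memory-strength score Q_j is computed from the five parameters and normalized by the maximum Q_MAX. The rule combining the five parameters into Q_j is an assumption for illustration; the patent text defines the inputs and the normalization but not the combining rule.

```python
def composite_scores(words):
    """Composite memory-strength score per word: raw score Q_j normalized
    by the maximum QMAX over all words."""
    def raw(w):
        # Hypothetical weighting: reward learning count and a correct answer
        # this time scaled by strength and index; penalize accumulated errors.
        return (w["learn"] + w["correct"] * w["strength"] * w["index"]) / (1.0 + w["errors"])
    qs = [raw(w) for w in words]
    qmax = max(qs)
    return [q / qmax for q in qs]

# Two example words: one well learned, one error-prone.
words = [
    {"learn": 3, "errors": 0, "correct": 1, "strength": 0.9, "index": 0.8},
    {"learn": 1, "errors": 2, "correct": 0, "strength": 0.4, "index": 0.5},
]
scores = composite_scores(words)
```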
In a second aspect of the embodiments of the present invention, a big data acquisition multi-core parameter adaptive time-sharing memory driving system is provided.
In one or more embodiments, preferably, the big data acquisition multi-core parameter adaptive time-sharing memory driving system includes:
the first learning module is used for obtaining the current learning times through account password login, performing word learning when the current learning times is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command;
the automatic testing module generates first review prompt time, second review prompt time, third review prompt time, test questions, test correct answers and test answer time limits of the words according to the first updated learning state command, the first word learning range and the first word learning peak value;
the review learning module is used for logging in through an account password to obtain the current learning times, conducting word review and learning when the current learning times is not 0, and recording the current word learning range and the current word learning peak value after receiving a second learning state updating command;
the review prompting module is used for sending a review prompting command to the user according to the first review prompting time, the second review prompting time and the third review prompting time, and generating the memory index of each word and the current memory intensity of each word after the user finishes a review task;
the first review test time determining module is used for generating a memory stock extraction function according to the memory index, the memory intensity and the current memory stock and calculating the time point of the golden memory in each memory period;
the second review test time determining module is used for extracting historical test data of corresponding users, extracting features according to the historical data of a single user, obtaining a target training function through historical data training and determining golden memory time of all the users;
and the comprehensive evaluation module is used for obtaining each user's current learning times, error times, whether the current answer is correct, memory strength and memory index, and comprehensively generating the composite memory-strength score for each word of each user.
According to a third aspect of embodiments of the present invention, there is provided an electronic device, comprising a memory and a processor, the memory being configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the steps of any one of the first aspects of embodiments of the present invention.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
1) according to the embodiment of the invention, a target training function is obtained by learning from students' historical data over many sessions, the current state of each student is captured in real time, and the golden memory time and corresponding test questions are formulated adaptively from that state, reducing the chance that learned material is forgotten;
2) according to the embodiment of the invention, each student's learning state is evaluated comprehensively to produce a composite memory strength, so each student can see the learning effect in real time after studying, making the word-memorization effect easier to read;
3) according to the embodiment of the invention, different review intervals are adopted for different learning times and different memory cycles, so each student can adaptively adjust the learning state according to his or her own memory characteristics, improving learning efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 2 is a flowchart of obtaining the current learning times through account password login, performing word learning when the current learning times is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 3 is a flowchart of generating a first review prompt time, a second review prompt time, a third review prompt time, a test question, a correct test answer and a test answering time limit for a word according to the first learning state updating command, the first word learning range and the first word learning peak value, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 4 is a flowchart of obtaining the current learning times through account password login, performing word review and learning when the current learning times is not 0, and recording the current word learning range and current word learning peak value after receiving a second learning state updating command, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 5 is a flowchart of sending a review prompt command to the user according to the first, second and third review prompt times and, after the user completes the review task, generating the memory index and current memory strength of each word, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 6 is a flowchart of generating a memory stock extraction function according to the memory index, the memory strength and the current memory stock, and calculating the golden memory time point in each memory cycle, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 7 is a flowchart of extracting historical test data of corresponding users, performing feature extraction on a single user's historical data, obtaining a target training function through training on historical data, and determining the golden memory times of all users, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 8 is a flowchart of obtaining each user's current learning times, error times, whether the current answer is correct, memory strength and memory index, and comprehensively generating a composite memory-strength score for each word of each user, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
Fig. 9 is a structural diagram of a big data acquisition multi-core parameter adaptive time-sharing memory driving system according to an embodiment of the present invention.
Fig. 10 is a block diagram of an electronic device in one embodiment of the invention.
Detailed Description
In some of the flows described in the present specification and claims and in the above figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, with the order of the operations being indicated as 101, 102, etc. merely to distinguish between the various operations, and the order of the operations by themselves does not represent any order of performance. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Memory is the basis for learning and plays an important role for everyone. Particularly, in the current teenager education process, how to effectively utilize and improve the memory capacity of the teenager education process plays an important role in improving the learning performance of the teenager. And foreign language learning is an important link in education, and the requirement on memory is higher than that in the learning process of other subjects. Therefore, a large number of words and vocabularies, and even sentences, need to be forcibly memorized. Word memory has become an important consideration for handicapped foreign language learning.
Most existing memory methods rely on established theoretical knowledge and judge students' learning states against preset experience. They cannot determine each student's actual learning state, cannot recommend learning and review times in a targeted manner, and cannot issue targeted test questions, so during learning the degree to which each word has been mastered can only be guessed from experience, and whether a word has been learned is judged subjectively. This causes words that have probably been mastered to be relearned while words that are likely to be forgotten go unreviewed, which on the one hand reduces learning efficiency and on the other hand makes the number and timing of reviews unreasonable, increasing the possibility of forgetting.
The embodiment of the invention provides a large data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system. According to the scheme, word memory is performed according to multiple core parameters such as learning times, error times and memory strength, reasonable review time of each student is given, and memory efficiency of the students is improved.
In a first aspect of the embodiments of the present invention, a large data acquisition multi-core parameter adaptive time-sharing memory driving method is provided.
Fig. 1 is a flowchart of a large data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
As shown in fig. 1, in one or more embodiments, preferably, the method for driving large data acquisition multi-core parameter adaptive time-sharing memory includes:
s101, logging in through an account password to obtain the current learning frequency, performing word learning when the current learning frequency is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command;
specifically, in the implementation process, the scheme is not limited to single words; phrases and sentences can also be learned. When phrase or sentence learning is performed, the corresponding scheme is consistent with the word learning scheme. For example, the current learning count is obtained through account password login; when the current learning count is 0, sentence learning is performed, and after a first learning state updating command is received, a sentence learning range and a sentence learning peak value are recorded;
s102, generating a first review prompt time, a second review prompt time, a third review prompt time, test questions, test correct answers and test answer time limits of the words according to the first updated learning state command, the first word learning range and the first word learning peak value;
specifically, when test questions are generated, the learning contents (words, sentences and the like) can be analyzed one by one according to grade, version and unit classification; each user's parameters during learning of those contents are recorded (dozens of parameters such as reaction time, correct count, error count, correct rate, error rate, forgetting interval time, forgetting interval frequency, memory strength, memory stage, memory difficulty, and memory time after each operation), and new test questions, correct test answers and test answer time limits are then generated;
s103, logging in through an account password to obtain the current learning frequency, when the current learning frequency is not 0, performing word review and learning, and after receiving a second learning state updating command, recording the current word learning range and the current word learning peak value;
s104, sending a review prompt command to a user according to the first review prompt time, the second review prompt time and the third review prompt time, and generating a memory index of each word and the current memory intensity of each word after the user finishes a review task;
s105, generating a memory stock extraction function according to the memory index, the memory strength and the current memory stock, and calculating the time point of the golden memory in each memory cycle;
in step S105, the time points of the golden memories in each memory cycle are relative time ratios; the specific golden memory time is not obtained directly here, and the real golden memory time is determined in step S106.
S106, extracting historical test data of corresponding users, extracting features according to the historical data of a single user, obtaining a target training function through historical data training, and determining golden memory time of all the users;
specifically, the determined golden memory time of all users comprises the golden memory time of each user.
S107, obtaining the current learning count and error count of each user, whether the current answer is correct, the memory strength and the memory index, and comprehensively generating a comprehensive memory strength score of each word of each user.
Specifically, when the comprehensive memory strength of each word of each user is calculated, multi-core parameters may be used, and these multi-core parameters further include dozens of parameters such as: reaction time, correct count, error count, correct rate, error rate, forgetting interval time, forgetting interval frequency, memory strength, memory stage, memory difficulty, and memory time after each operation.
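The parameters listed above can be kept as one record per user per word; a hypothetical Python sketch follows (the patent names the quantities but specifies no schema, so every field name and derived rate below is illustrative):

```python
from dataclasses import dataclass

@dataclass
class WordMemoryParams:
    """One record of the multi-core parameters tracked per user per word.
    Field names are illustrative; the patent lists the quantities only."""
    reaction_time_s: float        # reaction time
    correct_count: int            # correct count
    error_count: int              # error count
    forgetting_interval_s: float  # forgetting interval time
    forgetting_frequency: int     # forgetting interval frequency
    memory_strength: float        # memory strength
    memory_stage: int             # 1=instant, 2=short-term, 3=long-term, 4=permanent
    memory_difficulty: float      # memory difficulty

    @property
    def correct_rate(self) -> float:
        total = self.correct_count + self.error_count
        return self.correct_count / total if total else 0.0

    @property
    def error_rate(self) -> float:
        total = self.correct_count + self.error_count
        return 1.0 - self.correct_rate if total else 0.0

p = WordMemoryParams(1.2, 3, 1, 3600.0, 2, 0.8, 2, 0.4)
```

The derived correct/error rates are computed on demand rather than stored, so the record cannot drift out of consistency with its counts.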
In the specific implementation process, the sequence of step S106 and step S107 can be interchanged, and the normal operation of the whole time-sharing memory driving method is not affected.
In the embodiment of the invention, by extracting various types of information about a single user, automatically issued test questions, adaptive adjustment of the golden memory time, and a comprehensive learning evaluation are generated, which can effectively improve each student's learning efficiency and reduce the rate at which students forget English words. The automatically issued test questions refer to generating the first review prompt time, second review prompt time, third review prompt time, test questions, correct test answers and test answer time limits for the words; the adaptive adjustment of the golden memory time refers to extracting features from a single user's historical data, obtaining a target training function by training on that historical data, and determining the golden memory time of every user; the comprehensive learning evaluation refers to obtaining each user's learning count and error count, whether the current answer is correct, the memory strength and the memory index, and comprehensively generating a comprehensive memory strength score for each word of each user.
Fig. 2 is a flowchart of obtaining a current learning time through account password login in a large data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention, where when the current learning time is 0, word learning is performed, and after a first learning state updating command is received, a first word learning range and a first word learning peak value are recorded.
As shown in fig. 2, in one or more embodiments, preferably, the obtaining, by logging in an account password, a current learning time, performing word learning when the current learning time is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command specifically includes:
s201, logging in through an account password to obtain the current learning times, and starting a learning command when the current learning times is 0;
s202, after receiving the learning command, determining a preset learning type according to user registration information, selecting a word bank needing learning, and initializing a learning task sequence;
s203, initializing the current learning times and the error times, wherein the initialized current learning times and the initialized error times are both 0 times;
s204, extracting a first learning task according to the learning task sequence, wherein the first learning task comprises a first vocabulary to be memorized and a first total learning duration;
s205, executing the first learning task, stopping learning when the first total learning duration is reached, and sending a first learning state updating command;
and S206, after receiving the first learning state updating command, recording the word learning range and the word learning peak value.
Specifically, the registered users will have their accounts and passwords, and each registered user will set a learning type at the initial stage of registration, wherein the learning type may include college vocabulary, middle school vocabulary, elementary school vocabulary, and the like.
Specifically, the learning type corresponds to a preset learning plan, and may include tens of courses or tens of learning, or may be hundreds or more, which is not limited in the present invention.
In the embodiment of the invention, the current learning count is obtained after the account password is verified, and the next review and learning strategy is chosen according to that count. The key point of this part is the learning scheme used when the learning count is 0. Since this is the first learning session, a learning range and a learning duration are set, and the user's word learning is completed within those limits. After this word learning is completed, the word learning range and the peak learning state are recorded. The word learning range and learning peak form the data foundation for subsequent review and new learning.
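The first-session steps S201-S206 above can be sketched as a small state update; everything here (names, the `run_task` stub, the peak heuristic) is an illustrative assumption, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    learning_count: int = 0                          # current learning count (S203)
    error_count: int = 0                             # error count (S203)
    word_range: list = field(default_factory=list)   # first word learning range
    learning_peak: int = 0                           # first word learning peak

def run_task(task: dict, total_minutes: int):
    """Stub for S204-S205: returns the words covered and a toy 'peak' count
    of words committed to instantaneous memory (here: half the batch)."""
    return task["words"], len(task["words"]) // 2

def first_session(state: LearnerState, task_queue: list, total_minutes: int) -> LearnerState:
    if state.learning_count != 0:
        raise ValueError("first_session only runs when the learning count is 0 (S201)")
    state.learning_count, state.error_count = 0, 0   # S203 initialization
    words, peak = run_task(task_queue[0], total_minutes)
    state.word_range, state.learning_peak = words, peak  # S206 recording
    state.learning_count += 1
    return state

state = first_session(LearnerState(), [{"words": ["apple", "river", "ponder", "gaze"]}], 20)
```

After the call, `state` holds exactly the two quantities S206 says to record: the word range and the learning peak.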
Fig. 3 is a flowchart of generating a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer, and a test answer time limit for a word according to the first update learning state command, the first word learning range, and the first word learning peak in the large data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
As shown in fig. 3, in one or more embodiments, preferably, the generating a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer, and a test question answering time limit for a word according to the first updated learning state command, the first word learning range, and the first word learning peak value specifically includes:
s301, after receiving a first learning state updating command, adding 1 to the current learning frequency, storing the current learning frequency, and starting timing of first review time;
s302, setting a review word library according to the first word learning range, and automatically generating the test question according to the word library;
s303, generating the correct test answer and the test answer time limit according to the test question;
s304, generating a current word memory stock quantity extraction function in the form of a first calculation formula according to the first word learning peak value and the first word learning range;
s305, calculating, according to the current word memory stock extraction function, the time at which 80% of the word stock remains, as the first review prompt time;
s306, calculating, according to the current word memory stock extraction function, the time at which 60% of the word stock remains, as the second review prompt time;
s307, calculating, according to the current word memory stock extraction function, the time at which 40% of the word stock remains, as the third review prompt time;
the first calculation formula is:
y(t) = y_0 · e^(−a_0 · t)

wherein a_0 is the initial memory decay index, t is the review interval, and y_0 is the current word memory stock.
In the embodiment of the invention, on the basis of the first learning session, a word bank for that session is generated, and the three review prompt times are set from it. In practice, most users tend to delay their first review; prompting at several different word-stock thresholds ensures that the user actually and effectively begins the first review. Before the first review, the initial word stock decays exponentially with time, which is characterized by the first calculation formula.
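Assuming the exponential-decay model of the first calculation formula, the three prompt times can be found by inverting the decay at the 80%, 60% and 40% thresholds; a minimal Python sketch (the function name and sample decay index are illustrative):

```python
import math

def review_prompt_times(a0: float) -> list[float]:
    """Invert y(t) = y0 * exp(-a0 * t) to find the times at which 80%, 60%
    and 40% of the initial word stock remain (steps S305-S307).
    Solving y/y0 = exp(-a0 * t) for t gives t = -ln(fraction) / a0."""
    fractions = [0.80, 0.60, 0.40]
    return [-math.log(f) / a0 for f in fractions]

# With an assumed initial decay index of 0.5 per day, the three prompts
# fall at roughly 0.45, 1.02 and 1.83 days after the first session.
times = review_prompt_times(0.5)
```

Note that the thresholds map to prompt times independently of y_0, because only the retained *fraction* matters in the inversion.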
Fig. 4 is a flowchart of obtaining the current learning times through account password login in the large data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention, where when the current learning times is not 0, word review and learning are performed, and after receiving a second learning state updating command, a current word learning range and a current word learning peak value are recorded.
As shown in fig. 4, in one or more embodiments, preferably, the obtaining of the current learning time through account password login includes performing word review and learning when the current learning time is not 0, and recording a current word learning range and a current word learning peak after receiving a second learning state updating command, and specifically includes:
s401, logging in through an account password to obtain the current learning times, and starting a review command when the current learning times are not 0;
s402, after a review command is received, acquiring the current test question, reviewing according to the test correct answer and the test answer time limit, updating the error times after the review is finished, and sending a review learning command;
s403, after receiving the review learning command, determining a preset learning type according to user registration information, and selecting a word bank needing to be learned;
s404, acquiring the current learning times and the error times, and extracting a current learning task according to the learning task sequence, wherein the current learning task comprises a second vocabulary to be memorized and a second total learning duration;
s405, executing the current learning task, stopping learning when the second total learning duration is reached, and sending a second learning state updating command;
and S406, recording the current word learning range and the current word learning peak value after receiving the second learning state updating command.
Specifically, the word learning range is the entire vocabulary range learned up to the current time; the current word learning peak value is the subset of all learned words for which instantaneous memorization has actually been completed;
in the embodiment of the invention, after the account is logged in and the learning count is not 0, a combined review-and-learning procedure is provided. Specifically, the corresponding learning task is selected according to the learning type and the learning task sequence; a review task and a learning task are obtained, the error count is updated from the review task, and the learning progress is updated from the learning task.
Fig. 5 is a flowchart of sending a review prompt command to a user according to the first review prompt time, the second review prompt time, and the third review prompt time in the large data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention, and after the user completes a review task, generating a memory index of each word and a current memory intensity of each word.
As shown in fig. 5, in one or more embodiments, preferably, the issuing a review prompt command to the user according to the first review prompt time, the second review prompt time, and the third review prompt time, and after the user completes the review task, generating a memory index of each word and a current memory strength of each word specifically includes:
s501, automatically judging whether the first review prompt time is up, and sending a first short message prompt to a user after judging that the first review prompt time is up, wherein the first short message prompt comprises a test link;
s502, automatically judging whether the second review prompt time arrives, and sending a second short message prompt to the user after judging that the second review prompt time arrives, wherein the second short message prompt comprises a test link and a lost memory estimation;
s503, automatically judging whether the third review prompt time is up, and sending a third short message prompt to the user after judging that the third review prompt time is up, wherein the third short message prompt comprises a test link, a lost memory estimation and a possible all forgetting risk prompt;
s504, after the user finishes the test according to the test link login, the memory index and the memory strength of each word are generated.
Specifically, the memory index of each word indicates its review-period gear: the first gear is the instantaneous memory index, the second gear the short-term memory index, the third gear the long-term memory index, and the fourth gear the permanent memory index.
Specifically, the current memory strength of each word is the comprehensive average of the answer correctness of the word in the review test.
In the embodiment of the invention, when the first, second and third review prompt times are generated, the system starts timing for each, and when a timer expires the corresponding prompt is sent to the user by mobile phone short message. Because the interval before the third prompt is not long and users' mastery of past words varies, in practice only the test link is sent at first, and the amount of information in the prompt is increased in each subsequent message so that the user perceives how much memory capacity has been lost. Once the user completes the review test, the specific memory strength of each word and the memory index of each word are generated.
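The escalating prompt content of S501-S503 can be sketched as follows; the message wording, link and function name are assumptions, not taken from the patent:

```python
def build_prompt(stage: int, test_link: str, lost_fraction: float) -> str:
    """Build the short-message body for the given prompt stage:
    stage 1 sends only the test link (S501), stage 2 adds a lost-memory
    estimate (S502), stage 3 adds a total-forgetting risk warning (S503)."""
    msg = f"Time to review your words: {test_link}"
    if stage >= 2:  # second prompt adds the lost-memory estimate
        msg += f" | Estimated memory lost so far: {lost_fraction:.0%}"
    if stage >= 3:  # third prompt adds the all-forgetting risk
        msg += " | Warning: without review you risk forgetting all of this batch."
    return msg

first = build_prompt(1, "https://example.invalid/test", 0.20)
third = build_prompt(3, "https://example.invalid/test", 0.60)
```

Keeping the stages cumulative (each stage includes everything from the previous one) matches the description that the information quantity is "continuously increased" across prompts.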
Fig. 6 is a flowchart of generating a memory inventory quantity extraction function according to the memory index, the memory strength and the current memory inventory quantity and calculating time points of golden memory in each memory cycle in the large data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
As shown in fig. 6, in one or more embodiments, preferably, the generating a memory inventory amount extraction function according to the memory index, the memory strength, and the current memory inventory amount, and calculating a time point of the golden memory in each memory cycle specifically includes:
s601, automatically acquiring the memory index, the memory intensity and the current memory stock of the user with the current learning frequency less than 10 times in all the users at intervals of 1 day, and storing the memory index, the memory intensity and the current memory stock as temporary memory data;
s602, obtaining attenuation indexes by using a second calculation formula according to the temporary storage memory data of all users;
s603, calculating a memory bank extraction function of each user according to the attenuation index and the third calculation formula;
s604, calculating time points of the golden memories in all memory periods by using a fourth calculation formula according to the current memory bank extraction function of each user;
the second calculation formula:
a_{1i} = a_0 · min / (k_{1i} + k_{2i} + k_{3i})

wherein a_{1i} is the decay index of the ith user, k_{1i} is the memory index of the ith user, k_{2i} is the memory strength of the ith user, k_{3i} is the current memory stock of the ith user, a_0 is the initial memory decay index, and min is the minimum value of the temporary memory data over all users, the temporary memory data being the sum of the memory index, the memory strength and the current memory stock;
the third calculation formula:
y_i(t) = y_i · e^(−a_{1i} · t)

wherein a_{1i} is the decay index of the ith user, t is the review time interval, and y_i is the ith user's current word memory stock;
the fourth calculation formula:
T_i = (1 / a_{1i}) · ln(y_i / Y_{i1})

wherein T_i is the time point of the golden memory of the ith user in each memory cycle, Y_{i1} is the predetermined retained stock for the ith user, y_i is the ith user's current word memory stock, and a_{1i} is the decay index of the ith user; the memory cycles comprise an instantaneous memory cycle, a short-term memory cycle and a long-term memory cycle.
In the embodiment of the invention, the memory stock, memory index and memory strength of users with little historical data are automatically updated every day. When historical data are scarce, each user's actual memory decay law cannot be fitted, so the golden memory time point cannot be set from such a fitted law. The method adopted is therefore to adaptively normalize the decay degrees of all users using the initial decay index, obtaining an estimated decay index for the current user; the estimated index then yields each user's memory stock extraction function for the current day, and the golden memory time point of the single user is generated on that basis.
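Under the assumption that the normalization scales the initial decay index by the population minimum of the summed parameters (the second calculation formula survives in the source only as an image, so this functional form is a guess consistent with the variable descriptions), the adaptive estimate and the resulting golden time can be sketched as:

```python
import math

def estimate_decay_index(a0, user_params, all_params):
    """Scale the initial decay index a0 by how the user's summed parameters
    (memory index + memory strength + current stock) compare with the
    population minimum, so stronger memory profiles decay more slowly.
    ASSUMED form: a1i = a0 * min_j(sum_j) / sum_i."""
    s_i = sum(user_params)
    s_min = min(sum(p) for p in all_params)
    return a0 * s_min / s_i

def golden_time(a1i, target_fraction):
    """Invert the normalized decay y(t) = exp(-a1i * t) to the time when
    the retained fraction drops to target_fraction."""
    return -math.log(target_fraction) / a1i

# Toy population: (memory index, memory strength, current stock) per user.
population = [(2, 0.6, 40), (1, 0.4, 20), (3, 0.9, 80)]
a1 = estimate_decay_index(0.5, population[2], population)
t_gold = golden_time(a1, 0.8)
```

The strongest profile in the toy population gets the smallest decay index and hence the latest golden-memory point, which is the qualitative behavior the normalization is meant to produce.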
Fig. 7 is a flowchart of extracting historical test data of a corresponding user, performing feature extraction according to historical data of a single user, obtaining a target training function through historical data training, and determining golden memory time of all users in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
As shown in fig. 7, in one or more embodiments, preferably, the extracting historical test data of corresponding users, performing feature extraction according to historical data of a single user, obtaining a target training function through historical data training, and determining golden memory times of all users specifically includes:
s701, acquiring historical data of all users, classifying the historical data, and generating single-user historical data;
s702, extracting features according to historical data of a single user to generate training times, review intervals and error rate after training;
s703, setting the training model to be in an initial state, wherein the training model in the initial state is in a fifth calculation formula form;
s704, inputting historical data of a single user into the training model according to the training frequency sequence;
s705, calculating the value of the optimal objective function by using a sixth calculation formula;
s706, obtaining a target parameter value when the optimal target function is minimum by using a seventh calculation formula;
s707, generating the target training function according to the current target parameter value;
s708, judging a current memory cycle according to the training times and the review interval, wherein the memory cycle comprises an instantaneous memory cycle, a short-term memory cycle and a long-term memory cycle;
s709, carrying out normalization processing according to the golden-memory-point review time periods preset for the instantaneous memory cycle, the short-term memory cycle and the long-term memory cycle, and determining the expected word loss ratio;
s710, according to the expected word loss ratio, calculating a corresponding review interval by using the target training function, wherein the corresponding review interval is used as the next golden memory time;
s711, calculating the golden memory time of all users, and generating the corresponding test questions in the golden memory time;
the fifth calculation formula is:
f_n(t) = Σ_{k=1}^{n} A_k · T_k(t)

wherein f_n(t_k) is the training model evaluated at the kth review, A_k is the kth training parameter value, t_k is the test interval of the kth review, T_k(·) is the kth part of the target training function, k and n are positive integers with k ranging from 1 to n, and n is the number of data groups existing for the user;
the sixth calculation formula is:
{A_1, …, A_n} = argmin Σ_{k=1}^{n} L(y_k, f_n(t_k))

wherein L(y_k, f_n(t_k)) is the optimal objective function, argmin denotes the training parameter values at which the optimal objective function attains its minimum, A_k is the kth training parameter value, y_k is the error rate after the kth training, t_k is the test interval of the kth review, k and n are positive integers with k ranging from 1 to n, and n is the number of data groups existing for the user;
the seventh calculation formula is:
L(y_k, f_n(t_k)) = (y_k − ŷ_k)², with ŷ_k = f_n(t_k)

wherein L(y_k, f_n(t_k)) is the optimal objective function, y_k is the actual error rate after the kth training, ŷ_k is the predicted post-training error rate, t_k is the test interval of the kth review, k and n are positive integers with k ranging from 1 to n, and n is the number of data groups existing for the user.
In the embodiment of the invention, the historical data of all users are classified, and each user's training count, review intervals, post-training error rate and other information are extracted. Before the target training function is determined, the model is reset to its initial state; a training model is then built up by feeding in the single user's historical data in sequence, and corrected using the optimal objective function, which in this process is the square of the deviation between the predicted and actual values. Once the fitted coefficients that minimize the deviation over all predictions are determined, the coefficients adapt to the individual user, and the next golden memory time is determined automatically.
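The fitting loop of S703-S707 amounts to a least-squares fit of the coefficients A_k minimizing the squared deviation between predicted and actual error rates; the sketch below uses a simple polynomial basis as a stand-in for the unspecified T_k, so the basis choice and all names are assumptions:

```python
import numpy as np

def fit_forgetting_curve(intervals, error_rates, n_basis=3):
    """Least-squares fit of the A_k minimizing sum_k (y_k - f_n(t_k))^2,
    using T_k(t) = t^k as an assumed stand-in basis."""
    t = np.asarray(intervals, dtype=float)
    y = np.asarray(error_rates, dtype=float)
    X = np.vstack([t ** k for k in range(n_basis)]).T  # design matrix, one column per T_k
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict(coeffs, t):
    """Evaluate f_n(t) = sum_k A_k * T_k(t) for the fitted coefficients."""
    return sum(a * t ** k for k, a in enumerate(coeffs))

# Toy history: post-training error rate grows with the review interval (days).
intervals = [0.5, 1.0, 2.0, 4.0, 8.0]
error_rates = [0.05, 0.12, 0.25, 0.45, 0.70]
A = fit_forgetting_curve(intervals, error_rates)
```

Given the fitted curve, the "next golden memory time" is the interval at which the predicted error rate reaches the expected word loss ratio, which can be found by inverting `predict` numerically.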
Fig. 8 is a flowchart of obtaining each user's current learning count and error count, whether the current answer is correct, the memory strength, and the memory index, and comprehensively generating a comprehensive memory strength score for each word of each user, in the big data acquisition multi-core parameter adaptive time-sharing memory driving method according to an embodiment of the present invention.
As shown in fig. 8, in one or more embodiments, preferably, the obtaining of each user's current learning count and error count, whether the current answer is correct, the memory strength, and the memory index, and comprehensively generating a comprehensive memory strength score for each word of each user specifically includes:
s801, acquiring the current learning count, the error count, whether the current answer is correct, the memory strength and the memory index;
s802, comprehensively evaluating the online memory effect according to an eighth calculation formula;
s803, outputting the comprehensive memory strength score of each word;
the eighth calculation formula is:
P_j = Q_j / Q_MAX

wherein b_{1j} is the current learning count of the jth word, b_{2j} is the error count of the jth word, b_{3j} indicates whether the current answer to the jth word is correct, b_{4j} is the memory strength of the jth word, b_{5j} is the memory index of the jth word, P_j is the composite memory strength score of the jth word, Q_j is the memory strength score of the jth word computed from b_{1j} through b_{5j}, and Q_MAX is the largest memory strength score among all words.
In the embodiment of the invention, the system-level online evaluation of the memory effect is carried out according to all the detected data, the current memory intensity of each word of each user is scored in real time in an evaluation system, and the comprehensive memory intensity scoring provides a certain basis for the user to comprehensively know the learning state.
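A hedged sketch of the composite scoring: the eighth calculation formula appears in the source only as an image, so the weighting inside `raw_score` below is an assumption; only the final normalization by the maximum score follows the variable descriptions:

```python
def raw_score(learn_count, error_count, correct_now, strength, index):
    """ASSUMED toy combination Q_j of the five parameters b1j..b5j; the
    patent specifies the inputs but not the exact weighting."""
    correct_bonus = 1.0 if correct_now else 0.0
    accuracy = learn_count / (learn_count + error_count)  # toy accuracy proxy
    return (accuracy + correct_bonus + strength + index / 4.0) / 4.0

# Q_j per word, then P_j = Q_j / Q_MAX as in the eighth calculation formula.
words = {
    "river":  raw_score(5, 1, True, 0.9, 3),
    "ponder": raw_score(3, 3, False, 0.4, 1),
}
q_max = max(words.values())
scores = {w: q / q_max for w, q in words.items()}
```

The normalization guarantees every composite score lies in (0, 1] with the best-remembered word at exactly 1, which makes the per-user scores directly comparable.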
In a second aspect of the embodiments of the present invention, a large data acquisition multi-core parameter adaptive time-sharing memory driving system is provided. Fig. 9 is a structural diagram of a large data acquisition multi-core parameter adaptive time-sharing memory driving system according to an embodiment of the present invention. As shown in fig. 9, in one or more embodiments, preferably, the large data acquisition multi-kernel parameter adaptive time-sharing memory driving system includes:
a first learning module 901, configured to log in through an account password to obtain a current learning frequency, perform word learning when the current learning frequency is 0, and record a first word learning range and a first word learning peak after receiving a first learning state updating command;
an automatic test module 902, configured to generate a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer, and a test answer time limit for a word according to the first updated learning state command, the first word learning range, and the first word learning peak;
the review learning module 903 is used for obtaining the current learning frequency through account password login, conducting word review and learning when the current learning frequency is not 0, and recording the current word learning range and the current word learning peak value after receiving a second learning state updating command;
a review prompt module 904, configured to send a review prompt command to the user according to the first review prompt time, the second review prompt time, and the third review prompt time, and generate a memory index of each word and a current memory strength of each word after the user completes a review task;
a first review test time determination module 905, configured to generate a memory stock extraction function according to the memory index, the memory strength, and the current memory stock, and calculate a time point of the golden memory in each memory cycle;
a second review test time determination module 906, configured to extract historical test data of a corresponding user, perform feature extraction according to the historical data of a single user, obtain a target training function through historical data training, and determine golden memory times of all users;
and the comprehensive evaluation module 907, which acquires each user's current learning count and error count, whether the current answer is correct, the memory strength and the memory index, and comprehensively generates a comprehensive memory strength score for each word of each user.
According to a third aspect of the embodiments of the present invention, there is provided an electronic apparatus. Fig. 10 is a block diagram of an electronic device in one embodiment of the invention. The electronic device shown in fig. 10 is a general memory drive device, which includes a general computer hardware structure, which includes at least a processor 1001 and a memory 1002. The processor 1001 and the memory 1002 are connected by a bus 1003. The memory 1002 is adapted to store instructions or programs executable by the processor 1001. Processor 1001 may be a stand-alone microprocessor or may be a collection of one or more microprocessors. Thus, the processor 1001 implements the processing of data and the control of other devices by executing instructions stored by the memory 1002 to perform the method flows of embodiments of the present invention as described above. The bus 1003 connects the above components together, and also connects the above components to a display controller 1004 and a display device and an input/output (I/O) device 1005. Input/output (I/O) devices 1005 may be a mouse, keyboard, modem, network interface, touch input device, motion sensing input device, printer, and other devices known in the art. Typically, input/output devices 1005 are connected to the system through an input/output (I/O) controller 1006.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
1) according to the embodiment of the invention, a target training function is obtained by learning each student's historical data over multiple sessions, the current state of each student is captured in real time, and the golden memory time and the corresponding test questions are formulated adaptively according to that state, reducing the likelihood that learned words are forgotten;
2) according to the embodiment of the invention, the learning state of each student is comprehensively evaluated to generate a comprehensive memory strength score, so that each student can see the learning effect in real time after learning, making the word memory effect easier to interpret;
3) according to the embodiment of the invention, different review time intervals are adopted for different learning counts and different memory cycles, so that each student can adapt the learning schedule to his or her own memory characteristics, improving learning efficiency.
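The session flow summarized above, where a learning count of 0 triggers first-time word learning and any other count triggers review-and-learning, can be sketched as a minimal dispatcher. All names below are hypothetical illustrations, not the patented implementation:

```python
def start_session(learning_count: int) -> str:
    """Dispatch a session by the user's current learning count.

    A count of 0 starts first-time word learning (after which the first
    word learning range and peak value are recorded); any other count
    starts a review-and-learning session.
    """
    if learning_count == 0:
        return "word_learning"
    return "review_and_learning"

session = start_session(0)
```

After either session type, the recorded learning range and peak value would feed the review-prompt-time generation step.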
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A big data acquisition multi-core parameter self-adaptive time-sharing memory driving method is characterized by comprising the following steps:
logging in with an account and password to obtain the current learning times, performing word learning when the current learning times is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command;
generating a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer and a test answer time limit for the word according to the first learning state updating command, the first word learning range and the first word learning peak value;
logging in with an account and password to obtain the current learning times, performing word review and learning when the current learning times is not 0, and recording the current word learning range and the current word learning peak value after receiving a second learning state updating command;
sending a review prompt command to the user according to the first review prompt time, the second review prompt time and the third review prompt time, and generating a memory index of each word and the current memory intensity of each word after the user finishes a review task;
generating a memory stock extraction function according to the memory index, the memory intensity and the current memory stock, and calculating the time point of the golden memory in each memory period;
extracting historical test data of corresponding users, extracting characteristics according to the historical data of a single user, obtaining a target training function through historical data training, and determining golden memory time of all the users;
and obtaining the current learning times and error times of each user, whether the current answer is correct, the memory strength and the memory index, and comprehensively generating a comprehensive memory strength score of each word of each user.
2. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 1, wherein the obtaining of the current learning times by logging in with an account and password, performing word learning when the current learning times is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command specifically comprises:
logging in through an account password to obtain the current learning times, and starting a learning command when the current learning times is 0;
after receiving the learning command, determining a preset learning type according to user registration information, selecting a word bank needing learning, and initializing a learning task sequence;
initializing the current learning times and the error times, wherein the initialized current learning times and the initialized error times are both 0 times;
extracting a first learning task according to the learning task sequence, wherein the first learning task comprises a first vocabulary to be memorized and a first total learning duration;
executing the first learning task, stopping learning when the first total learning duration is reached, and sending a first learning state updating command;
and recording the word learning range and the word learning peak value after receiving the first learning state updating command.
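The initialization steps of claim 2 amount to setting up per-user learning state: both counters start at 0 and a task sequence is drawn from the chosen word bank. A minimal sketch under those assumptions (all names hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class LearningState:
    """Per-user state as initialized in claim 2: the current learning
    count and error count both start at 0, and the learning task
    sequence comes from the word bank chosen at registration."""
    learning_count: int = 0
    error_count: int = 0
    task_sequence: list = field(default_factory=list)

# Each task pairs a vocabulary batch with a total learning duration.
state = LearningState(task_sequence=[("first_vocabulary", 30)])
first_task = state.task_sequence[0]
```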
3. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 1, wherein the generating of the first review prompt time, the second review prompt time, the third review prompt time, the test question, the test correct answer and the test answer time limit for the word according to the first learning state updating command, the first word learning range and the first word learning peak value specifically includes:
after a first learning state updating command is received, adding 1 to the current learning times, storing the current learning times, and starting timing of first review time;
setting a review word library according to the first word learning range, and automatically generating the test question according to the word library;
generating the correct test answer and the test answer time limit according to the test question;
generating a current word memory stock extraction function in the form of the first calculation formula according to the first word learning peak value and the first word learning range;
calculating the time required by the remaining 80% of the word stock as the first review prompt time according to the current word memory stock extraction function;
calculating the time required for remaining 60% of the word stock as the second review prompt time according to the current word memory stock extraction function;
calculating the time required by the remaining 40% of word stock as the third review prompt time according to the current word memory stock extraction function;
the first calculation formula is:
Figure FDA0003032945630000031
wherein a0 is the initial memory decay index, t is the review interval, and y0 is the current word memory stock.
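The first calculation formula appears only as an image in this text. Assuming it is the classic exponential forgetting curve y(t) = y0·e^(−a0·t), which is consistent with the variables listed (initial memory decay index a0, review interval t, current stock y0), the three review prompt times follow by solving for t at 80%, 60% and 40% remaining stock:

```python
import math

def review_prompt_times(a0: float) -> list:
    """Solve y0*exp(-a0*t) = f*y0 for t at the remaining fractions
    80%, 60% and 40% (the exponential form is an assumption; the
    patent shows the first calculation formula only as an image)."""
    return [-math.log(f) / a0 for f in (0.8, 0.6, 0.4)]

# Illustrative decay index of 0.1 per hour (not from the patent).
t1, t2, t3 = review_prompt_times(0.1)
```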
4. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 2, wherein the obtaining of the current learning times by logging in with an account and password, performing word review and learning when the current learning times is not 0, and recording the current word learning range and the current word learning peak value after receiving a second learning state updating command specifically includes:
logging in through an account password to obtain the current learning times, and starting a review command when the current learning times is not 0;
after receiving a review command, acquiring the current test question, reviewing according to the test correct answer and the test answer time limit, updating the error times after reviewing is finished, and sending a review learning command;
after receiving the review learning command, determining a preset learning type according to user registration information, and selecting a word bank needing to be learned;
acquiring the current learning times and the error times, and extracting a current learning task according to the learning task sequence, wherein the current learning task comprises a second vocabulary to be memorized and a second total learning duration;
executing the current learning task, stopping learning when the second total learning duration is reached, and sending a second learning state updating command;
and recording the current word learning range and the current word learning peak value after receiving the second learning state updating command.
5. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 1, wherein the sending a review prompt command to a user according to the first review prompt time, the second review prompt time, and the third review prompt time, and after the user completes a review task, generating a memory index of each word and a current memory intensity of each word specifically comprises:
automatically judging whether the first review prompt time has arrived, and sending a first short message prompt to the user when it has, wherein the first short message prompt comprises a test link;
automatically judging whether the second review prompt time has arrived, and sending a second short message prompt to the user when it has, wherein the second short message prompt comprises a test link and an estimate of lost memory;
automatically judging whether the third review prompt time has arrived, and sending a third short message prompt to the user when it has, wherein the third short message prompt comprises a test link, an estimate of lost memory and a prompt of the risk of complete forgetting;
and after the user finishes the test according to the login of the test link, generating the memory index and the memory strength of each word.
6. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 1, wherein the generating a memory stock extraction function according to the memory index, the memory strength and the current memory stock and calculating a time point of golden memory in each memory cycle specifically comprises:
automatically acquiring, once every day, the memory index, the memory strength and the current memory stock of every user whose current learning count is less than 10, and storing them as temporary memory data;
obtaining a decay index by using a second calculation formula according to the temporary storage memory data of all users;
calculating a memory stock extraction function for each user according to the decay index and the third calculation formula;
calculating the time point of the golden memory in each memory cycle by using a fourth calculation formula according to the current memory stock extraction function of each user;
the second calculation formula:
Figure FDA0003032945630000041
wherein a1i is the decay index of the ith user, k1i is the memory index of the ith user, k2i is the memory strength of the ith user, k3i is the current memory stock of the ith user, a0 is the initial memory decay index, and min is the minimum value of the temporary memory data over all users, the temporary memory data being the sum of the memory index, the memory strength and the current memory stock;
the third calculation formula:
Figure FDA0003032945630000051
wherein a1i is the decay index of the ith user, t is the review time interval, and yi is the current word memory stock of the ith user;
the fourth calculation formula:
Figure FDA0003032945630000052
wherein Ti is the time point of the golden memory of the ith user in each memory cycle, Yi1 is the predetermined storage amount of the ith user, and a1i is the decay index of the ith user; the memory cycles comprise instantaneous memory, a short-term memory cycle and a long-term memory cycle.
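The fourth calculation formula is likewise shown only as an image. If the per-user memory stock decays exponentially, as the third formula's variable list suggests, the golden memory time point Ti is simply the time at which the stock reaches the predetermined amount Yi1; the sketch below makes that assumption explicit (function name and sample values are hypothetical):

```python
import math

def golden_memory_time(y0: float, y_target: float, decay: float) -> float:
    """Time at which an exponentially decaying stock y0*exp(-decay*t)
    falls to the predetermined amount y_target (assumed form; the
    patent's fourth calculation formula appears only as an image)."""
    return math.log(y0 / y_target) / decay

# E.g. a stock of 100 words decaying at 0.2 per day reaching 40 words.
T = golden_memory_time(100.0, 40.0, 0.2)
```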
7. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 1, wherein the extracting historical test data of corresponding users, performing feature extraction according to historical data of a single user, obtaining a target training function through historical data training, and determining golden memory time of all users specifically comprises:
acquiring historical data of all users, classifying the historical data and generating single-user historical data;
extracting features according to historical data of a single user to generate training times, review intervals and error rate after training;
setting the training model in an initial state, wherein the training model in the initial state is in a fifth calculation formula form;
inputting historical data of a single user into the training model according to the training times sequence;
calculating the value of the optimal objective function by using a sixth calculation formula;
obtaining a target parameter value when the optimal target function is minimum by using a seventh calculation formula;
generating the target training function according to the current target parameter value;
judging a current memory cycle according to the training times and the review interval, wherein the memory cycle comprises an instantaneous memory, a short-term memory cycle and a long-term memory cycle;
performing normalization according to the golden-memory-point review time periods preset for instantaneous memory, the short-term memory cycle and the long-term memory cycle, and determining the expected word loss ratio;
calculating a corresponding review interval by using the target training function according to the expected word loss ratio, wherein the corresponding review interval is used as the next golden memory time;
calculating the golden memory time of all users, and generating the corresponding test questions in the golden memory time;
the fifth calculation formula is:
Figure FDA0003032945630000061
wherein fn(tk) is the training model, Ak is the kth training parameter value, tk is the test interval of the kth review, and Tk(·) is the kth component of the target training function; k and n are positive integers, k ranges from 1 to n, and n is the number of data groups existing for the user;
the sixth calculation formula is:
Figure FDA0003032945630000062
wherein L(yi, fn(ti)) is the optimal objective function, argmin denotes the training parameter value at which the optimal objective function attains its minimum, Ak is the kth training parameter value, yk is the error rate after the kth training, and tk is the test interval of the kth review; k and n are positive integers, k ranges from 1 to n, and n is the number of data groups existing for the user;
the seventh calculation formula is:
Figure FDA0003032945630000063
wherein L(yi, fn(ti)) is the optimal objective function, y is the error rate after training, ŷ is the predicted post-training error rate, yk is the error rate after the kth training, and tk is the test interval of the kth review; k and n are positive integers, k ranges from 1 to n, and n is the number of data groups existing for the user.
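Claim 7 fits a target training function to a user's history (review interval → post-training error rate) by minimizing a squared-error objective over the training parameter values Ak. Since the fifth to seventh formulas appear only as images, the sketch below substitutes the simplest linear-in-parameters model fitted by ordinary least squares; it illustrates the fitting step, not the patented model:

```python
def fit_target_function(intervals, error_rates):
    """Fit error_rate ~ a1 + a2*interval by ordinary least squares,
    a hypothetical stand-in for the patent's additive target training
    function with parameters Ak."""
    n = len(intervals)
    mean_t = sum(intervals) / n
    mean_y = sum(error_rates) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(intervals, error_rates))
    var = sum((t - mean_t) ** 2 for t in intervals)
    a2 = cov / var             # slope: error growth per unit review interval
    a1 = mean_y - a2 * mean_t  # intercept
    return lambda t: a1 + a2 * t

# Longer review intervals tend to raise the post-training error rate.
f = fit_target_function([1, 2, 4, 7], [0.05, 0.12, 0.24, 0.47])
```

Inverting the fitted function at the expected word loss ratio would then yield the next golden memory time, as the claim describes.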
8. The big data acquisition multi-core parameter adaptive time-sharing memory driving method according to claim 1, wherein the obtaining of the current learning times and error times of each user, whether the current answer is correct, the memory strength and the memory index, and the comprehensive generation of a comprehensive memory strength score of each word of each user specifically comprises:
acquiring the current learning times, the error times, whether the current answer is correct, the memory strength and the memory index;
comprehensively evaluating the online memory effect according to an eighth calculation formula;
outputting the composite memory strength score for each word;
the eighth calculation formula is:
Figure FDA0003032945630000071
wherein b1j is the current learning count of the jth word, b2j is the error count of the jth word, b3j indicates whether the jth word is answered correctly this time, b4j is the memory strength of the jth word, b5j is the memory index of the jth word, Pj is the composite memory strength score of the jth word, Qj is the memory strength score of the jth word, and QMAX is the largest memory strength score among all words.
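The eighth calculation formula is also shown only as an image. One plausible shape, consistent with the variables listed, is a weighted combination of the five per-word factors scaled by the word's memory strength score relative to the maximum; the weights below are purely illustrative, not from the patent:

```python
def composite_score(learn_count, error_count, correct_now, strength, index,
                    q_word, q_max, weights=(0.1, -0.2, 0.3, 0.25, 0.15)):
    """Hypothetical composite memory strength score P_j for one word:
    a weighted sum of the five factors of claim 8, scaled by the
    word's strength score Q_j relative to the maximum QMAX."""
    factors = (learn_count, error_count, 1.0 if correct_now else 0.0,
               strength, index)
    base = sum(w * x for w, x in zip(weights, factors))
    return base * (q_word / q_max)

score = composite_score(2, 1, True, 0.8, 0.5, q_word=4.0, q_max=8.0)
```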
9. A big data acquisition multi-core parameter self-adaptive time-sharing memory driving system is characterized by comprising:
the first learning module is used for obtaining the current learning times through account password login, performing word learning when the current learning times is 0, and recording a first word learning range and a first word learning peak value after receiving a first learning state updating command;
the automatic testing module is configured to generate a first review prompt time, a second review prompt time, a third review prompt time, a test question, a test correct answer and a test answer time limit for the word according to the first learning state updating command, the first word learning range and the first word learning peak value;
the review learning module is used for logging in through an account password to obtain the current learning times, conducting word review and learning when the current learning times is not 0, and recording the current word learning range and the current word learning peak value after receiving a second learning state updating command;
the review prompting module is used for sending a review prompting command to the user according to the first review prompting time, the second review prompting time and the third review prompting time, and generating the memory index of each word and the current memory intensity of each word after the user finishes a review task;
the first review test time determining module is used for generating a memory stock extraction function according to the memory index, the memory intensity and the current memory stock and calculating the time point of the golden memory in each memory period;
the second review test time determining module is used for extracting historical test data of corresponding users, extracting features according to the historical data of a single user, obtaining a target training function through historical data training and determining golden memory time of all the users;
and the comprehensive evaluation module is configured to obtain the current learning times and error times of each user, whether the current answer is correct, the memory strength and the memory index, and comprehensively generate the comprehensive memory strength score of each word of each user.
10. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, and wherein the one or more computer program instructions are executed by the processor to implement the steps of the method according to any one of claims 1 to 8.
CN202110435723.4A 2021-04-22 2021-04-22 Big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system Active CN113094404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110435723.4A CN113094404B (en) 2021-04-22 2021-04-22 Big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system

Publications (2)

Publication Number Publication Date
CN113094404A true CN113094404A (en) 2021-07-09
CN113094404B CN113094404B (en) 2021-11-19

Family

ID=76679253

Country Status (1)

Country Link
CN (1) CN113094404B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413478A (en) * 2013-07-09 2013-11-27 复旦大学 Word memory intelligent learning method and system thereof
CN109935120A (en) * 2019-03-15 2019-06-25 山东顺势教育科技有限公司 Multicore memory driving and its accumulating method
US20190392027A1 (en) * 2018-06-21 2019-12-26 Microsoft Technology Licensing, Llc Event detection based on text streams
CN111861374A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Foreign language review mechanism and device
CN111861372A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Method and system for testing word mastering degree
CN111861371A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Method and equipment for calculating optimal word review time

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG XIUMING: "Analysis of English Word Memorization Methods and Reinforcement Strategies in Junior Middle School", English for Middle School Students *
XU NAN: "How Junior Middle School Students Can Memorize Words Quickly and Effectively", Friends of Middle School English *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117094861A (en) * 2023-09-01 2023-11-21 吉林农业科技学院 Language learning control test system
CN117094861B (en) * 2023-09-01 2024-03-12 吉林农业科技学院 Language learning control test system

Also Published As

Publication number Publication date
CN113094404B (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN112508334B (en) Personalized paper grouping method and system integrating cognition characteristics and test question text information
US9412281B2 (en) Learning system self-optimization
US9446314B2 (en) Vector-based gaming content management
CN103559894A (en) Method and system for evaluating spoken language
CN111831831A (en) Knowledge graph-based personalized learning platform and construction method thereof
Mahana et al. Automated essay grading using machine learning
JP2018205354A (en) Learning support device, learning support system, and program
CN111753846A (en) Website verification method, device, equipment and storage medium based on RPA and AI
CN114742679A (en) Online education management control system based on internet
CN110473435A (en) A kind of the word assistant learning system and method for the quantification with learning cycle
CN113094404B (en) Big data acquisition multi-core parameter self-adaptive time-sharing memory driving method and system
CN108073603B (en) Job distribution method and device
Bexte et al. Similarity-based content scoring-how to make S-BERT keep up with BERT
CN112015783B (en) Interactive learning process generation method and system
CN114333787A (en) Scoring method, device, equipment, storage medium and program product for spoken language examination
CN112419812A (en) Exercise correction method and device
CN117332054A (en) Form question-answering processing method, device and equipment
Ling et al. Human-assisted computation for auto-grading
KR102329611B1 (en) Pre-training modeling system and method for predicting educational factors
KR102385073B1 (en) Learning problem recommendation system that recommends evaluable problems through unification of the score probability distribution form and operation thereof
KR20140051607A (en) Apparatus providing analysis information based on level of a student and method thereof
CN114510617A (en) Online course learning behavior determination method and device
CN112199476A (en) Automated decision making to select a leg after partial correct answers in a conversational intelligence tutor system
CN110008229A (en) Method based on computer long-term memory linguistic unit information for identification
CN117993366B (en) Evaluation item dynamic generation method and system, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant