CN116662527A - Method for generating learning resources and related products - Google Patents

Method for generating learning resources and related products

Info

Publication number
CN116662527A
CN116662527A
Authority
CN
China
Prior art keywords
learning
content
user
network model
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310503391.8A
Other languages
Chinese (zh)
Inventor
詹梓钊
顾红清
罗婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Youdao Information Technology Hangzhou Co Ltd
Original Assignee
Netease Youdao Information Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Youdao Information Technology Hangzhou Co Ltd filed Critical Netease Youdao Information Technology Hangzhou Co Ltd
Priority to CN202310503391.8A priority Critical patent/CN116662527A/en
Publication of CN116662527A publication Critical patent/CN116662527A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/335: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/205: Parsing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the present invention provide a method for generating learning resources and related products. The method comprises the following steps: acquiring learning interaction information generated by a user during the learning process; analyzing the learning interaction information using a trained neural network model to obtain the user's behavior characteristics and learning content characteristics; and generating learning resources associated with the user's learning state according to the behavior characteristics and the learning content characteristics. With this technical scheme, the user's actual learning state, such as learning ability and actual learning needs, can be determined from the behavior characteristics and learning content characteristics, and learning resources associated with that learning state can be generated dynamically. The user's learning resources thus change dynamically as the user's learning state changes, achieving a personalized ("a thousand users, a thousand profiles") dynamic learning effect that meets the user's actual needs.

Description

Method for generating learning resources and related products
Technical Field
Embodiments of the present invention relate to the field of information processing technology, and more particularly, to a method for generating learning resources, and an electronic device and a computer-readable storage medium that perform the foregoing method.
Background
This section is intended to provide a background or context to the embodiments of the application that are recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Accordingly, unless indicated otherwise, what is described in this section is not prior art to the description and claims of the present application and is not admitted to be prior art by inclusion in this section.
How to read and learn effectively is increasingly valued by parents, who typically purchase a reading and learning plan to help their children. Currently, mainstream reading and learning resource packages are based on content grading by user age: the user's age is input, and learning resources with a fixed theme or scope are output. Because these learning resources are obtained through content grading, their content is fixed, ultimately producing a one-size-fits-all learning outcome.
However, in practical applications, a user's learning ability and needs change dynamically, and fixed-content learning resources obviously cannot satisfy such learning needs.
Disclosure of Invention
Known fixed-content learning resources are far from ideal for assisting users in learning, which makes the learning process frustrating.
For this reason, an improved scheme for generating learning resources is highly desirable: one that can dynamically generate learning resources associated with the user's learning state so as to meet the user's actual needs.
In this context, embodiments of the present invention desire to provide a method and related product for generating learning resources.
In a first aspect of an embodiment of the present invention, a method for generating learning resources is presented, comprising: acquiring learning interaction information generated by a user in a learning process; analyzing the learning interaction information by using the trained neural network model to obtain behavior characteristics and learning content characteristics of the user; and generating learning resources associated with a user learning state according to the behavior characteristics and the learning content characteristics.
In one embodiment of the present invention, the learning interaction information includes user behavior data and learning content data, and analyzing the learning interaction information using the trained neural network model includes: analyzing the user behavior data and the learning content data separately based on the neural network model to obtain the behavior characteristics and the learning content characteristics.
In another embodiment of the present invention, the neural network model includes a first network model and a second network model, and analyzing the user behavior data and the learning content data based on the neural network model includes: performing basic text parsing on the user behavior data and the learning content data based on the first network model; and performing feature analysis on the output result of the first network model based on the second network model to obtain the behavior characteristics and the learning content characteristics.
In yet another embodiment of the present invention, the first network model and the second network model are trained based on an artificial intelligence generated content (AIGC) model.
In yet another embodiment of the present invention, generating a learning resource associated with a user learning state from the behavioral characteristics and the learning content characteristics includes: determining candidate content according to the behavior characteristics and the learning content characteristics; and screening learning resources associated with the learning state of the user from the candidate content.
In one embodiment of the invention, determining candidate content from the behavioral characteristics and the learning content characteristics includes: and matching the behavior characteristic and the learning content characteristic in a preset database to obtain the candidate content.
In another embodiment of the present invention, screening learning resources associated with the user learning state from the candidate content includes: acquiring a forward learning record of the user; performing preliminary screening on the candidate content based on the forward learning record so as to filter out content that has already been learned; and screening out the learning resources from the candidate content remaining after the preliminary screening.
In yet another embodiment of the present invention, screening the learning resources from the pre-screened candidate content includes: acquiring content, among the candidate content, whose degree of correlation with the already-learned content satisfies a predetermined threshold; sorting the acquired content according to content weight; and screening out at least one candidate content as the learning resource according to the ranking of the candidate content.
In a second aspect of the embodiments of the present invention, there is provided an electronic device, including: a processor; and a memory storing computer instructions for generating a learning resource, which when executed by the processor, cause the electronic device to perform the method according to the preceding and following embodiments.
In a third aspect of embodiments of the present invention, a computer readable storage medium is provided, containing program instructions for generating learning resources, which, when executed by a processor, cause the implementation of a method according to the foregoing and following embodiments.
According to the method for generating learning resources and the related products described above, learning resources can be dynamically generated according to the behavior characteristics and learning content characteristics parsed from the learning interaction information. The scheme of the invention can thus use the user's behavior characteristics and learning content characteristics to determine the actual learning state, such as learning ability and actual learning needs, and dynamically generate learning resources associated with that state. The user's learning resources therefore change dynamically as the learning state changes, achieving a personalized ("a thousand users, a thousand profiles") dynamic learning effect that meets the user's actual needs.
In addition, in some embodiments of the present invention, the trained neural network model may include a first network model supporting basic text parsing and a second network model supporting feature parsing. Basic big data analysis can then be performed by the first network model to improve the operating efficiency of the whole network model, while the second network model is overlaid to perform further fine-grained analysis and improve the operating accuracy of the whole network model. In this way, the neural network model as a whole achieves both efficiency and accuracy when analyzing learning interaction information.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the invention;
FIG. 2 schematically illustrates a flow diagram of a method for generating learning resources according to one embodiment of the invention;
FIG. 3 schematically illustrates a flow diagram of a method for generating learning resources according to another embodiment of the invention;
FIG. 4 schematically illustrates a flow diagram of a method for generating learning resources according to yet another embodiment of the invention;
FIG. 5 schematically illustrates a schematic diagram of a training process of an AIGC model according to an embodiment of the invention;
FIG. 6 schematically illustrates a diagram of a process of parsing user behavior features and learning content features based on an AIGC model according to an embodiment of the invention;
FIG. 7 schematically illustrates a schematic diagram of a process for adaptively generating learning resources according to an embodiment of the present invention;
FIG. 8 schematically illustrates a schematic diagram of a user behavior feature and learning content feature matching calculation process according to an embodiment of the present invention; and
fig. 9 schematically shows a structural diagram of an electronic device according to an embodiment of the invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable those skilled in the art to better understand and practice the invention and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 illustrates a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the invention. As shown in fig. 1, a computing system 100 may include: a Central Processing Unit (CPU) 101, a Random Access Memory (RAM) 102, a Read Only Memory (ROM) 103, a system bus 104, a hard disk controller 105, a keyboard controller 106, a serial interface controller 107, a parallel interface controller 108, a display controller 109, a hard disk 110, a keyboard 111, a serial peripheral 112, a parallel peripheral 113, and a display 114. Of these devices, coupled to the system bus 104 are a CPU 101, a RAM 102, a ROM 103, a hard disk controller 105, a keyboard controller 106, a serial controller 107, a parallel controller 108, and a display controller 109. The hard disk 110 is coupled to the hard disk controller 105, the keyboard 111 is coupled to the keyboard controller 106, the serial external device 112 is coupled to the serial interface controller 107, the parallel external device 113 is coupled to the parallel interface controller 108, and the display 114 is coupled to the display controller 109. It should be understood that the block diagram depicted in FIG. 1 is for illustrative purposes only and is not intended to limit the scope of the present invention. In some cases, some devices may be added or subtracted as the case may be.
Those skilled in the art will appreciate that embodiments of the invention may be implemented as a system, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms, namely: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software, generally referred to herein as a "circuit," "module," "unit," or "system." Furthermore, in some embodiments, the invention may also be embodied in the form of a computer program product in one or more computer-readable media containing computer-readable program code.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the internet using an internet service provider).
Embodiments of the present invention will be described below with reference to flowchart illustrations of methods and block diagrams of apparatus (or systems) according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
According to an embodiment of the invention, a method for generating learning resources and related products are presented. Furthermore, the number of any elements shown in the figures is illustrative rather than limiting, and any naming is used only for distinction and carries no limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments thereof.
Summary of The Invention
The inventors found that existing learning resources with fixed content are not effective in assisting users to learn. Specifically, the currently mainstream reading and learning solutions mainly provide fixed reading resource packages and only support the user selecting a specific learning resource for fixed learning. For example, content may be pre-graded to obtain resource package 1, resource package 2, resource package 3, and so on, where the theme, content, etc. of each resource package is fixed. In actual use, a user selects the resource package corresponding to their grade according to their actual age and then completes the learning of that fixed content. The resource packages available to different users in the same age range are therefore fixed and identical in content, ultimately producing a one-size-fits-all learning outcome. However, each user's learning ability and needs change dynamically during the learning process, and fixed resource packages obviously cannot meet these actual learning needs.
In the related art, a scheme for supporting a user-defined learning plan may be added on the basis of content classification. However, the custom dimension is mainly focused on learning time management, such as custom adjustment for learning duration and learning frequency. This custom adjustment of learning frequency or duration involves only a change in learning time, and does not change the content of a specific resource package, which still fails to meet the user's dynamic learning needs.
Based on this, the inventors found through research that the user's behavior characteristics and learning content characteristics can be used to determine an actual learning state, such as learning ability and actual learning needs, and to dynamically generate learning resources associated with that state. The user's learning resources can thus change dynamically as the learning state changes, meeting the user's actual learning needs.
Having described the basic principles of the present invention, various non-limiting embodiments of the invention are described in detail below.
Exemplary method
A method for generating learning resources according to an exemplary embodiment of the present invention is described below with reference to fig. 2. It should be noted that embodiments of the present invention may be applied to any scenario where applicable.
Fig. 2 schematically shows a flow diagram of a method 200 for generating learning resources according to one embodiment of the invention.
As shown in fig. 2, at step S201, learning interaction information generated by a user during learning may be acquired. Learning interaction information can be understood as information generated by a user in various learning scenarios, and may include, for example, the user's age, school year, learning duration, learning content, content theme, content difficulty, feedback from test exercises, and the like. In a specific application, the learning interaction information may take various forms, such as pictures, videos, and text, and may be obtained in various ways. For example, in some embodiments, an information input interface may be presented on which the user inputs the learning interaction information described above. In yet other embodiments, an online learning system may provide the user with a learning scenario and collect the various data generated while the user learns in that scenario. In addition, the various smart learning terminals used by the user (such as dictionary pens and smart learning desk lamps) may be linked, and the learning interaction information obtained from them. It should be noted that this detailed description of learning interaction information is only exemplary; the scheme of the present invention is not limited thereto and may be adjusted for the actual application scenario.
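As a concrete illustration (not part of the patent text), the kinds of learning interaction information listed above could be gathered into a simple record structure and merged from several collection channels. All field and function names here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class LearningInteraction:
    """One record of learning interaction information (hypothetical schema)."""
    user_age: int
    school_year: int
    study_minutes: int        # learning duration for this session
    content_id: str           # identifier of the learning content
    content_topic: str        # e.g. "history", "plants"
    content_difficulty: int   # e.g. 1 (easy) .. 5 (hard)
    exercise_feedback: float  # e.g. fraction of test exercises answered correctly


def collect_interactions(*sources):
    """Merge records gathered from several channels: manual input via an
    information input interface, an online learning system, or linked
    smart learning terminals such as dictionary pens."""
    merged = []
    for source in sources:
        merged.extend(source)
    return merged
```

In this sketch each channel simply contributes a list of records; a real system would also deduplicate and timestamp them.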
Next, at step S202, the learning interaction information may be analyzed using the trained neural network model to obtain the user's behavior characteristics and learning content characteristics. The neural network model is trained in advance; after the learning interaction information is acquired, it can be input into the neural network model, which parses the user's behavior characteristics and learning content characteristics out of it. For example, in some implementations, the learning interaction information may include the user's age, reading interests, reading ability, and reading records, as well as the grade, type, difficulty, and similar attributes of the content itself. The neural network model can analyze the user's behavior data, such as age, reading interests, reading ability, and reading records, to obtain behavior characteristics such as that an X-grade user likes reading forensic-themed books and has strong autonomous reading ability. Likewise, the neural network model can analyze content data such as the grade, type, and difficulty of the content to obtain learning content characteristics such as that content on certain subjects, for example history- and plant-themed content, is suitable for X-grade users to read. It should be noted that this detailed description of the behavior characteristics and learning content characteristics is only exemplary. In a specific application, the learning interaction information changes dynamically, and the parsed behavior characteristics and learning content characteristics change dynamically with it.
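The patent performs this step with a trained neural network; purely to make the input/output shape of step S202 concrete, here is a toy rule-based stand-in that maps interaction records to the two feature groups. All keys, labels, and thresholds are invented for illustration:

```python
def analyze_interactions(records):
    """Toy stand-in for the trained neural network model: derives behavior
    features and learning-content features from interaction records.
    Each record is a dict; keys and thresholds are hypothetical."""
    topics = [r["topic"] for r in records]
    avg_score = sum(r["score"] for r in records) / len(records)
    behavior_features = {
        # most frequently read topic stands in for "reading interest"
        "preferred_topic": max(set(topics), key=topics.count),
        # high average exercise scores stand in for "autonomous reading ability"
        "autonomous_reading": avg_score >= 0.7,
    }
    content_features = {
        # step the suitable difficulty up by one when the user excels
        "suitable_difficulty": max(r["difficulty"] for r in records)
        + (1 if avg_score >= 0.9 else 0),
    }
    return behavior_features, content_features
```

Because the inputs change dynamically, repeated calls with fresh records yield updated features, mirroring the dynamic behavior described above.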
Finally, at step S203, learning resources associated with the user' S learning state may be generated from the aforementioned behavior features and learning content features.
Wherein the aforementioned behavioral characteristics may characterize the learning capabilities of the user, and the learning content characteristics may be used to characterize the user's needs for learning content. Thus, the actual learning state of the user can be determined by the behavior characteristics and the learning content characteristics, and learning resources associated with the learning state of the user can be dynamically generated.
According to the scheme described above, the user's learning resources can change dynamically as the user's learning state changes, achieving a personalized ("a thousand users, a thousand profiles") dynamic learning effect that meets the user's actual needs.
Fig. 3 schematically shows a flow diagram of a method 300 for generating learning resources according to another embodiment of the invention. It is to be appreciated that the method 300 is a further definition and/or extension of the method 200 of fig. 2. Accordingly, the foregoing detailed description in connection with fig. 2 applies equally as well to the following.
As shown in fig. 3, at step S301, user behavior data and learning content data generated by the user during learning may be acquired. In the present embodiment, learning interactive information can be largely divided into user behavior data and learning content data. The user behavior data may include, but is not limited to, various data capable of reflecting the learning ability of the user, such as the school age, learning duration, learning record, and the like of the user. The learning content data may include, but is not limited to, data capable of embodying information of the content itself, such as a grade, a genre, a difficulty, and the like to which the content belongs. In specific application, the user behavior data and the learning content data can be obtained through various modes such as active uploading of a user, automatic acquisition in a background or intelligent learning terminals such as dictionary pens and desk lamps used in the user learning process. It should be noted that the detailed descriptions of the user behavior data and the learning content data are merely exemplary, and the aspects of the present invention are not limited thereto.
Next, at step S302, the user behavior data and learning content data may be subjected to basic text parsing using the first network model. In some embodiments, the trained neural network model may specifically include a first network model and a second network model. The first network model may be used to address the computational-load problem of the entire neural network model and is specifically configured to perform basic text parsing on the user behavior data and learning content data. For example, it may perform basic parsing operations such as data cleansing, data format conversion, and data normalization.
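The three basic parsing operations named above (cleansing, format conversion, normalization) can be sketched as one small preprocessing function; this is an illustrative stand-in for the first network model's job, with hypothetical field names:

```python
def basic_text_parse(raw_records):
    """Sketch of the basic parsing stage: data cleansing, format
    conversion, and normalization over raw interaction records."""
    cleaned = []
    for rec in raw_records:
        text = rec.get("text", "").strip()
        if not text:                             # cleansing: drop empty records
            continue
        minutes = float(rec.get("minutes", 0))   # format conversion: str -> float
        cleaned.append({"text": text.lower(), "minutes": minutes})
    # normalization: scale learning durations into [0, 1]
    max_minutes = max((r["minutes"] for r in cleaned), default=1.0) or 1.0
    for r in cleaned:
        r["minutes"] /= max_minutes
    return cleaned
```

The output is uniform, numeric-ready data that a downstream feature-analysis stage can consume directly.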
At step S303, the output result of the first network model may be further subjected to feature analysis using the second network model to obtain the behavior characteristics and learning content characteristics. The second network model is specifically configured to perform feature extraction on the output produced by the first network model: the first network model takes on the large-scale basic data analysis, while the second network model is responsible for precise feature analysis.
It should be noted that the deep neural network model responsible for analyzing the learning interaction information (including the user behavior data and the learning content data) here adopts a dual-network architecture (i.e., a first network model and a second network model). In some embodiments, the first network model and the second network model may be trained based on an artificial intelligence generated content (AI Generated Content, AIGC) model. An AIGC model can be understood as a large language model similar to GPT-3.0 and later, with strong text parsing and content generation capability. Basic big data analysis is performed by the trained first network model to improve the operating efficiency of the whole network, and the trained second network model is overlaid to perform further fine-grained analysis and improve the operating accuracy of the whole network. In this way, the neural network model as a whole achieves both efficiency and accuracy when analyzing learning interaction information.
The above parsing of data using a dual network model is merely exemplary. For example, in practical applications, the deep neural network model may instead adopt a single-network architecture that implements both the basic text parsing operation and the feature parsing operation.
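As an illustration of the two-stage pipeline described above, the following minimal Python sketch stands in for the dual network model. The functions `base_parse` and `extract_features` are hypothetical placeholders (a real system would use trained neural networks); they are shown only to make the division of labor between the two stages concrete.

```python
# Hypothetical sketch of the dual-network parsing pipeline: the first stage
# handles basic text parsing (cleansing, format conversion, normalization),
# the second extracts features from the first stage's output.

def base_parse(raw_records):
    """First stage: data cleansing, format conversion, normalization."""
    cleaned = []
    for record in raw_records:
        text = str(record).strip().lower()  # cleansing + normalization
        if text:                            # drop empty records
            cleaned.append(text)
    return cleaned

def extract_features(parsed_records):
    """Second stage: turn parsed records into feature tags (toy version)."""
    features = {}
    for text in parsed_records:
        for token in text.split():
            features[token] = features.get(token, 0) + 1
    return features

behavior_raw = ["  Reads DETECTIVE books ", "", "reads detective books"]
features = extract_features(base_parse(behavior_raw))
```

In the real system each stage would be a trained model; the point of the sketch is only that the first stage's output is the second stage's input.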
After the behavior features and the learning content features are obtained, at step S304, candidate content may be determined from the behavior features and the learning content features. For example, in some embodiments, the foregoing behavior features and learning content features may be matched against a predetermined database to obtain the candidate content. It should be noted that this description of the candidate content determination process is only exemplary, and the present invention is not limited thereto. For example, the obtained behavior features and learning content features may instead be uploaded to a cloud or server side for matching comparison, so as to obtain the candidate content.
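The database matching step can be sketched as a simple tag-overlap lookup. The function and field names below are illustrative assumptions, not part of the patent; a production system would use a richer similarity measure.

```python
# Illustrative sketch: match user feature tags against a labeled content
# database to obtain candidate content. Any item sharing at least one tag
# with the user's tags qualifies as a candidate.

def match_candidates(user_tags, database):
    """Return IDs of items whose content tags overlap the user's tags."""
    candidates = []
    for item in database:
        if user_tags & item["tags"]:  # non-empty set intersection
            candidates.append(item["id"])
    return candidates

database = [
    {"id": "book-1", "tags": {"detective", "grade-1"}},
    {"id": "book-2", "tags": {"history", "grade-3"}},
    {"id": "quiz-1", "tags": {"detective", "multiple-choice"}},
]
user_tags = {"detective", "grade-1"}
candidates = match_candidates(user_tags, database)
```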
Finally, at step S305, learning resources associated with the user's learning state may be screened from the foregoing candidate content. Specifically, a forward learning record of the user may be obtained. For example, the forward learning record may be obtained in a number of ways, such as active uploading by the user or linkage with other intelligent learning terminals. The forward learning record may include content the user has already finished learning, content the user is not interested in, and the like. The candidate content may then be preliminarily screened based on the forward learning record, filtering out content that has already been learned or that the user is not interested in. Learning resources are then screened from the candidate content remaining after this preliminary screening.
In some embodiments, content whose relevance to the already-learned content satisfies a predetermined threshold may be selected from the candidate content. The selected content may then be ranked according to content weight, and at least one candidate item may be screened out as a learning resource based on this ranking. The higher the content weight of a candidate item, the stronger its association with the user's learning state, the higher its rank, and thus the higher the probability of it being selected as a learning resource. The predetermined threshold can be set and adjusted according to actual application requirements. The content weight may be determined from a manually set score and a system-computed score. The system-computed score is a weighted sum of the content's computed metrics, and the relevant metrics (such as the content's click-through performance, search performance, and reading performance on the platform) can be adjusted according to actual requirements. It should be noted that this detailed description of the learning resource screening process is merely illustrative, and the present invention is not limited thereto.
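The filter-then-rank screening just described might be sketched as follows. The field names, threshold value, and top-N count are all illustrative assumptions.

```python
# Minimal sketch of the screening step: drop already-learned items, keep
# those whose relevance to completed content meets a threshold, then rank
# the survivors by content weight and take the top N.

def screen_resources(candidates, learned_ids, relevance, weights,
                     threshold=0.5, top_n=2):
    remaining = [c for c in candidates if c not in learned_ids]
    eligible = [c for c in remaining if relevance.get(c, 0.0) >= threshold]
    eligible.sort(key=lambda c: weights.get(c, 0.0), reverse=True)
    return eligible[:top_n]

candidates = ["book-1", "book-2", "quiz-1", "quiz-2"]
learned = {"book-2"}                               # from the learning record
relevance = {"book-1": 0.9, "quiz-1": 0.6, "quiz-2": 0.3}
weights = {"book-1": 4.2, "quiz-1": 4.8, "quiz-2": 2.0}
resources = screen_resources(candidates, learned, relevance, weights)
```

Here "quiz-2" is dropped by the relevance threshold and "book-2" by the learning record; the rest are ordered by weight.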
In this way, the user's actual learning state, such as learning ability and actual learning needs, is determined from the user's behavior features and learning content features, and learning resources associated with that state are generated dynamically. The user's learning resources can thus change dynamically as the user's learning state changes, truly achieving a personalized, per-user learning experience. In addition, with the text parsing and content generation capabilities of the AIGC model, the efficiency of matching users with content is greatly improved, and the adaptivity of learning resources is effectively realized.
Fig. 4 schematically illustrates a flow chart of a method 400 for generating learning resources according to yet another embodiment of the invention. Method 400 may be understood as one specific technical implementation of method 200 or method 300; the foregoing descriptions of the relevant details of figs. 2 and 3 therefore apply equally here.
As shown in fig. 4, at step S401, the educational-scenario AIGC model may be trained. In some embodiments, an AIGC model training platform may be provided, and model training may be performed on a dataset for the current business scenario (e.g., an educational scenario) to produce a model with sufficient understanding of that scenario's business.
Fig. 5 illustrates one possible training approach for the educational-scenario AIGC model. As shown in fig. 5, at step S501, data preparation may be performed. In particular, the educational-scenario AIGC model may include an AIGC base large model (i.e., the aforementioned first network model) and an AIGC scenario small model (i.e., the aforementioned second network model). Thus, in the data preparation phase, training data and test data can be prepared for the two models separately.
At step S502, model selection is performed. Specifically, an appropriate AIGC model needs to be selected as the base model for training in light of business requirements. In some embodiments, multiple dimensions, including data preprocessing, feature selection, machine learning algorithm, and evaluation method, need to be considered during model selection. For example, the AIGC base large model needs to support basic text parsing, which requires strong data processing capability to address the computational-power problem. In some implementation scenarios, the GPT-4 language model, the BERT language representation model, the robustly optimized RoBERTa model, and the like may be adopted as the base model for training the AIGC base large model. The AIGC scenario small model needs to support feature parsing, which requires precise parsing capability to address the precision problem. In some implementations, the PEGASUS model, the unified language model (UniLM), or the like may be employed as the base model for training the AIGC scenario small model. It should be noted that this listing of usable base models is merely illustrative, and the present invention is not limited thereto.
At step S503, data preprocessing may be performed. For example, the training data may be preprocessed, including data cleansing, data format conversion, data normalization, and the like. At step S504, feature selection may be performed. Specifically, when selecting features from the training data, the features most helpful to the problem should be chosen. For example, the number of years a user has attended school may reflect the user's learning ability more strongly than the user's age, in which case the years of schooling may be selected as a behavior feature. At step S505, model training may be performed. After the training data, base model, and so on are prepared, the AIGC model serving as the base model may be trained with the training data, continually adjusting the model parameters until performance is optimal. At step S506, model evaluation may be performed. Specifically, the trained model may be evaluated using the aforementioned test data, and further optimized according to the evaluation results. At step S507, the model may be applied. Specifically, the trained model can be applied to actual problems (such as extracting behavior features and learning content features in an educational scenario) for prediction and decision-making. At step S508, model iteration may be performed. Specifically, the model can be iterated and optimized according to its prediction results in practical applications, to ensure its effectiveness and reliability. This completes the training of the educational-scenario AIGC model.
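The S503-S508 cycle of preprocessing, training, evaluation, and iteration can be illustrated with a deliberately toy model. Real training would fine-tune an AIGC base model such as BERT or PEGASUS; the placeholder below only mirrors the shape of the loop, and every function name is an assumption.

```python
# Toy illustration of the preprocess -> train -> evaluate cycle. The
# "model" is just a token-frequency table standing in for a trained network.

def preprocess(samples):
    """S503: cleansing and normalization."""
    return [s.strip().lower() for s in samples if s.strip()]

def train(samples):
    """S505: 'train' a placeholder model by memorizing token counts."""
    model = {}
    for s in samples:
        for tok in s.split():
            model[tok] = model.get(tok, 0) + 1
    return model

def evaluate(model, test_samples):
    """S506: fraction of test tokens the model has seen (toy metric)."""
    tokens = [t for s in test_samples for t in s.split()]
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in model) / len(tokens)

train_data = preprocess([" Grade 1 likes detective ", "reads daily"])
model = train(train_data)
score = evaluate(model, ["grade 1 reads"])
```

If the evaluation score is unsatisfactory, the cycle repeats (S508) with adjusted data or parameters before the model is deployed (S507).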
In this embodiment, the educational-scenario AIGC model includes an AIGC base large model and an AIGC scenario small model, so the training process needs to train these two models separately. Of course, if the AIGC base large model can already perform basic text parsing well on its own, training may be carried out only for the AIGC scenario small model. In addition, in practical applications, the educational-scenario AIGC model may instead be a single AIGC model with both basic text parsing and feature parsing capabilities. Training of such a single AIGC model may follow the model training procedure described above and is not repeated here.
Returning to fig. 4, at step S402, the trained educational-scenario AIGC model may be used to parse the user behavior and learning content. Specifically, user behavior data and learning content data are input and parsed by the trained educational-scenario AIGC model to generate the corresponding user behavior feature data and learning content feature data, which are used for establishing and applying the subsequent matching relationship.
FIG. 6 illustrates one possible way of parsing user behavior and learning content using the trained educational-scenario AIGC model. As shown in fig. 6, during model selection, the AIGC base large model supporting basic text parsing and the AIGC scenario small model supporting feature parsing may be used together in a stacked manner. In practical applications, the proportion of information processed by the educational-scenario AIGC model can be tuned by adjusting the degree of influence of each constituent model.
Next, at step S403, matching of learning resources may be performed. Specifically, the obtained behavior features and learning content features may undergo matching calculation against the database to finally screen out the learning resources associated with the user's learning state. As shown in fig. 7, the user's state changes dynamically during the learning process, and learning resources associated with the different user states can be matched according to the behavior features and learning content features. For example, in user state 1, learning resources 1 to 4 may be obtained by matching. As the user state changes to user state 2, learning resource 1 and learning resources 5 to 7 may be obtained by matching. When it changes to user state 3, learning resources 8 to 10 may be obtained by matching. It should be noted that this user state change process and its associated learning resources are merely examples; neither the division of user states nor the number of learning resources is limited.
In some embodiments, the matching calculation between behavior features and learning content features may specifically involve screening, filtering, ranking, and scoring the content. Fig. 8 schematically illustrates one possible matching calculation procedure for behavior features and learning content features. As shown in fig. 8, at step S801, content may be screened. Specifically, the user features and learning content features produced by the trained AIGC model are matched in a database. For example, the database contains a plurality of labeled learning resources, and candidate content can be obtained by matching user behavior features (i.e., user labels) such as the user's age, reading ability, and reading interests against learning content features (i.e., content labels) such as the content's suitable reading age, topic, and adaptability. Next, at step S802, filtering may be performed. Specifically, the user's forward reading record may be used to filter out candidate content the user has already finished learning.
Then, at step S803, ranking may be performed. Specifically, according to a correlation index between items of content, content highly correlated with what the user has currently finished learning (for example, by content topic or content adaptability level) may be obtained preferentially, and the content is then ranked by content weight. The content weight may comprise two parts: manual intervention (i.e., the manually set score described above, which may account for, e.g., 50% of the total) and the system-computed score (which may account for, e.g., the remaining 50%). In some embodiments, the manual intervention score may be provided by a platform reading expert based on industry experience, with a value range of 0 to 5 points (the specific range is not limited and may be adjusted to actual needs). The system-computed score may be obtained by a weighted calculation over the content's metrics on the platform, such as click-through performance, search performance, reading performance, and payment conversion, also with a value range of 0 to 5 points (again adjustable to actual needs). For example: system-computed score = click score × 0.2 + search score × 0.3 + reading score × 0.3 + payment conversion score × 0.2. The metrics involved and the proportions in this formula are merely exemplary and can be adjusted according to the actual business situation.
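The weighting scheme above can be written out directly. The 50/50 split between the expert score and the system score, and the 0.2/0.3/0.3/0.2 metric proportions, come from the text's own example; the concrete input scores below are made up for illustration.

```python
# Content weight = 50% human (expert) score + 50% system-computed score,
# where the system score is the weighted sum of platform metrics, each
# scored on a 0-5 scale, per the exemplary formula in the text.

def system_score(click, search, reading, payment):
    """Weighted sum of the platform metrics (each in [0, 5])."""
    return 0.2 * click + 0.3 * search + 0.3 * reading + 0.2 * payment

def content_weight(human_score, sys_score, human_share=0.5):
    """Combine the expert score and the system score (each in [0, 5])."""
    return human_share * human_score + (1 - human_share) * sys_score

s = system_score(click=4.0, search=3.0, reading=5.0, payment=2.0)  # 3.6
w = content_weight(human_score=4.0, sys_score=s)                   # 3.8
```

Because both inputs lie in [0, 5] and the shares sum to 1, the combined weight also stays in [0, 5], which keeps the ranking scale uniform across candidates.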
Finally, at step S804, selection by score is performed. Specifically, the candidate content may be ordered by content weight from high to low and then selected from high to low according to the number of content resources to be output (which may be adjusted to the scenario's requirements). This completes the matching calculation between user behavior features and learning content features.
Returning to fig. 4, at step S404, the adaptive learning resources may be updated. A user who keeps learning adaptively obtains matching learning resources as their learning behaviors and abilities change during the learning process. One or more learning resources may be associated with each user state, and they may be presented to the user in sequence or all at once for the user to choose from freely. In addition, the specific type of learning resource is not limited here and may include, for example, various content suitable for the user to learn, such as reading content and practice questions.
The solution of the invention is further explained below in connection with a specific application scenario.
An example application scenario: for a grade 1 user, generate learning resources that help the user with reading practice. The learning materials may include, but are not limited to, books, questions, and the like.
To meet the needs of this application scenario, training data and test data, such as user reading behavior data and reading content data from the reading-learning scenario, may first be collected for model training, to obtain an AIGC model that fully understands users' reading abilities and the reading content. Then, behavior data such as the user's age, reading interests, reading ability, and reading records can be fed into the trained AIGC model for analysis, which outputs the computed behavior features, for example, that the grade 1 child likes reading books on detective themes and has strong independent reading ability. Meanwhile, learning content data such as the content's grade level, type, and difficulty can be fed into the trained AIGC model for analysis, which outputs learning content features, for example, that the content suitable for a grade 1 child covers subject areas such as science enlightenment, history, and plants, along with the content's difficulty level. It should be noted that the parsing of behavior features and learning content features is dynamically updated.
The obtained user behavior features and learning content features may then be matched. For example: the grade 1 user likes detective themes and has strong reading ability, so books with detective themes and reading-ability requirements above grade 1 are matched, along with reading comprehension questions (such as multiple-choice questions) suited to the user's ability. Finally, as the user's reading ability and needs change over the course of learning, corresponding learning resources can be obtained adaptively. For example, although the user is in grade 1, higher-grade reading books and more difficult reading questions may be recommended based on the user's reading history over a period of time. Similarly, if the user's reading results are poor over a period of time and reading ability has not improved, easier learning content and simpler reading questions can continue to be pushed.
In this way, the user information computed from the AIGC model's analysis results is matched against the content, and through dynamic matching the effect of adaptively generating different learning resources for different user states is achieved.
Exemplary apparatus
Having introduced the method of the exemplary embodiment of the present invention, related products of the method for generating learning resources according to the exemplary embodiment of the present invention are described next with reference to fig. 9.
Fig. 9 schematically shows a block diagram of an electronic device 900 according to an embodiment of the invention. As shown in fig. 9, the electronic device 900 may include a processor 901 and a memory 902, in which the memory 902 stores computer instructions for generating learning resources. When executed by the processor 901, the instructions cause the electronic device 900 to perform the methods described above in connection with figs. 2-5 and 8. For example, in some embodiments, the electronic device 900 may be configured to obtain learning interaction information, train a neural network model that supports analysis of the learning interaction information, parse behavior features and learning content features, adaptively and dynamically generate learning resources associated with the user's learning state, and so on. On this basis, the electronic device 900 determines the user's actual learning state, such as learning ability and actual learning needs, from the user's behavior features and learning content features, and dynamically generates learning resources associated with that state, so that the learning resources change dynamically with the user's learning state, truly achieving a personalized, per-user learning experience.
It should be noted that although several means or sub-means of the device are mentioned in the above detailed description, this division is not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more of the means described above may be embodied in a single means. Conversely, the features and functions of one means described above may be further divided into and embodied by multiple means.
Use of the verbs "comprise" and "include" and their conjugations in this application does not exclude the presence of elements or steps other than those stated in the application. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
While the spirit and principles of the present invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor does the division into aspects imply that features of those aspects cannot be usefully combined; such division is merely for convenience of description. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (10)

1. A method for generating learning resources, comprising:
acquiring learning interaction information generated by a user in a learning process;
analyzing the learning interaction information by using the trained neural network model to obtain behavior characteristics and learning content characteristics of the user; and
And generating learning resources associated with the learning state of the user according to the behavior characteristics and the learning content characteristics.
2. The method of claim 1, wherein the learning interaction information includes user behavior data and learning content data, and wherein analyzing the learning interaction information using the trained neural network model comprises:
and analyzing the user behavior data and the learning content data based on a neural network model respectively to obtain the behavior characteristics and the learning content characteristics.
3. The method of claim 2, wherein the neural network model comprises a first network model and a second network model, and analyzing the user behavior data and the learning content data based on the neural network model comprises:
performing basic text parsing on the user behavior data and the learning content data based on the first network model; and
and carrying out feature analysis on the output result of the first network model based on the second network model so as to obtain the behavior feature and the learning content feature.
4. The method of claim 3, wherein the first network model and the second network model are trained based on an artificial intelligence generation content AIGC model.
5. The method of claim 1, wherein generating a learning resource associated with a user learning state from the behavioral characteristics and the learning content characteristics comprises:
determining candidate content according to the behavior characteristics and the learning content characteristics; and
learning resources associated with a user learning state are screened from the candidate content.
6. The method of claim 5, wherein determining candidate content from the behavioral characteristics and the learning content characteristics comprises:
and matching the behavior characteristic and the learning content characteristic in a preset database to obtain the candidate content.
7. The method of claim 5, wherein screening learning resources associated with a user learning state from the candidate content comprises:
acquiring a forward learning record of the user;
performing preliminary screening on the candidate content based on the forward learning record so as to filter the content which has completed learning in the candidate content; and
and screening the learning resources from the candidate content after the primary screening.
8. The method of claim 7, wherein screening the learning resources from the pre-screened candidate content comprises:
acquiring, from the candidate content, content whose degree of correlation with the content for which learning has been completed meets a preset threshold;
sorting the acquired content according to the content weight; and
and screening at least one candidate content as the learning resource according to the ranking of the candidate content.
9. An electronic device, comprising:
a processor; and
a memory storing computer instructions for generating a learning resource, which when executed by the processor, cause the electronic device to perform the method of any of claims 1-8.
10. A computer readable storage medium, characterized by containing program instructions for generating learning resources, which when executed by a processor, cause the method according to any of claims 1-8 to be implemented.
CN202310503391.8A 2023-04-28 2023-04-28 Method for generating learning resources and related products Pending CN116662527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310503391.8A CN116662527A (en) 2023-04-28 2023-04-28 Method for generating learning resources and related products


Publications (1)

Publication Number Publication Date
CN116662527A true CN116662527A (en) 2023-08-29

Family

ID=87718082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310503391.8A Pending CN116662527A (en) 2023-04-28 2023-04-28 Method for generating learning resources and related products

Country Status (1)

Country Link
CN (1) CN116662527A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117094360A (en) * 2023-10-18 2023-11-21 杭州同花顺数据开发有限公司 User characterization extraction method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination