CN117473076A - Knowledge point generation method and system based on big data mining - Google Patents


Info

Publication number: CN117473076A (granted as CN117473076B)
Application number: CN202311819594.4A
Authority: CN (China)
Prior art keywords: text, course, evaluation index, knowledge point, editing
Legal status: Granted; Active (the legal status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 黎国权, 朱晖
Current and original assignee: Guangdong Xinjufeng Technology Co ltd (the listed assignee may be inaccurate)
Application filed by Guangdong Xinjufeng Technology Co ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0499 Feedforward networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The knowledge point generation method and system based on big data mining comprehensively consider editing behavior intention and trajectory, so that course knowledge point labeling text blocks adapted to the needs of different learners can be generated flexibly, providing personalized learning support. Generating knowledge points from quantized semantic representations, editing behavior intention characteristics, and editing behavior track characteristics conserves resources: quantizing the semantic representation effectively compresses and stores large amounts of text information, reducing storage requirements. At the same time, using the editing behavior intention characteristics and editing behavior track characteristics to generate knowledge points lets the system respond accurately to learners' needs and behaviors, avoiding unnecessary redundant generation and improving the accuracy and flexibility of the generated results.

Description

Knowledge point generation method and system based on big data mining
Technical Field
The application relates to the technical field of big data, in particular to a knowledge point generation method and system based on big data mining.
Background
An offline learning platform is an educational or learning application that can be used without a network connection. These platforms are typically provided for download or installation onto a device so that users can access course content, learning materials, practice problems, and other learning resources without an internet connection. An offline learning platform lets users learn at their own time and place, unconstrained by network access, and can run on personal computers, tablet computers, smartphones, and other devices.
As offline learning platforms become increasingly popular, demands for optimizing course resource texts for offline learning are growing, for example the demand to generate knowledge points from course resource texts. However, conventional knowledge point generation techniques suffer from high resource overhead, low precision, and low flexibility.
Disclosure of Invention
In order to improve the technical problems in the related art, the application provides a knowledge point generation method and system based on big data mining.
In a first aspect, an embodiment of the present application provides a knowledge point generating method based on big data mining, which is applied to a big data mining system, and the method includes:
when learning behavior analysis is carried out on a target offline learning user to obtain a course resource text to be processed, initial editing text block distribution data of the target offline learning user in the course resource text to be processed is obtained;
based on the initial editing text block distribution data, text description mining is carried out by a text description mining component of a course knowledge point generation network, and target text description quantization semantics are obtained;
based on the target text description quantization semantics, performing behavior intention analysis by a behavior intention analysis component of the course knowledge point generation network to obtain editing behavior intention characteristics and editing behavior track characteristics of each learning content editing text block of the target offline learning user;
and generating course knowledge points through a course knowledge point generating component of the course knowledge point generation network based on the editing behavior intention characteristic and the editing behavior track characteristic to obtain a course knowledge point labeling text block of the corresponding learning content editing text block.
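The four claimed stages can be sketched end to end. Everything below is a hypothetical illustration: the function names, the toy feature vectors, and the placeholder logic are assumptions for clarity, not the patented implementation.

```python
# Hypothetical sketch of the four claimed stages; all names and logic
# are illustrative assumptions, not the patented implementation.

def mine_text_description(edit_block_positions):
    # Stage 2: map edited-block position data to quantized semantics
    # (here: a toy fixed-length numeric vector).
    return [len(edit_block_positions), sum(p for p, _ in edit_block_positions)]

def resolve_behavior_intent(semantics):
    # Stage 3: derive intent and trajectory features per edited block.
    intent = [s * 0.5 for s in semantics]
    trajectory = [s - min(semantics) for s in semantics]
    return intent, trajectory

def generate_knowledge_points(intent, trajectory):
    # Stage 4: produce labelled text blocks from the two feature sets.
    return [f"kp_{i}" for i, _ in enumerate(zip(intent, trajectory))]

def knowledge_point_pipeline(edit_block_positions):
    # Stage 1's output (block position data) feeds the chain.
    semantics = mine_text_description(edit_block_positions)
    intent, trajectory = resolve_behavior_intent(semantics)
    return generate_knowledge_points(intent, trajectory)
```

For example, `knowledge_point_pipeline([(3, 9), (12, 20)])` runs two edited blocks (given as start/end offsets) through all four stages and returns one knowledge-point label per mined feature.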
In some embodiments, the text description mining component includes a first text description mining branch and a first text description splicing branch, and the performing text description mining through the text description mining component of the course knowledge point generation network based on the initial editing text block distribution data to obtain the target text description quantization semantics includes:
obtaining text description mining information of the initial editing text block distribution data;
generating a first text description quantization semantic corresponding to the to-be-processed course resource text through the first text description mining branch based on the text description mining information of the initial editing text block distribution data;
and performing text description splicing on the first text description quantified semantics corresponding to the to-be-processed course resource text and the first text description quantified semantics corresponding to the last course resource text through the first text description splicing branch to obtain the target text description quantified semantics, wherein the last course resource text and the to-be-processed course resource text are in the same course resource text set, and the last course resource text in the course resource text set is before the to-be-processed course resource text and is associated with the to-be-processed course resource text.
In some embodiments, the text description mining component further includes a second text description mining branch and a second text description splicing branch, the second text description mining branch and the second text description splicing branch precede the first text description mining branch in the course knowledge point generation network, and the obtaining text description mining information of the initial editing text block distribution data includes:
performing text description mining on the initial editing text block distribution data through the second text description mining branch to obtain second text description quantization semantics corresponding to the to-be-processed course resource text;
determining the second text description quantization semantics corresponding to the to-be-processed course resource text as the text description mining information;
the generating, based on the text description mining information of the initial editing text block distribution data, the first text description quantization semantics corresponding to the to-be-processed course resource text through the first text description mining branch includes:
performing semantic splicing on the second text description quantified semantics corresponding to the to-be-processed course resource text and the second text description quantified semantics corresponding to the previous course resource text through the second text description splicing branch to obtain text description quantified spliced semantics;
and carrying out feature quantization processing on the text description quantization splicing semantics through the first text description mining branch to obtain the first text description quantization semantics.
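The two-branch scheme above can be illustrated with a toy sketch: the second branch embeds a block distribution, the splicing branch concatenates the current and previous texts' semantics, and the first branch applies feature quantization. The function names and the uniform-rounding quantizer are assumptions for illustration only.

```python
def second_branch_mine(block_distribution):
    # Second mining branch: toy "embedding" of the block distribution.
    return [float(x) for x in block_distribution]

def splice(current_sem, previous_sem):
    # Splicing branch: concatenate current-text and previous-text
    # semantics (the previous text comes from the same course set).
    return current_sem + previous_sem

def first_branch_quantize(spliced, step=0.5):
    # First mining branch: feature quantization, here assumed to be
    # uniform rounding to a grid of width `step`.
    return [round(v / step) * step for v in spliced]

# Example: semantics for the previous and the to-be-processed text.
prev_sem = second_branch_mine([1, 2])
curr_sem = second_branch_mine([3, 4])
target_sem = first_branch_quantize(splice(curr_sem, prev_sem))
```

The design point the claim makes is that splicing happens before quantization, so the first branch sees both texts' semantics at once.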
In some aspects, the method further comprises:
and generating a course knowledge relation network of the target offline learning user according to the course knowledge point labeling text blocks.
In some embodiments, the original decision tree algorithm corresponding to the course knowledge point generation network comprises a basic text description mining component, a basic behavior intention analysis component and a basic course knowledge point generation component, and the method further comprises:
acquiring the distribution data of the past initial editing text blocks of the past offline learning user in the past course resource text;
based on the past initial editing text block distribution data, text description mining is carried out through the basic text description mining component, and target past text description quantization semantics are obtained;
based on the target past text description quantization semantics, carrying out behavior intention analysis through the basic behavior intention analysis component to obtain past editing behavior intention characteristics and past editing behavior track characteristics of each learning content editing text block of the past offline learning user;
based on the past editing behavior intention characteristic and the past editing behavior track characteristic, generating course knowledge points through the basic course knowledge point generating component to obtain past course knowledge point labeling text blocks of the corresponding learning content editing text blocks;
generating a target algorithm network debugging evaluation index based on the past course knowledge point labeling text block;
and updating and improving algorithm variables of the original decision tree algorithm based on the target algorithm network debugging evaluation index to obtain the course knowledge point generation network.
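The debugging (training) loop described above, in which algorithm variables are updated from the target evaluation index, can be sketched with plain gradient descent standing in for whatever update rule is intended; the learning rate, stopping rule, and function names are assumptions.

```python
def update_algorithm_variables(variables, grads, lr=0.1):
    # One debugging step: move the variables against the gradient of
    # the target network-debugging evaluation index (assumed to be
    # differentiable here; plain gradient descent is a stand-in).
    return [v - lr * g for v, g in zip(variables, grads)]

def debug_until_converged(variables, index_and_grad, tol=1e-3, max_iter=100):
    # Repeat "generate -> evaluate index -> update variables" until the
    # evaluation index is small enough, yielding the tuned network.
    for _ in range(max_iter):
        index_value, grads = index_and_grad(variables)
        if index_value < tol:
            break
        variables = update_algorithm_variables(variables, grads)
    return variables

# Example: drive a single variable toward the minimum of (v - 2)^2,
# a toy evaluation index standing in for the fused LOSS terms.
quadratic = lambda v: ((v[0] - 2.0) ** 2, [2 * (v[0] - 2.0)])
tuned = debug_until_converged([0.0], quadratic)
```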
In some embodiments, the generating the target algorithm network debugging evaluation index based on the past course knowledge point labeling text block includes:
generating an intention mining algorithm network debugging evaluation index, an intention updating algorithm network debugging evaluation index and a disturbance algorithm network debugging evaluation index respectively based on the past course knowledge point labeling text blocks, wherein the intention mining algorithm network debugging evaluation index is used for representing the accuracy of editing intention mining, the intention updating algorithm network debugging evaluation index is used for representing the change coefficient of editing behavior adjustment between different course resource texts, and the disturbance algorithm network debugging evaluation index is used for representing the confidence level of editing intention mining;
Generating the target algorithm network debugging evaluation index based on at least one of the intention mining algorithm network debugging evaluation index, the intention updating algorithm network debugging evaluation index and the disturbance algorithm network debugging evaluation index.
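Fusing the three evaluation indices into the target index can be sketched as a weighted sum; since the claim allows "at least one of" the indices, a zero weight drops a term. The equal default weights are an assumption, as the patent does not fix a fusion rule.

```python
def target_debug_index(intent_idx, update_idx, perturb_idx,
                       weights=(1.0, 1.0, 1.0)):
    # Weighted fusion of the intention-mining, intention-updating, and
    # perturbation network-debugging evaluation indices. Setting a
    # weight to 0 selects a subset, matching "at least one of".
    w1, w2, w3 = weights
    return w1 * intent_idx + w2 * update_idx + w3 * perturb_idx
```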
In some embodiments, the original decision tree algorithm further comprises a first multi-layer perceptron unit, and generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block includes the following steps:
carrying out knowledge detection on the past course knowledge point labeling text blocks through the first multi-layer perceptron unit to obtain a first knowledge detection result;
and generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block and the first knowledge detection result.
In some embodiments, the original decision tree algorithm further comprises a second multi-layer perceptron unit, and the method further comprises:
carrying out knowledge detection on the past editing behavior intention characteristic based on the second multi-layer perceptron unit to obtain a second knowledge detection result;
the generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block and the first knowledge detection result comprises the following steps:
and generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block, the first knowledge detection result and the second knowledge detection result.
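A toy version of the two knowledge-detection paths: a single sigmoid score stands in for each multi-layer perceptron unit (the first applied to the labelled blocks, the second to the intention features), and the disturbance index rises as detection confidence falls, reflecting that this index represents the confidence level of editing intention mining. The one-layer score and the negative-log formulation are illustrative assumptions.

```python
import math

def mlp_confidence(features, weights, bias):
    # Toy "knowledge detection": a sigmoid score standing in for a
    # multi-layer perceptron unit; returns confidence in (0, 1).
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def disturbance_index(conf_blocks, conf_intent):
    # Low detection confidence on either the labelled text blocks or
    # the intention features raises the index (negative-log style).
    return -(math.log(conf_blocks) + math.log(conf_intent))
```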
In some embodiments, generating the intention mining algorithm network debugging evaluation index based on the past course knowledge point labeling text block includes the following steps:
determining an algorithm network debugging evaluation index LOSS1 based on the past course knowledge point labeling text block and the a priori course knowledge point labeling text block;
determining an algorithm network debugging evaluation index LOSS2 based on the past editing behavior intention characteristic and the a priori editing behavior intention characteristic corresponding to the past course knowledge point labeling text block;
determining an algorithm network debugging evaluation index LOSS3 based on the past editing behavior track characteristic and the a priori editing behavior track characteristic corresponding to the past course knowledge point labeling text block;
and carrying out global fusion on the algorithm network debugging evaluation index LOSS1, the algorithm network debugging evaluation index LOSS2 and the algorithm network debugging evaluation index LOSS3 to obtain the intention mining algorithm network debugging evaluation index.
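The LOSS1/LOSS2/LOSS3 construction, each comparing a past output against its a priori counterpart before global fusion, can be sketched with mean squared error as the distance and a plain sum as the fusion; both choices are assumptions, since the patent does not fix either.

```python
def mse(pred, prior):
    # Toy distance between mined features and their a priori labels.
    return sum((p - q) ** 2 for p, q in zip(pred, prior)) / len(pred)

def intent_mining_index(kp_blocks, prior_kp, intent, prior_intent,
                        track, prior_track):
    loss1 = mse(kp_blocks, prior_kp)    # LOSS1: labelled text blocks
    loss2 = mse(intent, prior_intent)   # LOSS2: intention characteristics
    loss3 = mse(track, prior_track)     # LOSS3: trajectory characteristics
    return loss1 + loss2 + loss3        # global fusion (assumed plain sum)
```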
In some embodiments, generating the intention updating algorithm network debugging evaluation index based on the past course knowledge point labeling text block includes the following steps:
determining an algorithm network debugging evaluation index LOSS4 based on a distinguishing coefficient between the past course knowledge point labeling text blocks corresponding to two consecutive past course resource texts and a first a priori distinguishing coefficient;
determining an algorithm network debugging evaluation index LOSS5 based on a distinguishing coefficient between the past editing behavior intention characteristics corresponding to the two consecutive past course resource texts and a second a priori distinguishing coefficient;
determining an algorithm network debugging evaluation index LOSS6 based on a distinguishing coefficient between the past editing behavior track characteristics corresponding to the two consecutive past course resource texts and a third a priori distinguishing coefficient;
and carrying out global fusion on the algorithm network debugging evaluation index LOSS4, the algorithm network debugging evaluation index LOSS5 and the algorithm network debugging evaluation index LOSS6 to obtain the intention updating algorithm network debugging evaluation index.
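LOSS4 to LOSS6 each compare a distinguishing coefficient between features of two consecutive course resource texts against an a priori coefficient, which is how the index captures the change in editing behavior between texts. A sketch using mean absolute difference as the coefficient and squared error against the prior (both assumed, not specified by the patent):

```python
def diff_coefficient(feat_a, feat_b):
    # Distinguishing coefficient between the features of two
    # consecutive course resource texts (toy mean absolute difference).
    return sum(abs(a - b) for a, b in zip(feat_a, feat_b)) / len(feat_a)

def intent_update_index(pairs_with_priors):
    # pairs_with_priors: [(feat_t, feat_t_next, prior_coeff), ...] for
    # the labelled-block, intention, and trajectory streams, giving
    # LOSS4, LOSS5, and LOSS6 respectively.
    losses = [(diff_coefficient(a, b) - prior) ** 2
              for a, b, prior in pairs_with_priors]
    return sum(losses)  # global fusion of LOSS4..LOSS6 (assumed sum)
```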
In a second aspect, the present application also provides a big data mining system, comprising a processor and a memory; the processor is in communication with the memory, and the processor is configured to read and execute a computer program from the memory to implement the method described above.
In a third aspect, the present application also provides a computer readable storage medium having stored thereon a program which, when executed by a processor, implements the method described above.
By applying the embodiments of the present application, course knowledge point labeling text blocks adapted to the needs of different learners can be generated flexibly by comprehensively considering editing behavior intention and trajectory, providing personalized learning support. Generating knowledge points from quantized semantic representations, editing behavior intention characteristics, and editing behavior track characteristics conserves resources: quantizing the semantic representation effectively compresses and stores large amounts of text information, reducing storage requirements. At the same time, using the editing behavior intention characteristics and editing behavior track characteristics to generate knowledge points lets the system respond accurately to learners' needs and behaviors, avoiding unnecessary redundant generation and improving the accuracy and flexibility of the generated results.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow chart of a knowledge point generating method based on big data mining according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with aspects of the present application.
It should be noted that the terms "first," "second," and the like in the description of the present application and the above-described figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided by the embodiments of the present application may be performed in a big data mining system, a computer device, or similar computing device. Taking the example of running on a big data mining system, the big data mining system may comprise one or more processors (which may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory for storing data, and optionally the big data mining system may further comprise a transmission device for communication functions. It will be appreciated by those of ordinary skill in the art that the above-described architecture is merely illustrative and is not intended to limit the architecture of the big data mining system described above. For example, the big data mining system may also include more or fewer components than shown above, or have a different configuration than shown above.
The memory may be used to store a computer program, for example, a software program of application software and a module, for example, a computer program corresponding to a knowledge point generating method based on big data mining in the embodiments of the present application, and the processor executes the computer program stored in the memory, thereby performing various functional applications and data processing, that is, implementing the method described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to the big data mining system through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the big data mining system. In one example, the transmission means comprises a network adapter (Network Interface Controller, NIC) that can be connected to other network devices via a base station to communicate with the internet. In another example, the transmission means may be a Radio Frequency (RF) module, which communicates with the internet wirelessly.
Referring to fig. 1, fig. 1 is a flowchart of a knowledge point generating method based on big data mining according to an embodiment of the present application. The method is applied to a big data mining system and may include steps 110 to 140.
Step 110: when learning behavior analysis is performed on a target offline learning user to obtain a course resource text to be processed, obtain initial editing text block distribution data of the target offline learning user in the course resource text to be processed.
The initial editing text block distribution data represents position data of the learning content editing text blocks. A learning content editing text block is a text block that has been edited by the target offline learning user, where editing includes marking behaviors such as underlining and masking.
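A minimal sketch of what the initial editing text block distribution data might look like; the field names and the tuple encoding are assumptions, since the patent only says the data represents block positions and marking behaviors:

```python
from dataclasses import dataclass

@dataclass
class EditedBlock:
    # One learning content editing text block: where it sits in the
    # course resource text and which marking behavior produced it.
    start: int   # character offset where the block begins (assumed unit)
    end: int     # character offset where the block ends
    action: str  # e.g. "underline" or "mask"

def block_distribution(blocks):
    # Position data for all edited blocks, as obtained in step 110.
    return [(b.start, b.end, b.action) for b in blocks]
```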
Step 120: based on the initial editing text block distribution data, perform text description mining through the text description mining component of the course knowledge point generation network to obtain the target text description quantization semantics.
The course knowledge point generation network can be a structural semantic model used for mining text descriptions (text semantic features), and the target text description quantization semantics are used for reflecting the text semantic features of the text blocks edited by the corresponding learning content.
Step 130: based on the target text description quantization semantics, perform behavior intention analysis through the behavior intention analysis component of the course knowledge point generation network to obtain the editing behavior intention characteristics and editing behavior track characteristics of each learning content editing text block of the target offline learning user.
The behavior intention analysis can extract the behavior intention and the behavior track of the target offline learning user when editing the text block for editing the learning content, so that a decision basis is provided for the subsequent knowledge point generation and annotation.
Step 140: based on the editing behavior intention characteristics and the editing behavior track characteristics, generate course knowledge points through the course knowledge point generating component of the course knowledge point generation network to obtain the course knowledge point labeling text blocks of the corresponding learning content editing text blocks.
Course knowledge point generation can label or supplement knowledge points for the learning content editing text blocks, yielding richer course knowledge point labeling text blocks; presentation forms include, but are not limited to, hidden annotations and multi-layer overlaid mask text.
In step 110, learning behavior analysis is performed on the target offline learning user, and initial editing text block distribution data in the course resource text to be processed is obtained. This distribution data describes the location of the learning content editing text blocks within the text. A learning content editing text block is a portion of text edited by the target offline learning user, for example through marking actions such as underlining or masking. Analyzing the initial distribution data reveals where the edited blocks sit in the text; this information feeds the subsequent tasks of text description mining, behavior intention analysis, and course knowledge point generation. In short, step 110 obtains the initial editing text block distribution data of the target offline learning user in the course resource text to be processed, for subsequent analysis and processing.
Taking the history discipline as an example, the learning content editing text blocks in step 110 may include the following:
    • Marking important events: the target offline learning user may underline or mark keywords, phrases, or sentences related to a historical event. These marked text blocks can be regarded as learning content editing text blocks representing the user's particular attention to important events.
    • Adding annotations: the user may add notes beside or below the text explaining the meaning or context of a historical concept, figure, or event. These annotation text blocks also count as learning content editing text blocks, since they add personal understanding to the learning material.
    • Making a timeline: the user may draw a timeline in the text or on blank paper and add marks at particular events or periods. These timeline charts can be regarded as learning content editing text blocks because they visually present the chronology of historical events.
    • Writing summaries or notes: the user may write summaries or notes in the blank spaces of the text to organize and record their understanding of the learned history. These summary or note text blocks likewise count as learning content editing text blocks, since they show the user's personal thinking and organization of the learning material.
Continuing the history example, step 120 can be illustrated as follows: suppose the target offline learning user adds a timeline to the to-be-processed course resource text of the history subject and marks important events and figures of the Sui-Tang period on it. The initial editing text block distribution data records the location information of these timeline-marked text blocks in the text.
These initial editing text blocks can then be analyzed by the text description mining component of step 120 to extract the target text description quantization semantics. The component performs image and text analysis to identify and understand the key events and important figures of the Sui-Tang period represented in the timeline-marked text blocks. Specifically, the component can identify keywords and phrases in the timeline-marked text blocks, and can infer the order in which events occurred from the position information on the timeline. Through text description mining, target text description quantization semantics are obtained, such as descriptions of the time frame, important events, and figures of the Sui and Tang periods. This helps to better understand the target offline learning user's knowledge of Sui-Tang history and their grasp of its chronology. In other words, the text description mining component of step 120 analyzes the initial editing text blocks in the history discipline and extracts key concepts, temporal order, and semantic information, thereby generating the target text description quantization semantics. This provides a basis for the subsequent behavior intention analysis and course knowledge point generation, allowing a deeper understanding of the learner's focus on the history subject matter.
Further, the quantization features in the target text description quantization semantics save computing resources in the following ways:
    • Data compression: converting the target offline learning user's editing of subject resources into quantized semantic features greatly reduces storage requirements. Compared with raw text data, quantized features take a more compact form, avoiding the storage of redundant information and saving computing resources.
    • Computational efficiency: extracting and computing quantized semantic features turns complex text analysis and understanding tasks into simpler, more efficient operations, reducing demand on computing resources such as CPU and memory, increasing processing speed, and accelerating offline learning.
    • Learning model training: the availability and compactness of the quantized semantic features makes machine learning model training on them more efficient. Traditional text-based machine learning must process large amounts of text data; quantized features greatly reduce the scale and complexity of the training data, saving the compute and time needed to train a model.
    • Result visualization and summary presentation: the quantized semantic features can be used to generate summaries and to visualize and present learning outcomes. Processing and analyzing the quantized features displays the learner's focus, degree of understanding, and progress in a simpler, more intuitive way, lowering the computing cost of visualization and summary generation.
It can be seen that using target text description quantized semantic features brings multiple benefits in terms of computational resource savings, including data compression, improved computational efficiency, more efficient model training, and convenient result visualization and summary presentation. These effects help optimize the performance of the offline learning system, making it more efficient, fast, and scalable.
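As a minimal illustration of how an edited text block could be reduced to compact quantization features, the sketch below hashes tokens into a fixed number of buckets. The bucketing scheme, dimension, and function names are assumptions for illustration, not part of this disclosure:

```python
import hashlib

def quantize_text(text, dim=8):
    """Map a text block to a fixed-length bucket-count vector.

    Each token is hashed into one of `dim` buckets, so the representation
    stays the same small size regardless of the raw text's length.
    """
    vec = [0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1
    return vec

features = quantize_text("imperial examination system of the Sui and Tang dynasties")
print(len(features), sum(features))  # → 8 9 (fixed dimension, token count preserved)
```

Any text block, however long, maps to the same fixed-length vector, which is what yields the data compression and computational efficiency benefits described above.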
Using the history discipline as an example, step 130 is further described below, focusing on the editing behavior intention features and editing behavior track features and illustrating their role in later knowledge point generation and annotation.
Editing behavioral intention characteristics: by analyzing the editing behavior intention characteristics of the target offline learning users in the to-be-processed course resource texts of the historical subjects, the attention points and learning requirements of the users in the inert and tangling periods can be known. For example, if learners frequently add notes or tags to content related to a popular regime, they may infer that they have a high interest and attention in this topic. These editing behavioral intent features can help learn the learner's personalized needs and generate relevant knowledge point labels targeted.
Editing behavior track characteristics: by analyzing the learner's editing behavior trace in the text, it is possible to understand their understanding and learning path during learning. For example, in the text of the Su Tang period, if the learner first reads the contents of the management of Kang Huang and then turns to the study of events such as disorder of the security history and change of the Xuanwu gate, the degree of understanding of the history period and the learning emphasis can be deduced. These edit behavior trace features can help evaluate the learner's learning progress and generate relevant knowledge point labels based on their learning path.
When knowledge points are generated and annotated in the later stage, the editing behavior intention features and editing behavior track features play a key role. By combining the two, the learner's focus, degree of understanding, and learning path regarding the Sui and Tang periods can be determined. Based on these features, relevant knowledge point labels can be generated, important concepts, events, and people can be matched with the learner's interests and needs, and targeted learning auxiliary information can be provided. Personalized feedback and recommendations can also be given to the learner by analyzing the editing behavior intention features and editing behavior track features: based on their focus and learning path, related learning resources and in-depth reading materials can be recommended, or customized learning plans can be provided, to support them in better mastering the historical knowledge.
In conclusion, the editing behavior intention features and editing behavior track features play an important role in the later generation and annotation of knowledge points. By understanding the learner's focus, degree of understanding, and learning path, personalized knowledge point labels can be generated and customized feedback and recommendations provided, supporting more effective learning of the Sui and Tang period content in the history discipline.
According to one application scenario of step 140, in which knowledge points are generated and annotated for the learning content edit text blocks based on the editing behavior intention features and editing behavior track features, consider the following example: suppose a learner is studying the imperial examination system of the Sui and Tang periods. The learner performs editing actions in the related learning resources, and the editing behavior intention features and editing behavior track features can be used to generate a course knowledge point annotation text block.
Editing behavioral intention characteristics: the learner's editing behavior is analyzed and found to add a plurality of notes and marks, focusing on the content related to the science popularization system. This indicates that the learner has a great deal of interest and attention to the scientific degrees, which becomes the editing behavior intention feature.
Editing behavior track characteristics: further analyzing the edit behavior trace of the learner, finding that he first reads the article about the education reform in the Su-Tang period, knows the historical background of the scientific investigation system, and then goes to the procedure of researching the scientific investigation and the requirements of each stage. The edit behavior trace reflects the learning path of the learner, and the process of detail study is known from the whole.
Based on the editing behavior intention features and editing behavior track features, a course knowledge point annotation text block can be generated. For example, for the imperial examination system the learner focuses on, the following knowledge point annotation text block may be generated:
Imperial examination (keju) system
Definition: an examination system for selecting officials, practiced in the Sui and Tang dynasties.
Historical background: created in the Sui dynasty; developed and perfected in the Tang dynasty.
Purpose: to select talent by examination, promoting fairness and merit orientation in official appointments.
Stages and procedure: including the provincial, metropolitan, and palace examinations, with each stage having different examination content and requirements.
Influence and significance: the imperial examination system had an important influence on social mobility, educational development, political stability, and related aspects.
Such course knowledge point annotation text blocks provide the learner with definitions of key concepts, historical context, main stages and procedures, and related influence and significance. By reading the knowledge point annotation text block, the learner can deeply understand the imperial examination system of the Sui and Tang periods and acquire the required knowledge and concepts from it.
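The course knowledge point annotation text block for the imperial examination (keju) system could be represented as a simple structured record; the field names and values below are illustrative assumptions, not a format specified by this disclosure:

```python
# Hypothetical structure for a course knowledge point annotation text block.
knowledge_point = {
    "title": "Imperial examination (keju) system",
    "definition": "An examination system for selecting officials, practiced in the Sui and Tang dynasties.",
    "historical_background": "Created in the Sui dynasty; developed and perfected in the Tang dynasty.",
    "purpose": "Select talent by examination, promoting fairness and merit in official appointments.",
    "stages": ["provincial examination", "metropolitan examination", "palace examination"],
    "significance": "Shaped social mobility, educational development, and political stability.",
}

# Each field answers one of the learner's likely questions about the concept.
print(len(knowledge_point["stages"]))  # → 3
```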
Based on the technical scheme, first, initial editing text block distribution data of a course resource text to be processed is obtained by analyzing learning behaviors of a target offline learning user, and the data characterizes position information of a learning content editing text block. Secondly, utilizing the initial editing text block distribution data, and carrying out text description mining through a text description mining component of a course knowledge point generation network, thereby obtaining the quantized semantic representation of the target text description. And then, based on the target text description quantification semantics, carrying out behavior intention analysis by a behavior intention analysis component of a course knowledge point generation network so as to extract editing behavior intention characteristics and editing behavior track characteristics of a target offline learning user on each learning content editing text block. And finally, based on the editing behavior intention characteristic and the editing behavior track characteristic, generating course knowledge points through a course knowledge point generating component of a course knowledge point generating network to obtain a course knowledge point labeling text block corresponding to the learning content editing text block. Therefore, the behavior of the learner can be effectively analyzed, text semantic features are extracted, course knowledge points related to learning content are generated or marked, and targeted learning support and resources are provided for the learner.
Still further, semantic features of the text can first be extracted from the editing behavior of the target offline learning user by quantized semantic representation using the text description mining component. The advantage of such a quantized semantic representation is that it converts text information into a numerical representation that a computer can process, thereby saving storage and processing resources. Compared with directly storing a large amount of original text data, the quantized semantic representation expresses the key information of the learning content more efficiently and improves the performance and response speed of the system. Secondly, the editing behavior intention feature and the editing behavior track feature are important for knowledge point generation. By analyzing the learner's editing behavior intention features, the learner's intention and goals during the editing process can be understood, so that course knowledge points matching the learner's needs can be generated in a targeted manner. In addition, the editing behavior track feature reveals the learner's behavior path over the learning content edit text blocks, making knowledge point generation more accurate and complete. By jointly considering the editing behavior intention and track, course knowledge point annotation text blocks adapted to different learners' needs can be generated flexibly, providing personalized learning support. The knowledge point generation method based on the quantized semantic representation, the editing behavior intention feature, and the editing behavior track feature therefore has the advantage of resource conservation: through the quantized semantic representation, a large amount of text information can be effectively compressed and stored, reducing the need for storage resources.
Meanwhile, knowledge points are generated by utilizing the editing behavior intention characteristic and the editing behavior track characteristic, so that response can be accurately carried out according to the demands and behaviors of learners, unnecessary redundant generation is avoided, and the accuracy and flexibility of a generated result are improved. In summary, by combining the resource saving advantage of the quantized semantic representation and the knowledge point generation precision and flexibility of the editing behavior intention feature and the editing behavior track feature, key information can be extracted more effectively, personalized knowledge point labeling text blocks can be generated, and higher-quality and strong-pertinence learning support is provided for learners in the learning process.
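The four-stage pipeline summarized above (steps 110 through 140) can be sketched schematically as follows; every function body is a deliberately simplified stand-in for the corresponding network component, and all names and heuristics are assumptions for illustration:

```python
def get_edit_block_distribution(resource_text, edits):
    # Step 110 stand-in: position information of each learning content edit text block.
    return [(resource_text.find(block), block) for block in edits if block in resource_text]

def mine_text_description(distribution):
    # Step 120 stand-in: quantized semantics as per-block token counts.
    return [len(block.split()) for _, block in distribution]

def parse_behavior_intent(semantics):
    # Step 130 stand-in: intention as total attention, track as the edit order.
    return sum(semantics), list(range(len(semantics)))

def generate_knowledge_points(intent, track, distribution):
    # Step 140 stand-in: one annotation block per edited text block.
    return [{"position": pos, "block": block, "intent_score": intent}
            for pos, block in distribution]

text = "the imperial examination system selected officials by examination"
edits = ["imperial examination system", "selected officials"]
dist = get_edit_block_distribution(text, edits)
intent, track = parse_behavior_intent(mine_text_description(dist))
points = generate_knowledge_points(intent, track, dist)
print(len(points))  # → 2, one course knowledge point annotation block per edit
```

The value of the real system lies in what each component computes internally; this sketch only fixes the data flow between the four steps.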
In some examples, the text description mining component includes a first text description mining branch and a first text description stitching branch. Based on this, performing text description mining through the text description mining component of the course knowledge point generation network based on the initial edit text block distribution data, as described in step 120, to obtain the target text description quantization semantics includes steps 121-123.
And step 121, obtaining text description mining information of the initial editing text block distribution data.
And 122, generating a first text description quantization semantic corresponding to the to-be-processed course resource text through the first text description mining branch based on the text description mining information of the initial editing text block distribution data.
And 123, performing text description splicing on the first text description quantified semantics corresponding to the to-be-processed course resource text and the first text description quantified semantics corresponding to the last course resource text through the first text description splicing branch to obtain the target text description quantified semantics, wherein the last course resource text and the to-be-processed course resource text are in the same course resource text set, and the last course resource text in the course resource text set is before the to-be-processed course resource text and is associated with the to-be-processed course resource text.
First, in step 121, text description mining information of the initial edit text block distribution data is obtained. This means that semantic feature information about learner behavior and text content is extracted from the initial edit text block distribution data. Next, in step 122, using the first text description mining branch, the first text description quantization semantics corresponding to the course resource text to be processed are generated based on the text description mining information of the initial edit text block distribution data. This step converts the course resource text to be processed into a specific semantic representation for better understanding and processing of the learning content. Next, in step 123, through the first text description splicing branch, text description splicing is performed on the first text description quantization semantics corresponding to the course resource text to be processed and the first text description quantization semantics corresponding to the previous course resource text. The purpose is to link the course resource text currently being processed with the semantic representation of the previous course resource text, obtaining more comprehensive and consistent target text description quantization semantics.
The following is an example application scenario: there is an offline education platform where learners are learning English writing. In initially editing the text block distribution data, it was found that the learner marked some important sentences and paragraphs in the curriculum materials. Branches are mined by the first text description, and semantic features associated with the sentences and paragraphs are extracted from the tags. For example, the sentences and paragraphs may be converted into representations of keywords, topics, or summaries.
In the first text description stitching branch, the semantic representations of these sentences and paragraphs are stitched with the semantic representations of the sentences and paragraphs of the last learning stage. The purpose of this is to connect knowledge points of the current learning phase with knowledge points of the previous learning phase to obtain a more complete text description while maintaining consistency of the learning process.
Through the processing of steps 121-123, key semantic feature information can be extracted from the initial edit text block distribution data and converted into a quantized text description. This helps the system better understand the learner's behavior and learning content and provides personalized learning support. Meanwhile, by connecting the knowledge points of the current learning stage with those of the previous learning stage, a continuous and orderly knowledge structure can be established, making the learning process more coherent and complete. Overall, the process of steps 121-123 improves the understanding of learner behavior and learning content and provides more effective personalized learning support for the learner.
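A minimal sketch of the text description splicing in step 123, under the assumption that the quantized semantics are plain feature vectors (the vector values are made up for illustration):

```python
def splice_descriptions(current, previous):
    """Concatenate the previous course text's quantized semantics with the current one's,
    so the target semantics carry context from both learning stages."""
    return previous + current

prev_sem = [0.2, 0.5]   # previous course resource text in the same course resource text set
curr_sem = [0.7, 0.1]   # course resource text to be processed
target_sem = splice_descriptions(curr_sem, prev_sem)
print(target_sem)  # → [0.2, 0.5, 0.7, 0.1]
```

Concatenation is the simplest splicing choice; the disclosure leaves the exact combination open, so any order-preserving join of the two representations fits the same role.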
In other examples, the text-description mining component further includes a second text-description mining branch and a second text-description stitching branch, the second text-description mining branch and the second text-description stitching branch preceding the first text-description mining branch in the curriculum knowledge point generation network. Based on this, the text description mining information described in step 121 to obtain the initial editing text block distribution data includes steps 1211 to 1212.
And 1211, performing text description mining on the initial editing text block distribution data through the second text description mining branch to obtain second text description quantization semantics corresponding to the to-be-processed course resource text.
And 1212, determining the second text description quantization semantics corresponding to the text of the course resource to be processed as the text description mining information.
Further, text description mining information based on the initial editing text block distribution data described in step 122 generates, through the first text description mining branch, first text description quantization semantics corresponding to the to-be-processed lesson resource text, including steps 1221-1222.
And step 1221, performing semantic splicing on the second text description quantization semantics corresponding to the to-be-processed course resource text and the second text description quantization semantics corresponding to the previous course resource text through the second text description splicing branch to obtain text description quantization splicing semantics.
Step 1222, performing feature quantization processing on the text description quantization splicing semantics through the first text description mining branch to obtain the first text description quantization semantics.
First, in the case where the text description mining component includes a second text description mining branch and a second text description stitching branch, the text description mining information of the initial edit text block distribution data is obtained in step 121. In step 1211, text description mining is performed on the initial edit text block distribution data through the second text description mining branch to obtain the second text description quantization semantics corresponding to the course resource text to be processed. This means that semantic features related to the course resource text to be processed are extracted from the initial edit text block distribution data using the second text description mining branch and quantized into a computer-processable form. In step 1212, the second text description quantization semantics corresponding to the course resource text to be processed are determined as the text description mining information. This is done to save and utilize the quantized semantic representation for the processing and generation of subsequent steps.
Further, in step 122, based on the text description mining information of the initial editing text block distribution data, a first text description quantization semantic corresponding to the text of the to-be-processed lesson resource is generated through the first text description mining branch. In step 1221, semantic stitching is performed on the second text description quantified semantics corresponding to the lesson resource text to be processed and the second text description quantified semantics corresponding to the previous lesson resource text through the second text description stitching branch. The purpose of this is to connect knowledge points of the current learning stage with knowledge points of the previous learning stage at the quantized semantic level to obtain a more complete, consistent text description. In step 1222, feature quantization processing is performed on the text description quantization splicing semantics through the first text description mining branch, so as to obtain first text description quantization semantics corresponding to the to-be-processed course resource text. This step may further quantify and represent the text description for subsequent processing and generation.
Suppose a learner is studying a programming course. In the initial edit text block distribution data, it is found that the learner has added marks and additional notes in the code example portion. Through the second text description mining branch, semantic features related to programming logic and syntax are extracted from these marks and notes, yielding the second text description quantization semantics that represent the learner's understanding and annotation of the code examples. In the second text description splicing branch, the second text description quantization semantics of the current learning stage are semantically spliced with the second text description quantization semantics of the previous learning stage. This organically links current programming knowledge with previously learned knowledge to form a coherent programming knowledge structure. In the first text description mining branch, the spliced text description quantization semantics are further processed to extract the first text description quantization semantics, which represent the key features and concepts of the programming knowledge.
In general, steps 1211-1212 provide the generation and preservation of second text description quantified semantics for the text of the curriculum resources to be processed. This allows the system to better understand learner understanding, annotating, and tagging of course resources and translate these semantic features into computer-processable forms.
Through steps 1221-1222, not only the current course resource text to be processed is connected with knowledge points in the previous learning stage, but also the spliced text description is extracted and quantized, so that complete and consistent text description quantization semantics are formed. This further strengthens the relevance between knowledge points in the learning process, providing deeper semantic understanding and more accurate knowledge expression. By the design, the system can more comprehensively and accurately understand the behaviors and learning contents of learners by mining text description information, so that more intelligent and personalized learning support is provided; by connecting knowledge points of the current learning stage and the previous learning stage, the system can establish a consistent and orderly knowledge structure, thereby helping learners to better understand and master learning contents; by quantifying and presenting the textual descriptions, the system is able to record and analyze learner understanding and annotation of course resources in a computer-processable form, providing a more accurate, detailed knowledge representation.
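The flow of steps 1211-1222 can be sketched end to end as follows. The token-length features, the bucketed pooling standing in for "feature quantization", and all names are illustrative assumptions, not the disclosed implementation:

```python
def second_branch_mine(text):
    # Second text description mining branch stand-in: token-length profile of the text.
    return [float(len(tok)) for tok in text.split()]

def second_branch_splice(curr, prev):
    # Second text description splicing branch: join previous and current semantics.
    return prev + curr

def first_branch_quantize(spliced, dim=4):
    # First text description mining branch stand-in: pool the spliced semantics
    # into a fixed-size vector (feature quantization processing).
    out = [0.0] * dim
    for i, v in enumerate(spliced):
        out[i % dim] += v
    return out

prev = second_branch_mine("loops and branching")            # previous learning stage
curr = second_branch_mine("recursion builds on loops")      # current course resource text
first_sem = first_branch_quantize(second_branch_splice(curr, prev))
print(len(first_sem))  # → 4, fixed-dimension first text description quantized semantics
```

Whatever the real branches compute, the shape of the pipeline is the same: mine, splice with the previous stage, then quantize to a fixed representation.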
In other possible embodiments, the method further comprises: and labeling text blocks according to the course knowledge points, and generating a course knowledge relation network of the target offline learning user.
An offline learning platform is provided in which a plurality of courses are available for selection by a learner. To assist the targeted offline learning user in better understanding and mastering the course content, additional steps in the method are used to generate a course knowledge relationship network. First, labeling is performed in each course resource text according to the existing course knowledge points. These annotations may be labels of text blocks or paragraphs associated with a particular knowledge point to represent the location of that knowledge point in the curriculum asset and the content involved. Then, by analyzing and processing the labeling information of all course resources, a course knowledge relation network of the target offline learning user can be constructed. This relational network reflects the relevance and dependency between knowledge points to help users understand better the structure and organization of learning content. For example, suppose that a target offline learning user is learning a mathematical course including various teaching materials, exercise books, video lectures, and the like. Based on the mathematical knowledge points determined, corresponding text blocks are marked in each resource, for example, knowledge points such as derivative, limit and differential equation can be involved in a video related to calculus. Then, by analyzing the labeling information of all the resources, a course knowledge relation network can be constructed. This network of relationships may show links and dependencies between different knowledge points, e.g. derivatives are the basis for understanding differential equations, limit concepts play an important role in calculating derivatives, etc. By generating the course knowledge relationship network, the target offline learning user can more clearly know the relationship between the structure of the course content and the knowledge points. 
This helps them learn and master courses more systematically and can purposefully find, understand and consolidate relevant knowledge points.
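A minimal sketch of building a course knowledge relation network from annotated knowledge points. The dependency pairs (e.g. limits underpin derivatives) follow the calculus example above, and the adjacency-set representation is an assumption:

```python
from collections import defaultdict

def build_relation_network(dependencies):
    """dependencies: iterable of (prerequisite, dependent) knowledge point pairs."""
    network = defaultdict(set)
    for prereq, dependent in dependencies:
        network[prereq].add(dependent)
    return dict(network)

edges = [("limit", "derivative"), ("derivative", "differential equation")]
net = build_relation_network(edges)
print(net["limit"])  # → {'derivative'}: the limit concept underpins the derivative
```

Edges here come from co-occurring knowledge point annotations across course resources; how those dependencies are inferred is left open by the disclosure.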
In some possible embodiments, the original decision tree algorithm corresponding to the course knowledge point generation network includes a basic text description mining component, a basic behavior intention parsing component, and a basic course knowledge point generation component. Based on this, the method further comprises steps 210-260.
Step 210, obtaining the distribution data of the past initial editing text blocks of the past offline learning user in the past course resource text.
And 220, performing text description mining through the basic text description mining component based on the past initial editing text block distribution data to obtain target past text description quantization semantics.
And 230, performing behavior intent analysis through the basic behavior intent analysis component based on the target past text description quantification semantics to obtain past editing behavior intent characteristics and past editing behavior track characteristics of each learning content editing text block of the past offline learning user.
And 240, generating course knowledge points through the basic course knowledge point generating component based on the past editing behavior intention characteristic and the past editing behavior track characteristic to obtain a past course knowledge point labeling text block of the corresponding learning content editing text block.
And 250, generating a target algorithm network debugging evaluation index based on the past course knowledge point labeling text block.
And 260, updating and improving algorithm variables of the original decision tree algorithm based on the target algorithm network debugging evaluation index to obtain the course knowledge point generation network.
In the above scenario, the algorithm network debugging evaluation index may be understood as a loss function or evaluation metric used to measure the performance and accuracy of the course knowledge point generation network. The purpose of this index is to optimize the algorithm variables of the original decision tree algorithm so that the network can better generate course knowledge points meeting learners' needs. Specifically, in step 250, a target algorithm network debugging evaluation index is generated based on the past course knowledge point annotation text blocks. This means that, by referring to the results of learners' past annotations of course resources, an evaluation index is constructed to measure the performance of the course knowledge point generation network. The index may take various forms, such as average cross entropy loss or mean squared error. These measure the difference between the generated course knowledge point annotation text blocks and the actual annotations: if the generated knowledge point labels are consistent with the actual labels, the value of the loss function is low; conversely, if there is a large difference, the value of the loss function is high. By optimizing with such a loss function, the algorithm variables of the original decision tree algorithm can be updated and improved to minimize the loss. In this way, the accuracy and performance of the course knowledge point generation network can be improved, and relevant knowledge points can be generated better according to learners' needs. In summary, the algorithm network debugging evaluation index plays the role of a loss function in this scenario and is used to measure the performance and accuracy of the course knowledge point generation network.
By optimizing the index, the algorithm variable of the original decision tree algorithm can be improved, and the effect of course knowledge point generation network and the personalized learning supporting capability are improved.
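As a hedged illustration of the debugging evaluation index acting as a loss, the sketch below uses mean squared error, one of the forms mentioned above, between generated label vectors and past annotations:

```python
def mse_index(generated, actual):
    """Mean squared error between generated knowledge point label vectors
    and the actual past annotations; lower means better agreement."""
    assert len(generated) == len(actual)
    return sum((g - a) ** 2 for g, a in zip(generated, actual)) / len(actual)

perfect = mse_index([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])
poor = mse_index([0.0, 1.0, 0.0], [1.0, 0.0, 1.0])
print(perfect, poor)  # → 0.0 1.0: exact agreement scores zero, divergence scores high
```

The same interface works for average cross entropy or any other divergence; the essential property is only that agreement with past annotations drives the value toward zero.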
Assume an offline learning platform in which a plurality of offline learning users have been engaged in different course resources. Step 210-step 260 of the method are used to generate a course knowledge point generation network to provide better learning support.
In step 210, initial edit text block distribution data of past offline learning users in past course resource texts is obtained. These data reflect the learner's annotation and editing behavior on specific course resources and the corresponding locations.
In step 220, text description mining is performed on the past initial editing text block distribution data using a base text description mining component to obtain target past text description quantization semantics. This enables semantic features to be extracted from past learning behavior and quantified into a computer-processable form.
In step 230, the target past text description quantization semantics are analyzed by the underlying behavior intent parsing component to obtain past editing behavior intent features and past editing behavior trace features for each block of learning content editing text of the past offline learning user. These features can help understand the learner's behavioral trends and edit patterns during past learning.
In step 240, the basic course knowledge point generation component is used to generate course knowledge points according to the past editing behavior intention features and past editing behavior track features, obtaining the past course knowledge point annotation text blocks corresponding to the learning content edit text blocks. In this way, the past learner's behavior can be related to knowledge points, providing a foundation for subsequent learning support.
In step 250, a target algorithm network debugging evaluation index is generated based on the text blocks of past course knowledge points. These metrics can evaluate the performance and accuracy of the original decision tree algorithm in generating course knowledge points.
In step 260, the algorithm variables of the original decision tree algorithm are updated and modified by using the target algorithm network debugging evaluation index, thereby obtaining a more accurate and efficient course knowledge point generating network.
Applying steps 210-260, the generated course knowledge point generation network can provide personalized learning support according to each learner's needs and preferences by analyzing past offline learning user behavior and editing patterns. This helps to improve the engagement, interest and effect of the learner; the generated course knowledge point generation network can reveal the relevance and the dependency relationship between different knowledge points. The learner can more comprehensively understand the structure and organization of the learning content, so that the learner can better master the related knowledge; course knowledge point generation network enables learner to accurately know core knowledge points and key concepts of each learning content. The method is helpful for learners to learn and consolidate important knowledge more pertinently, so that learning effect and learning result are improved; the original decision tree algorithm is updated and improved by using the target algorithm network debugging evaluation index, and the course knowledge point generation network can continuously optimize and adjust algorithm variables. This improves the accuracy and efficiency of course knowledge point generation.
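Step 260's update of algorithm variables can be illustrated with a deliberately simple search over a single hypothetical variable, a labeling threshold, chosen to minimize the evaluation index on past annotation data (all names and values are assumptions):

```python
def evaluation_index(threshold, scores, labels):
    # Fraction of past annotations the thresholded generator gets wrong.
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def tune_variable(scores, labels, candidates):
    # Step 260 stand-in: pick the variable value minimizing the debugging index.
    return min(candidates, key=lambda t: evaluation_index(t, scores, labels))

scores = [0.9, 0.8, 0.3, 0.1]   # generator confidence per text block
labels = [1, 1, 0, 0]           # past knowledge point annotations (1 = annotated)
best = tune_variable(scores, labels, [0.1, 0.5, 0.95])
print(best)  # → 0.5, the threshold that separates the past annotations with zero error
```

A real decision tree update adjusts many variables jointly (split features, thresholds, leaf labels), but the principle is the same: each candidate change is scored by the debugging evaluation index and kept only if it lowers it.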
In some alternative embodiments, the generating of the target algorithm network debug evaluation index based on the past course knowledge point annotation text blocks in step 250 includes steps 251-252.
Step 251, generating an intention mining algorithm network debugging evaluation index, an intention updating algorithm network debugging evaluation index and a disturbance algorithm network debugging evaluation index respectively based on the past course knowledge point labeling text blocks, wherein the intention mining algorithm network debugging evaluation index is used for representing the accuracy of editing intention mining, the intention updating algorithm network debugging evaluation index is used for representing the change coefficient of editing behavior adjustment between different course resource texts, and the disturbance algorithm network debugging evaluation index is used for representing the confidence level of editing intention mining.
Step 252, generating the target algorithm network debugging evaluation index based on at least one of the intent mining algorithm network debugging evaluation index, the intent updating algorithm network debugging evaluation index and the disturbance algorithm network debugging evaluation index.
In step 251, three evaluation indexes are generated based on the past course knowledge point labeling text blocks: the intention mining algorithm network debugging evaluation index, the intention updating algorithm network debugging evaluation index, and the disturbance algorithm network debugging evaluation index.
Intention mining algorithm network debugging evaluation index: this index characterizes the accuracy of editing intent mining. It measures how accurately the intentions or goals of past offline learning users are recovered from their marking and editing in course resource texts. For example, if a learner marks a certain text block and the mark closely matches an actual knowledge point, the intention mining algorithm network debugging evaluation index reflects higher precision.

Intention updating algorithm network debugging evaluation index: this index characterizes the change coefficient of editing behavior adjustment between different course resource texts. It measures the degree to which a learner adjusts editing behavior when processing different learning contents. For example, if a learner exhibits consistent editing behavior patterns and intent adjustment capabilities across different course resources, the intention updating algorithm network debugging evaluation index reflects a higher change coefficient.

Disturbance algorithm network debugging evaluation index: this index characterizes the confidence of editing intent mining. It evaluates the certainty and confidence level exhibited by past offline learning users when editing course resource texts. For example, if a learner exhibits a high degree of consistency and confidence in the labeling and editing process, the disturbance algorithm network debugging evaluation index reflects higher confidence.
In step 252, at least one of these generated evaluation indexes is selected to produce the target algorithm network debugging evaluation index. Depending on the specific requirements and scenario, the indexes can be combined by weights or rules to obtain a composite evaluation index. This target algorithm network debugging evaluation index is then used to evaluate the performance and accuracy of the original decision tree algorithm in generating course knowledge points, providing guidance for improving and optimizing the algorithm.
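One simple realization of the weighted combination described in step 252 is a normalized weighted sum of the three component indexes. The index values, names, and weights below are illustrative assumptions, not values from the patent.

```python
def combine_indices(indices, weights):
    # Composite target index: weighted average of the component
    # evaluation indexes, with the weights normalized to sum to 1.
    total = sum(weights.values())
    return sum(indices[name] * weights[name] / total for name in indices)


indices = {"intent_mining": 0.92, "intent_update": 0.75, "perturbation": 0.88}
weights = {"intent_mining": 0.5, "intent_update": 0.3, "perturbation": 0.2}
target_index = combine_indices(indices, weights)
```

Rule-based combinations (e.g., taking the minimum of the three indexes as a conservative estimate) would fit the same interface.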
It can be seen that the intention mining algorithm network debugging evaluation index, the intention updating algorithm network debugging evaluation index and the disturbance algorithm network debugging evaluation index in step 251, and the target algorithm network debugging evaluation index in step 252 provide a method for measuring and evaluating the course knowledge point generation network performance. These metrics can help understand learner edit behavior, intent adjustments, and confidence levels, and provide guidance for improved algorithms and better learning support.
In other examples, the original decision tree algorithm further includes a first multi-layer perceptron unit (first discrimination unit), based on which the step of generating the disturbance algorithm network debug evaluation index based on the past course knowledge point labeling text block in step 251 includes: carrying out knowledge detection on the past course knowledge point labeling text blocks through the first multi-layer perceptron unit to obtain a first knowledge detection result; and generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block and the first knowledge detection result.
In this example, the original decision tree algorithm includes a first multi-layer perceptron unit (first discrimination unit). Step 251 describes in detail how to generate the disturbance algorithm network debugging evaluation index, which involves the application of this first multi-layer perceptron unit.
Carrying out knowledge detection through a first multi-layer perceptron unit: first, a first multi-layer perceptron unit is used for carrying out knowledge detection on a past course knowledge point labeling text block. This means that the unit is used to determine whether the text block contains key knowledge points required by the learner. By training the perceptron unit, in combination with the annotation data, it is able to identify and distinguish between text blocks with important knowledge and text blocks that are not important.
Obtaining a first knowledge detection result: based on the output of the first multi-layer perceptron unit, a first knowledge detection result is obtained. This result reflects the probability or confidence that each past course knowledge point labeled text block is determined to contain important knowledge points. From this result, the accuracy and consistency of labeling of different text blocks by the learner in the past can be further analyzed and understood.
Generating disturbance algorithm network debugging evaluation indexes: based on the past course knowledge point labeling text block and the first knowledge detection result, a disturbance algorithm network debugging evaluation index can be generated. These metrics are used to measure confidence in editing intent mining, i.e., the degree of certainty and confidence that a learner exhibits when editing course resource text. By combining the labeling data of the learner in the past with the first knowledge detection result, the confidence of the learner with respect to the knowledge points during the editing process can be inferred.
The above example mentioned that the original decision tree algorithm comprises a first multi-layer perceptron unit and detailed how this unit is used for knowledge detection. In step 251, based on the past course knowledge point labeling text blocks and the first knowledge detection result, the disturbance algorithm network debugging evaluation index is generated to measure the learner's confidence regarding knowledge points during the editing process. These metrics provide a deeper understanding of learner editing behavior, help improve the algorithm, and support more accurate personalized learning.
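A minimal sketch of this idea: a toy single-layer "perceptron unit" scores each annotated text block, and the confidence index averages the scores of the blocks the learner actually labeled as knowledge points. The weights, feature vectors, and aggregation rule are assumptions for illustration only.

```python
import math


def perceptron_detect(features, w, b):
    # Toy stand-in for the first multi-layer perceptron unit: the
    # probability that a text block contains a key knowledge point.
    z = sum(f * wi for f, wi in zip(features, w)) + b
    return 1.0 / (1.0 + math.exp(-z))


def perturbation_index(labels, probs):
    # Confidence of editing-intent mining: mean detection probability
    # over the blocks the learner actually labeled as knowledge points.
    scored = [p for y, p in zip(labels, probs) if y == 1]
    return sum(scored) / len(scored) if scored else 0.0


block_features = [[1.0, 0.2], [0.1, 0.9], [0.8, 0.7]]
learner_labels = [1, 0, 1]  # past annotations by the learner
probs = [perceptron_detect(f, w=[2.0, -1.0], b=0.0) for f in block_features]
confidence = perturbation_index(learner_labels, probs)
```

A real multi-layer unit would stack several such layers with a nonlinearity between them; the aggregation into a confidence index would be unchanged.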
In still other examples, the original decision tree algorithm further includes a second multi-layer perceptron unit. Based thereon, the method further comprises: carrying out knowledge detection on the past editing behavior intention feature based on the second multi-layer perceptron unit to obtain a second knowledge detection result. Further, the generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block and the first knowledge detection result includes: generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text block, the first knowledge detection result, and the second knowledge detection result.
In this example, the original decision tree algorithm includes not only the first multi-layer perceptron unit, but also the second multi-layer perceptron unit. Based on this setting, the method further complements steps to generate a perturbation algorithm network debugging evaluation index.
The second multi-layer perceptron unit carries out knowledge detection on the past editing behavior intention characteristics: by introducing the second multi-layer perceptron unit, it can be used for knowledge detection of past editing behavior intention features. This means that a second multi-layer perceptron unit is used to determine whether the learner exhibits knowledge point related features in past editing behavior. By training the perceptron unit and combining the editing behavior data, it is possible to identify and distinguish between editing behaviors with knowledge point intent and unrelated editing behaviors.
Obtaining a second knowledge detection result: and obtaining a second knowledge detection result based on the output of the second multi-layer perceptron unit. This result reflects the probability or confidence that each past editing behavior intent feature is determined to contain knowledge point intent. With this result, the accuracy and consistency of the knowledge point-related features exhibited by the learner in the past editing behavior can be further analyzed and understood.
Generating disturbance algorithm network debugging evaluation indexes: and finally, generating disturbance algorithm network debugging evaluation indexes based on the past course knowledge point labeling text block, the first knowledge detection result and the second knowledge detection result. The indexes comprehensively consider the labeling data of the learner in the past and the knowledge detection result of the intention characteristics of the editing behavior and are used for evaluating the confidence and the relevance of the learner about knowledge points in the editing process.
In this example, the original decision tree algorithm incorporates a second multi-layer perceptron unit in addition to the first multi-layer perceptron unit. The method further illustrates in step how to use the second multi-layer perceptron unit to perform knowledge detection on the past editing behavior intent features and generate a second knowledge detection result. And then, generating disturbance algorithm network debugging evaluation indexes by combining the text blocks, the first knowledge detection results and the second knowledge detection results of past course knowledge points so as to more comprehensively evaluate the confidence and relevance of the learner on the knowledge points in the editing process. These evaluation metrics help to improve the algorithm and provide more accurate personalized learning support.
In other alternative embodiments, the step of generating the intention mining algorithm network debugging evaluation index based on the past course knowledge point annotation text blocks in step 251 includes steps 251a-251d.
Step 251a, determining an algorithm network debugging evaluation index LOSS1 based on the past course knowledge point labeling text block and the priori course knowledge point labeling text block.
In the step, according to the past course knowledge point labeling text blocks and the priori course knowledge point labeling text blocks, the algorithm network debugging evaluation index LOSS1 is determined. This index may be used to measure the performance of the model in identifying and classifying knowledge point labels.
Step 251b, determining an algorithm network debugging evaluation index LOSS2 based on the past editing behavior intention characteristic and the prior editing behavior intention characteristic corresponding to the past course knowledge point labeling text block.
In the step, the past editing behavior intention characteristic and the prior editing behavior intention characteristic corresponding to the past course knowledge point labeling text block are utilized to determine an algorithm network debugging evaluation index LOSS2. The index may be used to measure the performance of the model in capturing the intent of the learner in editing behavior in relation to the knowledge points.
Step 251c, determining an algorithm network debugging evaluation index LOSS3 based on the past editing behavior track characteristics and the prior editing behavior track characteristics corresponding to the past course knowledge point labeling text blocks.
In the step, the algorithm network debugging evaluation index LOSS3 is determined based on the past editing behavior track characteristics and the prior editing behavior track characteristics corresponding to the past course knowledge point labeling text blocks. The index may be used to measure the performance of the model in analyzing learner-edited behavior trajectories to find patterns or trends associated with knowledge points.
Step 251d, performing global fusion on the algorithm network debugging evaluation index LOSS1, the algorithm network debugging evaluation index LOSS2, and the algorithm network debugging evaluation index LOSS3 to obtain the intention mining algorithm network debugging evaluation index.
In this step, the algorithm network debugging evaluation indexes LOSS1, LOSS2, and LOSS3 are globally fused to obtain the final intention mining algorithm network debugging evaluation index. This index comprehensively considers evaluation indexes from different aspects, ensures comprehensive performance and accuracy, and can be used to evaluate the overall performance of the algorithm network on the intention mining task.
The example above provides, in an alternative embodiment, a further refinement of step 251 for generating the intention mining algorithm network debugging evaluation index. Through substeps 251a to 251d, past data can be analyzed from different angles, and multiple evaluation indexes can be comprehensively considered to obtain a more complete and accurate evaluation result. These evaluation indexes help improve the algorithm network design, providing more effective intent mining support to facilitate the learner's personalized learning process.
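Steps 251a-251d can be sketched as three per-aspect losses fused by a weighted sum. The mean-squared-error form, the feature values, and the fusion weights are illustrative assumptions; the patent does not specify the concrete loss functions.

```python
def mse(observed, prior):
    # One way to score agreement between past data and its a-priori
    # reference counterpart (stand-in for LOSS1, LOSS2, and LOSS3).
    return sum((o - p) ** 2 for o, p in zip(observed, prior)) / len(observed)


# Step 251a: annotation text-block scores vs. a-priori annotation scores.
loss1 = mse([0.9, 0.1], [1.0, 0.0])
# Step 251b: editing-intent features vs. a-priori intent features.
loss2 = mse([0.5, 0.5], [0.4, 0.6])
# Step 251c: editing-trajectory features vs. a-priori trajectory features.
loss3 = mse([0.2, 0.8], [0.2, 0.7])

# Step 251d: global fusion into the intention mining evaluation index.
intent_mining_index = 0.5 * loss1 + 0.3 * loss2 + 0.2 * loss3
```

Other fusion choices (e.g., learned weights, or a maximum over the three losses) would slot into step 251d without changing steps 251a-251c.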
In other alternative embodiments, the step of generating the intent update algorithm network debug evaluation index based on the past course knowledge point annotation text blocks in step 251 includes steps 2511-2514.
Step 2511, determining an algorithm network debugging evaluation index LOSS4 according to the difference coefficient between the text blocks of the past course knowledge point labels corresponding to the two continuous past course resource texts and the first priori difference coefficient.
In the step, an algorithm network debugging evaluation index LOSS4 is determined according to the distinguishing coefficient between the text blocks of the past course knowledge point labels corresponding to the continuous two past course resource texts and the first priori distinguishing coefficient. This index may be used to measure the performance of the model in capturing differences between knowledge point labeled text blocks.
Step 2512, determining an algorithm network debugging evaluation index LOSS5 based on the difference coefficient between the previous editing behavior intention features corresponding to the two continuous previous course resource texts and the second prior difference coefficient.
In this step, the algorithm network debugging evaluation index LOSS5 is determined based on the difference coefficient between the previous editing behavior intention features corresponding to the two successive previous course resource texts and the second prior difference coefficient. The index may be used to measure the performance of the model in capturing differences between the intent features of the editing behavior.
Step 2513, determining an algorithm network debugging evaluation index LOSS6 based on the difference coefficient between the previous editing behavior track features corresponding to the two continuous previous course resource texts and the third prior difference coefficient.
In this step, the algorithm network debugging evaluation index LOSS6 is determined based on the difference coefficient between the past editing behavior trace features corresponding to the two continuous past course resource texts and the third prior difference coefficient. The index may be used to measure the performance of the model in capturing differences between the characteristics of the edit behavior trace.
Step 2514, performing global fusion on the algorithm network debugging evaluation index LOSS4, the algorithm network debugging evaluation index LOSS5, and the algorithm network debugging evaluation index LOSS6 to obtain the intent updating algorithm network debugging evaluation index.
In the step, global fusion is carried out on algorithm network debugging evaluation indexes LOSS4, LOSS5 and LOSS6, and final intention updating algorithm network debugging evaluation indexes are obtained. By comprehensively considering the evaluation indexes in different aspects, a more comprehensive and accurate evaluation result can be obtained so as to measure the overall performance of the algorithm network on the intention updating task.
In this alternative embodiment, step 251 is further refined to sub-steps 2511 to 2514 to generate intent update algorithm network debug evaluation metrics. Through the substeps, the difference between two continuous past resource texts is analyzed from different angles, and a plurality of evaluation indexes are comprehensively considered to obtain a more comprehensive and accurate evaluation result. These evaluation metrics help to improve the algorithmic network design, providing more efficient intent update support to facilitate the learner's personalized learning process.
In the above embodiments, a priori is understood to be a true or reference value for comparison and measurement with a specific evaluation index. Further expanded descriptions of steps 2511 to 2514 are provided below:
in step 2511, an algorithmic network debug evaluation index LOSS4 is determined.
In the step, the algorithm network debugging evaluation index LOSS4 is determined by calculating the distinguishing coefficient and the first priori distinguishing coefficient between the past course knowledge point labeling text blocks corresponding to the continuous two past course resource texts. These difference coefficients may be distances, similarities, or other metrics that are used to compare the degree of difference between two knowledge point labeled text blocks. By comparing with the first a priori discrimination coefficients, the performance of the algorithm network in capturing the differences of the text blocks of knowledge point labels can be evaluated.
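As one concrete choice of difference coefficient, the cosine distance between the annotation feature vectors of two consecutive course texts can be compared against the first a-priori difference coefficient. The vectors, the prior value, and the absolute-deviation form of LOSS4 are assumptions for illustration.

```python
import math


def cosine_distance(a, b):
    # Difference coefficient between two feature vectors:
    # 1 - cosine similarity (0 means identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)


def loss4(observed_coeff, prior_coeff):
    # LOSS4 stand-in: deviation of the observed difference coefficient
    # from the first a-priori difference coefficient.
    return abs(observed_coeff - prior_coeff)


text_t = [1.0, 0.0]   # annotation features of course resource text t
text_t1 = [1.0, 1.0]  # annotation features of course resource text t+1
observed = cosine_distance(text_t, text_t1)
l4 = loss4(observed, prior_coeff=0.25)
```

Steps 2512 and 2513 would apply the same pattern to the intent-feature and trajectory-feature vectors with their own prior coefficients.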
With respect to step 2512, an algorithmic network debug evaluation index LOSS5 is determined.
In this step, the algorithm network debugging evaluation index LOSS5 is determined according to the distinguishing coefficient between the previous editing behavior intention features corresponding to the two successive previous course resource texts and the second priori distinguishing coefficient. Similar to step 2511, by calculating the discrimination coefficients between intent features, the performance of the algorithm network in capturing the differences in the intent features of the editing behavior may be evaluated and compared to a second a priori discrimination coefficient.
With respect to step 2513, an algorithm network debugging evaluation index LOSS6 is determined.
In this step, the algorithm network debugging evaluation index LOSS6 is determined based on the difference coefficient between the past editing behavior trace features corresponding to the two continuous past course resource texts and the third prior difference coefficient. By calculating the distinguishing coefficient between the editing behavior trace features, the performance of the algorithm network in capturing the differences of the editing behavior trace features can be evaluated and compared with a third prior distinguishing coefficient.
With respect to step 2514, the intent updating algorithm network debugging evaluation index is generated by global fusion.
In the step, global fusion is carried out on algorithm network debugging evaluation indexes LOSS4, LOSS5 and LOSS6, and final intention updating algorithm network debugging evaluation indexes are obtained. By weighting, fusing or combining different evaluation indexes, performance performances of all aspects can be comprehensively considered, and a comprehensive evaluation result can be obtained. This composite index may help determine the overall performance of the algorithm network in terms of intent update tasks.
Steps 2511 to 2514 further develop to illustrate how the discrimination coefficients and a priori values can be used to determine an evaluation index to evaluate the performance of the algorithm network in different aspects. These steps help to compare specific observed data with a priori knowledge and comprehensively consider multiple evaluation indexes to obtain accurate and comprehensive evaluation results to guide the improvement and optimization of the intent update algorithm network.
In some independent designs, after course knowledge point generation is performed by the course knowledge point generation component of the course knowledge point generation network based on the editing behavior intention feature and the editing behavior track feature, so as to obtain the course knowledge point labeling text block of the corresponding learning content editing text block, the method further comprises step 150.
Step 150, extracting structural features of the optimized course resource text to obtain a course resource structural semantic vector; using the course resource structural semantic vector to carry out structural storage on the optimized course resource text; and the optimized course resource text comprises a course knowledge point labeling text block.
Continuing with the history discipline as an example, assume that a curriculum knowledge point annotation text block has been generated for a corresponding learning content editing text block. These knowledge point annotation text blocks may contain key information related to a particular topic or concept.
For example, in a course on ancient Greek history, there may be a course knowledge point labeling text block covering the Greek city-state (polis) system. The text block may include knowledge point labels regarding the origin of the city-state system, its political organization, the rights and obligations of citizens, and so on.
These knowledge point annotation text blocks can be generated from the content of the learning resource and provide a more specific and accurate description of the course knowledge points. For example, where a textbook section discusses Athenian democracy, the corresponding course knowledge point annotation text block may mention the characteristics of Athenian democracy, the forms of citizen participation, the decision-making process, and so on.
In step 150, the optimized lesson resource text includes not only the textbook content, research papers, etc., but also content related to the lesson knowledge point labeling text blocks. This means that the curriculum knowledge points are labeled with text blocks as important components in the structured feature extraction and semantic vector representation.
In summary, for the optimized course resource text of the history subject, besides extracting structural features and semantic information such as chapters of the teaching materials and titles of the papers, the content related to the course knowledge point labeling text block is paid attention to and processed in particular, so that the generated structural semantic vector can reflect knowledge points and topics of the learning resource completely.
In some independent embodiments, the step 150 of using the curriculum asset structured semantic vector to structurally store the optimized curriculum asset text includes: pyramid feature coding is carried out on the course resource structural semantic vector, and a first pyramid semantic feature set corresponding to the course resource structural semantic vector is obtained; performing storage feature mapping according to the first pyramid semantic feature set to obtain a storage feature mapping semantic set; and determining the structured storage text of the optimized course resource text based on the storage feature mapping semantic set.
In an embodiment based on step 150, the process of using the curriculum asset structured semantic vector to structurally store the optimized curriculum asset text can be further explained as follows:
(1) Pyramid feature coding is carried out on course resource structured semantic vectors: and carrying out pyramid feature coding on the structured semantic vectors of the course resources. This means that the vector is layered, capturing semantic information of different granularity. For example, a vector may be partitioned into a plurality of sub-vectors using techniques such as pyramid pooling, and key features of each sub-vector extracted;
(2) Obtaining a first pyramid semantic feature set: and obtaining a first pyramid semantic feature set corresponding to the course resource structured semantic vector through pyramid feature coding. The feature set contains semantic features with different levels and different granularities, such as chapter topics, sub-topics, keywords and the like;
(3) Storing feature mapping: and carrying out storage feature mapping according to the first pyramid semantic feature set, namely mapping each semantic feature in the feature set to a corresponding storage position. The purpose of this is to create an efficient index that facilitates subsequent storage and retrieval operations. For example, the features may be mapped into corresponding storage locations or databases using hash functions or index tables;
(4) Determining a structured storage text of the optimized course resource text: based on the storage feature mapping semantic set, a structured storage text that optimizes the curriculum resource text is determined. This means that each text block in the curriculum asset can be identified, organized, and associated with its corresponding semantic feature. For example, a record is created in the database containing the stored feature map semantic set as a key and the corresponding optimized curriculum resource text as a value.
It can be seen that through operations such as pyramid feature encoding, storage feature mapping, and structured storage text, the optimized course resource text of the history subject can be stored in a structured format, so that the information and semantic features of the text block can be effectively managed and retrieved. This helps to improve the organization, availability and discoverability of learning resources, thereby providing better learning experience and knowledge acquisition.
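A minimal sketch of the pyramid feature coding and storage feature mapping described in items (1)-(4) above. The mean-pooling scheme, hash-based storage key, and dictionary "database" are illustrative assumptions standing in for the unspecified implementation.

```python
import hashlib


def pyramid_encode(vec, levels=(1, 2)):
    # Pyramid feature coding sketch: at each level, split the semantic
    # vector into `level` segments and keep each segment's mean,
    # capturing coarse-to-fine granularity.
    feats = []
    for level in levels:
        size = len(vec) // level
        for i in range(level):
            segment = vec[i * size:(i + 1) * size]
            feats.append(sum(segment) / len(segment))
    return feats


def store(features, text, db):
    # Storage feature mapping: hash the pyramid features to a storage
    # key and keep the optimized course resource text under that key.
    key = hashlib.sha256(repr(features).encode()).hexdigest()[:12]
    db[key] = text
    return key


db = {}
semantic_vec = [0.2, 0.4, 0.6, 0.8]
features = pyramid_encode(semantic_vec)  # overall mean plus two halves
key = store(features, "optimized course resource text", db)
```

A production system would likely use a locality-sensitive hash or vector index rather than an exact hash, so that semantically similar texts map to nearby storage locations.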
Further, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the above-described method.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A knowledge point generation method based on big data mining, characterized in that the method is applied to a big data mining system, the method comprising:
when learning behavior analysis is carried out on a target offline learning user to obtain a course resource text to be processed, initial editing text block distribution data of the target offline learning user in the course resource text to be processed is obtained;
based on the initial editing text block distribution data, text description mining is carried out by a text description mining component of a course knowledge point generation network, and target text description quantization semantics are obtained;
Based on the target text description quantization semantics, performing behavior intention analysis by a behavior intention analysis component of the course knowledge point generation network to obtain editing behavior intention characteristics and editing behavior track characteristics of each learning content editing text block of the target offline learning user;
and generating course knowledge points through a course knowledge point generating component of the course knowledge point generating network based on the editing behavior intention characteristic and the editing behavior track characteristic to obtain a course knowledge point labeling text block of the corresponding learning content editing text block.
2. The method of claim 1, wherein the text description mining component includes a first text description mining branch and a first text description splicing branch, and the performing text description mining by the text description mining component of the course knowledge point generation network based on the initial editing text block distribution data to obtain target text description quantization semantics comprises:
obtaining text description mining information of the initial editing text block distribution data;
generating first text description quantization semantics corresponding to the course resource text to be processed through the first text description mining branch based on the text description mining information of the initial editing text block distribution data;
and performing text description splicing, through the first text description splicing branch, on the first text description quantization semantics corresponding to the course resource text to be processed and first text description quantization semantics corresponding to a previous course resource text to obtain the target text description quantization semantics, wherein the previous course resource text and the course resource text to be processed belong to the same course resource text set, and the previous course resource text precedes, and is associated with, the course resource text to be processed in the course resource text set.
3. The method of claim 2, wherein the text description mining component further comprises a second text description mining branch and a second text description splicing branch, the second text description mining branch and the second text description splicing branch being located before the first text description mining branch in the course knowledge point generation network, and the obtaining text description mining information of the initial editing text block distribution data comprises:
performing text description mining on the initial editing text block distribution data through the second text description mining branch to obtain second text description quantization semantics corresponding to the course resource text to be processed;
and determining the second text description quantization semantics corresponding to the course resource text to be processed as the text description mining information;
wherein the generating the first text description quantization semantics corresponding to the course resource text to be processed through the first text description mining branch based on the text description mining information of the initial editing text block distribution data comprises:
performing semantic splicing, through the second text description splicing branch, on the second text description quantization semantics corresponding to the course resource text to be processed and second text description quantization semantics corresponding to the previous course resource text to obtain text description quantization spliced semantics;
and performing feature quantization processing on the text description quantization spliced semantics through the first text description mining branch to obtain the first text description quantization semantics.
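A minimal sketch of the two-branch mining of claims 2 and 3, under stated assumptions: the pooling choices and the rounding used as "feature quantization" are hypothetical stand-ins, since the patent leaves the branch internals unspecified. The second branch extracts semantics per course resource text, the splicing branch concatenates the current and previous texts' semantics, and the first branch quantizes the spliced vector:

```python
import numpy as np

def second_branch(block_distribution: np.ndarray) -> np.ndarray:
    # second text description mining branch (toy: max pooling)
    return block_distribution.max(axis=0)

def splice(current_sem: np.ndarray, previous_sem: np.ndarray) -> np.ndarray:
    # second text description splicing branch: concatenate the semantics
    # of the current and the preceding course resource text
    return np.concatenate([current_sem, previous_sem])

def first_branch(spliced: np.ndarray) -> np.ndarray:
    # first text description mining branch: feature quantization
    # (toy: clip to the unit interval, round to one decimal)
    return np.round(np.clip(spliced, 0.0, 1.0), 1)

prev = second_branch(np.array([[0.1, 0.9], [0.3, 0.4]]))  # previous course text
cur = second_branch(np.array([[0.6, 0.2], [0.5, 0.8]]))   # current course text
first_sem = first_branch(splice(cur, prev))
```

The point of the splice is that the first branch sees context from the associated previous course resource text, not the current text alone.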
4. The method of claim 1, wherein the method further comprises:
generating a course knowledge relation network of the target offline learning user according to the course knowledge point labeling text blocks.
5. The method of claim 1, wherein an original decision tree algorithm corresponding to the course knowledge point generation network includes a basic text description mining component, a basic behavior intention analysis component, and a basic course knowledge point generation component, and the method further comprises:
acquiring past initial editing text block distribution data of a past offline learning user in a past course resource text;
performing text description mining through the basic text description mining component based on the past initial editing text block distribution data to obtain target past text description quantization semantics;
performing behavior intention analysis through the basic behavior intention analysis component based on the target past text description quantization semantics to obtain a past editing behavior intention characteristic and a past editing behavior track characteristic of each learning content editing text block of the past offline learning user;
generating course knowledge points through the basic course knowledge point generation component based on the past editing behavior intention characteristic and the past editing behavior track characteristic to obtain past course knowledge point labeling text blocks for the corresponding learning content editing text blocks;
generating a target algorithm network debugging evaluation index based on the past course knowledge point labeling text block;
updating and improving algorithm variables of the original decision tree algorithm based on the target algorithm network debugging evaluation index to obtain the course knowledge point generation network;
The generating the target algorithm network debugging evaluation index based on the past course knowledge point labeling text block comprises the following steps:
generating an intention mining algorithm network debugging evaluation index, an intention updating algorithm network debugging evaluation index and a disturbance algorithm network debugging evaluation index respectively based on the past course knowledge point labeling text blocks, wherein the intention mining algorithm network debugging evaluation index is used for representing the accuracy of editing intention mining, the intention updating algorithm network debugging evaluation index is used for representing the change coefficient of editing behavior adjustment between different course resource texts, and the disturbance algorithm network debugging evaluation index is used for representing the confidence level of editing intention mining;
generating the target algorithm network debugging evaluation index based on at least one of the intention mining algorithm network debugging evaluation index, the intention updating algorithm network debugging evaluation index and the disturbance algorithm network debugging evaluation index.
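Claim 5's final fusion step combines at least one of the three debugging evaluation indices into the target index. One way to picture it is a weighted sum; the weights here are illustrative assumptions, not values from the patent:

```python
def target_debug_index(intent_mining_idx: float,
                       intent_update_idx: float,
                       disturbance_idx: float,
                       weights=(0.5, 0.3, 0.2)) -> float:
    """Fuse the three debugging evaluation indices into the target
    algorithm network debugging evaluation index (toy: weighted sum).
    The claim also permits using only a subset of the indices, which
    corresponds to zeroing some weights."""
    parts = (intent_mining_idx, intent_update_idx, disturbance_idx)
    return sum(w * p for w, p in zip(weights, parts))

target_index = target_debug_index(0.9, 0.4, 0.2)
```

The target index would then drive the update of the original decision tree algorithm's variables, as in the claimed training step.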
6. The method of claim 5, wherein the original decision tree algorithm further comprises a first multi-layer perceptron unit, and the generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text blocks comprises:
performing knowledge detection on the past course knowledge point labeling text blocks through the first multi-layer perceptron unit to obtain a first knowledge detection result;
and generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text blocks and the first knowledge detection result;
wherein the original decision tree algorithm further comprises a second multi-layer perceptron unit, and the method further comprises: performing knowledge detection on the past editing behavior intention characteristic through the second multi-layer perceptron unit to obtain a second knowledge detection result; and the generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text blocks and the first knowledge detection result comprises: generating the disturbance algorithm network debugging evaluation index based on the past course knowledge point labeling text blocks, the first knowledge detection result, and the second knowledge detection result.
7. The method of claim 5, wherein generating the intent mining algorithm network debug evaluation index based on the past course knowledge point annotation text blocks comprises:
determining an algorithm network debugging evaluation index LOSS1 based on the past course knowledge point labeling text blocks and prior course knowledge point labeling text blocks;
determining an algorithm network debugging evaluation index LOSS2 based on the past editing behavior intention characteristic and a prior editing behavior intention characteristic corresponding to the past course knowledge point labeling text blocks;
determining an algorithm network debugging evaluation index LOSS3 based on the past editing behavior track characteristic and a prior editing behavior track characteristic corresponding to the past course knowledge point labeling text blocks;
and performing global fusion on the algorithm network debugging evaluation index LOSS1, the algorithm network debugging evaluation index LOSS2, and the algorithm network debugging evaluation index LOSS3 to obtain the intention mining algorithm network debugging evaluation index.
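The patent does not name the distance measure behind LOSS1 through LOSS3; using mean squared error as a hypothetical stand-in, claim 7's three losses (each comparing a past output against its prior counterpart) and their global fusion might look like:

```python
import numpy as np

def mse(pred, prior) -> float:
    """Toy debugging evaluation index: mean squared error between a
    network output and its prior (ground-truth) counterpart."""
    pred, prior = np.asarray(pred, float), np.asarray(prior, float)
    return float(np.mean((pred - prior) ** 2))

# LOSS1: past course knowledge point labeling text blocks vs. prior labels
loss1 = mse([1, 0, 1], [1, 1, 1])
# LOSS2: past editing behavior intention characteristic vs. prior
loss2 = mse([0.2, 0.8], [0.0, 1.0])
# LOSS3: past editing behavior track characteristic vs. prior
loss3 = mse([0.5, 0.5], [0.5, 0.5])
# global fusion (toy: unweighted sum)
intent_mining_index = loss1 + loss2 + loss3
```

A perfect match on every output drives the fused index to zero, which is the behavior a supervised debugging evaluation index needs.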
8. The method of claim 5, wherein generating the intent update algorithm network debug evaluation index based on the past course knowledge point annotation text blocks comprises:
determining an algorithm network debugging evaluation index LOSS4 based on a distinguishing coefficient between the past course knowledge point labeling text blocks corresponding to two consecutive past course resource texts and a first prior distinguishing coefficient;
determining an algorithm network debugging evaluation index LOSS5 based on a distinguishing coefficient between the past editing behavior intention characteristics corresponding to the two consecutive past course resource texts and a second prior distinguishing coefficient;
determining an algorithm network debugging evaluation index LOSS6 based on a distinguishing coefficient between the past editing behavior track characteristics corresponding to the two consecutive past course resource texts and a third prior distinguishing coefficient;
and performing global fusion on the algorithm network debugging evaluation index LOSS4, the algorithm network debugging evaluation index LOSS5, and the algorithm network debugging evaluation index LOSS6 to obtain the intention updating algorithm network debugging evaluation index.
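Claim 8 compares an observed distinguishing coefficient between outputs for two consecutive course resource texts against a prior coefficient. A sketch, assuming a normalized L1 distance as the coefficient and a squared deviation as each loss (both are illustrative choices, not specified by the patent):

```python
import numpy as np

def diff_coefficient(feat_t, feat_t1) -> float:
    """Distinguishing coefficient between outputs for two consecutive
    past course resource texts (toy: mean absolute difference)."""
    a, b = np.asarray(feat_t, float), np.asarray(feat_t1, float)
    return float(np.abs(a - b).mean())

def update_loss(coeff: float, prior_coeff: float) -> float:
    """Penalize deviation of the observed change from the prior change."""
    return (coeff - prior_coeff) ** 2

# LOSS4/5/6 over labeling text blocks, intention and track characteristics
loss4 = update_loss(diff_coefficient([1, 0], [1, 1]), 0.5)
loss5 = update_loss(diff_coefficient([0.2, 0.4], [0.2, 0.4]), 0.0)
loss6 = update_loss(diff_coefficient([0.9], [0.1]), 0.8)
intent_update_index = loss4 + loss5 + loss6  # global fusion (toy: sum)
```

Unlike LOSS1 through LOSS3, these losses supervise how much the outputs change between adjacent course texts rather than the outputs themselves.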
9. A big data mining system comprising a processor and a memory; the processor is communicatively connected to the memory, the processor being configured to read a computer program from the memory and execute the computer program to implement the method of any of claims 1-8.
10. A computer-readable storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the method of any of claims 1-8.
CN202311819594.4A 2023-12-27 2023-12-27 Knowledge point generation method and system based on big data mining Active CN117473076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311819594.4A CN117473076B (en) 2023-12-27 2023-12-27 Knowledge point generation method and system based on big data mining


Publications (2)

Publication Number Publication Date
CN117473076A true CN117473076A (en) 2024-01-30
CN117473076B CN117473076B (en) 2024-03-08

Family

ID=89626078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311819594.4A Active CN117473076B (en) 2023-12-27 2023-12-27 Knowledge point generation method and system based on big data mining

Country Status (1)

Country Link
CN (1) CN117473076B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080104061A1 (en) * 2006-10-27 2008-05-01 Netseer, Inc. Methods and apparatus for matching relevant content to user intention
US20190004831A1 (en) * 2017-06-30 2019-01-03 Beijing Baidu Netcom Science And Technology Co., Ltd. IoT BASED METHOD AND SYSTEM FOR INTERACTING WITH USERS
CN111414464A (en) * 2019-05-27 2020-07-14 腾讯科技(深圳)有限公司 Question generation method, device, equipment and storage medium
CN112232066A (en) * 2020-10-16 2021-01-15 腾讯科技(北京)有限公司 Teaching outline generation method and device, storage medium and electronic equipment
CN113516574A (en) * 2021-07-27 2021-10-19 北京爱学习博乐教育科技有限公司 Self-adaptive learning system based on big data and deep learning and construction method thereof
CN114707510A (en) * 2022-04-01 2022-07-05 深圳市普渡科技有限公司 Resource recommendation information pushing method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN117473076B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN110795543B (en) Unstructured data extraction method, device and storage medium based on deep learning
CN112203122B (en) Similar video processing method and device based on artificial intelligence and electronic equipment
CN111680173A (en) CMR model for uniformly retrieving cross-media information
CN109492164A (en) A kind of recommended method of resume, device, electronic equipment and storage medium
CN109271539B (en) Image automatic labeling method and device based on deep learning
WO2022218186A1 (en) Method and apparatus for generating personalized knowledge graph, and computer device
CN111738016A (en) Multi-intention recognition method and related equipment
CN113886567A (en) Teaching method and system based on knowledge graph
CN110163376A (en) Sample testing method, the recognition methods of media object, device, terminal and medium
CN113239173B (en) Question-answer data processing method and device, storage medium and electronic equipment
CN111263238A (en) Method and equipment for generating video comments based on artificial intelligence
Dang et al. MOOC-KG: A MOOC knowledge graph for cross-platform online learning resources
CN116821318A (en) Business knowledge recommendation method, device and storage medium based on large language model
CN115952298A (en) Supplier performance risk analysis method and related equipment
CN115438674A (en) Entity data processing method, entity linking method, entity data processing device, entity linking device and computer equipment
CN117473076B (en) Knowledge point generation method and system based on big data mining
CN111104503A (en) Construction engineering quality acceptance standard question-answering system and construction method thereof
CN115982363A (en) Small sample relation classification method, system, medium and electronic device based on prompt learning
CN112528674B (en) Text processing method, training device, training equipment and training equipment for model and storage medium
Tian et al. Semantic similarity measure of natural language text through machine learning and a keyword‐aware cross‐encoder‐ranking summarizer—A case study using UCGIS GIS &T body of knowledge
Oliveira et al. Combining prompt-based language models and weak supervision for labeling named entity recognition on legal documents
CN117216194B (en) Knowledge question-answering method and device, equipment and medium in literature and gambling field
CN113837910B (en) Test question recommending method and device, electronic equipment and storage medium
CN117743315B (en) Method for providing high-quality data for multi-mode large model system
CN116226678B (en) Model processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant