CN116431800B - Examination interface generation method, device and readable storage medium - Google Patents

Examination interface generation method, device and readable storage medium

Info

Publication number
CN116431800B
CN116431800B (application CN202310202771.8A)
Authority
CN
China
Prior art keywords
difficulty
determining
question
weight
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310202771.8A
Other languages
Chinese (zh)
Other versions
CN116431800A (en)
Inventor
梁名凯
林晓掀
杨煜荣
苏子旭
黄少龙
宿志鹏
江树杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Miaoke Technology Co ltd
Original Assignee
Guangzhou Miaoke Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Miaoke Technology Co ltd
Priority to CN202310202771.8A
Publication of CN116431800A
Application granted
Publication of CN116431800B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/338Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Tourism & Hospitality (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a method, a device, and a readable storage medium for generating an examination interface. The method comprises: determining a difficulty compensation value associated with the subject to be examined according to historical answer data corresponding to an examinee identifier, and adjusting the question quantity corresponding to each knowledge point; determining the compensation weight of each difficulty associated with a knowledge point according to the historical answer data, and determining the selection probability of each test question associated with the difficulty based on the compensation weight; selecting target test questions from a question bank according to the selection probability; and, after receiving answer data corresponding to the examinee identifier, determining a final score for the examinee identifier according to the answer scores and answer durations of the target test questions in the answer data. The method effectively solves the technical problem in the related art that setting a uniform examination paper for all students leaves knowledge points incompletely examined, and achieves the technical effects of personalizing the questions for each student's situation and comprehensively examining how well the student has mastered each knowledge point.

Description

Examination interface generation method, device and readable storage medium
Technical Field
The present application relates to the field of internet education, and in particular, to a method for generating an examination interface, an apparatus for generating an examination interface, and a readable storage medium.
Background
With the rapid development of internet technology, internet education is increasingly favored by students and parents because of its convenience, whereas offline courses are constrained by venue and environment. In internet education, students can not only take courses online but also take examinations online to check their learning progress.
In the related art, a learning platform distributes to students test papers matched to their grade, where the questions in each grade's paper are either fixed questions set in advance or questions randomly drawn from a question bank. However, setting a uniform paper for all students in every examination means that the scores cannot reflect each student's learning efficiency.
Disclosure of Invention
By providing a method for generating an examination interface, a device for generating an examination interface, and a readable storage medium, the embodiments of the present application solve the technical problems in the related art that setting a uniform examination paper for all students leaves knowledge points incompletely examined and fails to reflect each student's learning efficiency in a targeted way, and achieve the technical effects of personalizing the questions for each student's situation, comprehensively examining the student's mastery of the knowledge points, and improving learning efficiency.
The embodiment of the application provides a method for generating an examination interface, which comprises the following steps:
determining a difficulty compensation value associated with a subject to be examined according to historical answer data corresponding to an examinee identifier, and adjusting the question quantity corresponding to each knowledge point according to the difficulty compensation value;
determining the compensation weight of each difficulty associated with a knowledge point according to the historical answer data, and determining the selection probability of each test question associated with the difficulty based on the compensation weight;
selecting target test questions from a question bank according to the question quantity and the selection probability, and generating an examination interface based on the target test questions;
after receiving answer data corresponding to the examinee identifier, determining a final score of the examinee identifier according to the answer scores and answer durations of the target test questions in the answer data.
Optionally, the determining a difficulty compensation value associated with the subject to be examined according to the historical answer data corresponding to the examinee identifier includes:
determining a subject evaluation parameter of the examinee identifier according to historical subject scores associated with the subject to be examined in the historical answer data;
and determining the difficulty compensation value based on the preset difficulty of the subject to be examined and the subject evaluation parameter, wherein the difficulty compensation value comprises a compensation weight for each difficulty of each knowledge point.
Optionally, the adjusting the question quantity corresponding to each knowledge point according to the difficulty compensation value includes:
determining a classification evaluation parameter of each knowledge point according to historical classification scores associated with the knowledge point in the historical answer data;
acquiring the classification difficulty and the preset classification weight associated with the knowledge point, and adjusting the preset classification weight based on the classification evaluation parameter;
and determining the question quantity according to the adjusted preset classification weight and the classification difficulty.
Optionally, the determining the question quantity according to the adjusted preset classification weight and the classification difficulty includes:
determining a classification compensation weight according to the classification difficulty and the difficulty compensation value;
determining a classification relative weight according to the preset classification weight and the classification compensation weight;
and determining the question quantity according to the classification relative weight of each knowledge point.
Optionally, the determining the compensation weight of each difficulty associated with the knowledge point according to the historical answer data, and determining the selection probability of each test question associated with the difficulty based on the compensation weight, includes:
determining a difficulty compensation weight for each difficulty according to historical difficulty scores associated with the difficulty in the historical answer data;
determining a difficulty relative weight according to the preset difficulty weight of the difficulty and the difficulty compensation weight;
and determining the selection probability according to the difficulty relative weight of each difficulty.
Optionally, the selecting target test questions from a question bank according to the question quantity and the selection probability, and generating an examination interface based on the target test questions, includes:
generating a probability set according to the selection probabilities;
selecting the question quantity of target test questions from the question bank associated with each knowledge point according to the probability set and a preset selection function;
and generating the examination interface according to the target test questions.
Optionally, the determining, after receiving answer data corresponding to the examinee identifier, a final score of the examinee identifier according to the answer scores and answer durations of the target test questions in the answer data includes:
after receiving the answer data, acquiring an evaluation base index of each target test question;
determining a scoring value of the target test question according to the answer score, the answer duration, and the evaluation base index;
obtaining a reference score corresponding to each test question identifier according to the scoring values sharing that test question identifier;
and determining the final score according to each reference score and the reference weight corresponding to the test question identifier.
Optionally, the determining the scoring value of the target test question according to the answer score, the answer duration, and the evaluation base index includes:
determining the question score and the expected answer time corresponding to the evaluation base index;
and determining the scoring value according to the ratio of the answer score to the question score and the ratio of the answer duration to the expected answer time.
In addition, the application also provides a device for generating an examination interface, which comprises a memory, a processor, and a program for generating an examination interface that is stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the above method for generating an examination interface.
The application further provides a computer-readable storage medium storing a program for generating an examination interface, wherein the program, when executed by a processor, implements the steps of the above method for generating an examination interface.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. The difficulty compensation value of the subject to be examined is determined according to the historical answer data corresponding to the examinee identifier, and the question quantity is then adjusted according to the preset difficulty of each knowledge point and the difficulty compensation value; the compensation weight of each difficulty associated with the knowledge point is further determined according to the historical answer data, and the selection probability of each test question is determined according to the compensation weight; the target test questions are determined according to the selection probability and the question quantity, and the examination interface is generated; after answer data corresponding to the examinee identifier are received, the final score is determined according to the score and answer duration of each target test question. This effectively solves the technical problem in the related art that setting a uniform examination paper for all students leaves knowledge points incompletely examined and fails to reflect each student's learning efficiency in a targeted way, and achieves the technical effects of personalizing the questions for each student's situation, comprehensively examining the student's mastery of the knowledge points, and improving learning efficiency.
2. After the answer data are received, the evaluation base index of each target test question is acquired; the scoring value of the target test question is determined according to the answer score, the answer duration, and the evaluation base index; the reference score corresponding to each test question identifier is obtained according to the scoring values sharing that identifier; and the final score is determined according to each reference score and the reference weight corresponding to the test question identifier. This effectively solves the technical problem in the related art that a single-dimensional scoring method cannot accurately evaluate a student's mastery of a particular knowledge point, and achieves the technical effect of evaluating the student's learning results in multiple dimensions and accurately reflecting the student's mastery of each knowledge point.
Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the method for generating an examination interface according to the present application;
FIG. 2 is a detailed flowchart of step S120 in a second embodiment of the method for generating an examination interface according to the present application;
FIG. 3 is a detailed flowchart of step S140 in a third embodiment of the method for generating an examination interface according to the present application;
FIG. 4 is a schematic diagram of the hardware structure involved in an embodiment of the device for generating an examination interface of the present application.
Detailed Description
In the related art, the learning platform groups students into classes according to their grades and sets a separate test paper for each class, but the paper for each class is fixed, so the students are given a uniform paper; as a result, the knowledge points are not examined comprehensively and the students' learning efficiency cannot be reflected in a targeted way. The main technical scheme adopted by the embodiments of the present application is: determining a difficulty compensation value of the subject to be examined according to the historical answer data corresponding to the examinee identifier, and adjusting the question quantity according to the preset difficulty of each knowledge point and the difficulty compensation value; further determining the compensation weight of each difficulty associated with the knowledge point according to the historical answer data, and determining the selection probability of each test question according to the compensation weight; determining the target test questions according to the selection probability and the question quantity, and generating the examination interface; and, after receiving answer data corresponding to the examinee identifier, determining the final score according to the score and answer duration of each target test question. In this way, the questions are personalized for each student's situation, the student's mastery of the knowledge points is examined comprehensively, and the technical effect of improving learning efficiency is achieved.
In order that the above-described aspects may be better understood, exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
Embodiment One
The embodiment of the application discloses a method for generating an examination interface. Referring to fig. 1, the method comprises the following steps:
Step S110, determining a difficulty compensation value associated with the subject to be examined according to the historical answer data corresponding to the examinee identifier, and adjusting the question quantity corresponding to each knowledge point according to the difficulty compensation value;
In this embodiment, the historical answer data is associated with the examinee identifier and includes, but is not limited to, the score of each test question, the time spent answering each test question, and the final score of each examination. The difficulty compensation value is associated with the subject to be examined and takes the form of a set of values; each knowledge point has a compensation value corresponding to each of its difficulties.
In this embodiment, the following parameters are preset: the difficulties of the three subjects "law", "criminal law", and "criminal procedure law" are 1, 3, and 2 respectively, and a difficulty and a weight are set for each of several knowledge points within the three subjects. For example, criminal law is provided with knowledge points, difficulties, and weights such as: crimes endangering national security, difficulty 1, weight 10%; crimes endangering public security, difficulty 2, weight 20%; crimes infringing citizens' personal rights, difficulty 5, weight 50%; crimes against property, difficulty 2, weight 20%; and so on.
Optionally, step S110 includes:
Step S111, determining a subject evaluation parameter of the examinee identifier according to the historical subject scores associated with the subject to be examined in the historical answer data;
Step S112, determining the difficulty compensation value based on the preset difficulty of the subject to be examined and the subject evaluation parameter, wherein the difficulty compensation value comprises a compensation weight for each difficulty of each knowledge point;
As an optional implementation, the historical subject scores associated with the subject to be examined are obtained from the historical answer data and substituted into a preset formula to determine the subject evaluation parameter of the examinee identifier; the difficulty compensation value is then determined by combining the trend of the historical subject scores, the subject evaluation parameter, and the subject difficulty associated with the subject to be examined. The difficulty compensation value comprises a compensation weight for each difficulty of each knowledge point, and the difficulties of a knowledge point correspond one-to-one to the compensation weights.
Optionally, the historical subject scores are substituted into a preset formula, and the subject evaluation parameter of the examinee identifier is determined from three scores: the most recent final score, the average of the final scores of the last three examinations, and the average of all historical final scores.
For example, for an examination in a given subject, the most recent final score is 90 out of a full score of 100; the average of the final scores of the last three examinations is 81 out of 100; and the average of all historical final scores is 75 out of 100. A comprehensive score of 82 is determined according to the preset formula, and analysis of the last three final scores shows that the examinee's scores are trending upward, so the learner's recent mastery is judged to be good on the whole. The difficulty of the question bank is determined to be 3, so the compensation weights for the difficulties of each knowledge point are: difficulty 1, 10%; difficulty 2, 20%; difficulty 3, 40%; difficulty 4, 15%; difficulty 5, 15%.
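A minimal sketch of how such a subject evaluation parameter might be combined from the three scores; the description only names the inputs and a preset formula, so the normalization and the 0.5/0.3/0.2 weights below are illustrative assumptions rather than the disclosed formula:

```python
def subject_evaluation_parameter(last_score: float,
                                 last_three_avg: float,
                                 historical_avg: float,
                                 full_score: float = 100.0) -> float:
    """Combine the three historical scores into one evaluation parameter.

    The description only states that these three scores feed a preset formula;
    the 0.5 / 0.3 / 0.2 weights here are illustrative assumptions.
    """
    normalized = [last_score / full_score,
                  last_three_avg / full_score,
                  historical_avg / full_score]
    weights = [0.5, 0.3, 0.2]
    return 100.0 * sum(w * s for w, s in zip(weights, normalized))


# Example values from the description: 90, 81 and 75 out of 100.
print(subject_evaluation_parameter(90, 81, 75))  # a value in the low-to-mid 80s
```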
Step S113, determining classification evaluation parameters of the knowledge points according to the historical classification scores associated with the knowledge points in the historical answer data;
step S114, acquiring the classification difficulty and the preset classification weight associated with the knowledge points, and adjusting the preset classification weight based on the classification evaluation parameters;
Step S115, determining the question according to the adjusted preset classification weight and the classification difficulty.
As an optional implementation, each subject to be examined is associated with a plurality of knowledge points. The historical classification scores associated with each knowledge point are determined from the historical answer data and substituted into a preset formula to determine the classification evaluation parameter of the examinee identifier; the preset classification weight corresponding to the knowledge point is adjusted by combining the trend of the historical classification scores and the classification evaluation parameter; and the question quantity corresponding to the knowledge point is determined according to the adjusted preset classification weight and the classification difficulty.
For example, the historical classification scores are substituted into a preset formula to determine the classification evaluation parameter of the examinee identifier, where the parameter may be determined from three scores: the most recent knowledge-point score, the average of the knowledge-point scores of the last three examinations, and the average of all historical knowledge-point scores.
For example, in the historical answer data for the knowledge point "crimes endangering public security", the most recent score is 15 out of a full score of 20; the total of the last three scores is 62 out of a possible 90; and the total of all historical scores is 184 out of a possible 231. The classification evaluation parameter is then determined to be 79 according to the preset formula, and analysis of the last three scores shows that the examinee's scores for this knowledge point keep fluctuating. The preset weight of the knowledge point, 20%, is acquired and adjusted to 25% according to the classification evaluation parameter and the trend of the scores.
Optionally, step S115 includes:
Step S1151, determining a classification compensation weight according to the classification difficulty and the difficulty compensation value;
Step S1152, determining a classification relative weight according to the preset classification weight and the classification compensation weight;
Step S1153, determining the question quantity according to the classification relative weight of each knowledge point.
In this embodiment, the classification relative weight is the relative weight of the knowledge point. The question weight corresponding to each knowledge point is determined according to the classification relative weights of all knowledge points, and the question quantity of each knowledge point is then determined from the total question quantity.
As an optional implementation, the classification compensation weight corresponding to the knowledge point's classification difficulty is selected from the difficulty compensation value, and the classification relative weight of the knowledge point is determined as the product of the classification compensation weight and the preset classification weight; the question weight corresponding to each knowledge point is then determined as the percentage that its classification relative weight contributes to the sum of all classification relative weights.
For example, if the classification difficulty of a knowledge point is 2, the classification compensation weight determined from the difficulty compensation value is 20% and the preset classification weight of the knowledge point is 25%, so the classification relative weight is 20% × 25% = 5%. The other knowledge points are handled likewise, giving a set of relative weights {5%, 8%, 10%, 15%, 4%}; the question weight of the knowledge point with relative weight 4% is then 4% / (4% + 8% + 10% + 15% + 5%) ≈ 9%, and the question weights of the other classifications follow in the same way.
Illustratively, when the total question quantity is 100 questions, the question quantity of that knowledge point is 9% × 100 = 9 questions.
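A minimal sketch of this allocation step; the function name, the rounding-down choice, and the knowledge-point labels are illustrative assumptions, with the weights taken from the example above:

```python
def question_quantities(relative_weights: dict[str, float],
                        total_questions: int) -> dict[str, int]:
    """Turn per-knowledge-point relative weights into per-point question counts."""
    total_weight = sum(relative_weights.values())
    return {point: int(total_questions * weight / total_weight)
            for point, weight in relative_weights.items()}


# Classification relative weights from the example: {5%, 8%, 10%, 15%, 4%}.
weights = {"point_1": 0.05, "point_2": 0.08, "point_3": 0.10,
           "point_4": 0.15, "point_5": 0.04}
print(question_quantities(weights, 100))
# point_5: int(100 * 0.04 / 0.42) = 9 questions, matching the 9% x 100 = 9 in the text
```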
Step S120, determining the compensation weight of each difficulty associated with the knowledge point according to the historical data, and determining the selection probability of each test question associated with the difficulty based on the compensation weight;
step S130, selecting target test questions from a question bank according to the question quantity and the selection probability, and generating an examination interface based on the target test questions;
in this embodiment, a plurality of knowledge points are associated with the subject to be examined, each knowledge point is associated with a preset number of difficulty levels, and when the number of the difficulty levels is 5, the knowledge points represent the subjects in the subject library associated with the knowledge points, and five difficulty levels are provided.
Illustratively, the classification difficulty of knowledge point 1 is 2 and its associated question bank has 5 difficulty levels, while the classification difficulty of knowledge point 2 is 2 and its associated question bank has 3 difficulty levels; questions labeled with the same difficulty level 2 in these two knowledge points do not have the same actual difficulty.
Illustratively, the classification difficulty of knowledge point 1 is 2 and its associated question bank has 5 difficulty levels, while the classification difficulty of knowledge point 2 is 3 and its associated question bank has 5 difficulty levels; questions labeled with the same difficulty level 3 in these two knowledge points do not have the same actual difficulty.
As an optional implementation, the historical scores corresponding to each difficulty associated with the knowledge point are determined according to the historical answer data; the compensation weight corresponding to each difficulty is determined according to the historical scores; the difficulty relative weight corresponding to each difficulty is determined based on the compensation weight and the preset difficulty weight associated with the difficulty; and the selection probability of each test question in the question bank associated with the knowledge point is determined according to the difficulty relative weights. The target test questions matching the question quantity are then selected according to the selection probabilities under a preset function rule, and the examination interface is generated from the target test questions corresponding to each knowledge point.
Step S140, after receiving answer data corresponding to the test taker identifier, determining a final score of the test taker identifier according to answer scores and answer durations of the target test questions corresponding to the answer data.
As an optional implementation, after the answer data fed back by the examination interface are received, the answer score and answer duration of each target test question are obtained from the answer data, the relative score of each test question is determined according to a preset evaluation rule, and a weighted calculation over the preset weight and relative score of each test question then yields the final score.
Illustratively, an evaluation base index is set for each question, for example: difficulty 2, question score 10, expected answer time 90 seconds, base score 10, weight 50%.
The technical scheme provided by the embodiment of the application at least has the following technical effects or advantages:
The difficulty compensation value of the subject to be examined is determined according to the historical answer data corresponding to the examinee identifier, and the question quantity is then adjusted according to the preset difficulty of each knowledge point and the difficulty compensation value; the compensation weight of each difficulty associated with the knowledge point is further determined according to the historical answer data, and the selection probability of each test question is determined according to the compensation weight; the target test questions are determined according to the selection probability and the question quantity, and the examination interface is generated; after answer data corresponding to the examinee identifier are received, the final score is determined according to the score and answer duration of each target test question. This effectively solves the technical problem in the related art that setting a uniform examination paper for all students leaves knowledge points incompletely examined and fails to reflect each student's learning efficiency in a targeted way, and achieves the technical effects of personalizing the questions for each student's situation, comprehensively examining the student's mastery of the knowledge points, and improving learning efficiency.
Embodiment Two
Based on the first embodiment, the present application provides a method for generating an examination interface, referring to fig. 2, step S120 includes:
step S210, determining difficulty compensation weights of the difficulties according to the history difficulty scores associated with the difficulties in the history answer data;
step S220, determining a difficulty relative weight according to the preset difficulty weight of the difficulty and the difficulty compensation weight;
in this embodiment, each difficulty is associated with a preset difficulty weight.
As an optional implementation, the historical scores corresponding to each difficulty associated with the knowledge point are determined according to the historical answer data; a difficulty evaluation parameter is determined from the most recent score, the average of the last three scores, and the average of all historical scores, and the difficulty compensation weight corresponding to the difficulty is then determined from the difficulty evaluation parameter. The preset difficulty weight corresponding to the difficulty is determined, and the difficulty relative weight corresponding to the difficulty is determined based on the preset difficulty weight and the difficulty compensation weight.
For example, for the historical scores of questions of a given difficulty within the knowledge point "crimes endangering public security": the most recent score averages 8, the last three scores average 7, and the historical average is 6, an upward trend, so the recent mastery of this difficulty is judged to be good and its difficulty compensation weight is determined accordingly, for instance 80%. The relative weight of difficulty 1 in this classification is then 50% × 80% = 40%, that is, the preset difficulty weight of the difficulty multiplied by the difficulty compensation weight.
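A minimal sketch of the product rule used above for difficulty relative weights; only difficulty 1 (50% × 80% = 40%) comes from the example, and the remaining preset and compensation values are illustrative assumptions chosen to yield the relative-weight set used in the next example:

```python
def difficulty_relative_weights(preset: dict[int, float],
                                compensation: dict[int, float]) -> dict[int, float]:
    """Relative weight of each difficulty = preset difficulty weight x compensation weight."""
    return {d: preset[d] * compensation[d] for d in preset}


# Difficulty 1 reproduces the 50% x 80% = 40% of the example; the other entries
# are assumed values that produce the relative weights {40%, 60%, 80%, 20%, 30%}.
preset = {1: 0.50, 2: 0.60, 3: 0.80, 4: 0.40, 5: 0.30}
compensation = {1: 0.80, 2: 1.00, 3: 1.00, 4: 0.50, 5: 1.00}
print(difficulty_relative_weights(preset, compensation))
```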
Step S230, determining the selection probability according to the difficulty relative weights of the difficulties.
As an optional implementation, after the difficulty relative weights are determined, the question weight corresponding to each difficulty is determined as the percentage that its relative weight contributes to the sum of all difficulty relative weights.
Illustratively, the set of difficulty relative weights within the knowledge point is {40%, 60%, 80%, 20%, 30%}; the question weight of difficulty 1 is then 40% / (40% + 60% + 80% + 20% + 30%) ≈ 17.3%, and the question weight of difficulty 2 is 60% / (40% + 60% + 80% + 20% + 30%) ≈ 26%. The other difficulty weights follow in the same way.
As another optional implementation, after the difficulty relative weights are determined, the relative weight of each question associated with a difficulty is set to that difficulty's relative weight, so that every question in the question bank associated with the knowledge point has a relative weight; the selection probability of each question is then determined from these question relative weights.
For example, the relative weight of difficulty 1 is 40% and there are 3 questions of difficulty 1, so the question relative weights are {40%, 40%, 40%, 60%, 60%, 80%, 20%, 30%}; the probability of selecting question 1 is 40% / (40% + 40% + 40% + 60% + 60% + 80% + 20% + 30%) ≈ 10.8%, and the probability of selecting question 4 is 60% / (40% + 40% + 40% + 60% + 60% + 80% + 20% + 30%) ≈ 16.2%. The other selection probabilities follow in the same way.
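A minimal sketch of normalizing per-question relative weights into selection probabilities, reusing the weights from the example above (the helper name is illustrative):

```python
def selection_probabilities(relative_weights: list[float]) -> list[float]:
    """Normalize per-question relative weights into selection probabilities."""
    total = sum(relative_weights)
    return [w / total for w in relative_weights]


# Question relative weights from the example: three questions of difficulty 1 (40% each),
# plus questions of the other difficulties.
weights = [0.40, 0.40, 0.40, 0.60, 0.60, 0.80, 0.20, 0.30]
probs = selection_probabilities(weights)
print([round(p, 3) for p in probs])  # question 1 -> ~0.108, question 4 -> ~0.162
```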
Optionally, step S130 includes:
Step S230, generating a probability set according to the selection probabilities;
Step S240, selecting the question quantity of target test questions from the question bank associated with each knowledge point according to the probability set and a preset selection function;
Step S250, generating the examination interface according to the target test questions.
As one implementation, the probability set is generated from the selection probabilities, the probability set and the question quantity are fed into a preset selection function as parameters, and the target test questions are extracted; the examination interface is then generated according to a preset template. Because different questions occupy different areas, the size of the examination interface is adapted accordingly.
Illustratively, in the question bank associated with the knowledge point, the probability set for selecting questions is prob = [0.1, 0.2, 0.05, 0.15, 0.1, 0.05, 0.1, 0.05, 0.15, 0.05]; the question quantity is 5, i.e. n = 5; the target test questions are then drawn at random according to these probabilities using the randsrc function, for example Target = randsrc(1, n, [1:10; prob]), and output with disp(Target). The examination interface is generated from the selected target test questions.
Alternatively, the selection function is not unique; any other function that samples according to the probabilities may be used.
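As one possible such function, a minimal Python sketch of drawing n question indices according to the probability set above; randsrc, the MATLAB function cited in the example, samples with replacement, so the without-replacement variant below is an illustrative alternative rather than the prescribed selection function:

```python
import random

prob = [0.1, 0.2, 0.05, 0.15, 0.1, 0.05, 0.1, 0.05, 0.15, 0.05]
question_ids = list(range(1, 11))  # questions 1..10 in the knowledge point's bank
n = 5                              # question quantity for this knowledge point

# With replacement (mirrors randsrc): duplicate questions are possible.
with_replacement = random.choices(question_ids, weights=prob, k=n)

# Without replacement: renormalize over the remaining questions after each draw.
remaining_ids, remaining_prob = question_ids[:], prob[:]
without_replacement = []
for _ in range(n):
    pick = random.choices(remaining_ids, weights=remaining_prob, k=1)[0]
    idx = remaining_ids.index(pick)
    without_replacement.append(pick)
    del remaining_ids[idx], remaining_prob[idx]

print(with_replacement, without_replacement)
```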
As another optional implementation, when the determined selection probabilities are the question weights of the difficulties associated with each knowledge point, the number of questions to be selected for each difficulty is determined from the question weights and the question quantity, and the questions are then extracted accordingly.
For example, assume the question bank contains one hundred questions and each question carries a difficulty attribute with five possible values: 1, 2, 3, 4, 5, where 1 is the simplest and 5 the most difficult. Ten target test questions are to be selected from the bank, and the difficulty distribution of the target test questions is determined from the question weights corresponding to the respective difficulties: one question of difficulty 1, two of difficulty 2, three of difficulty 3, two of difficulty 4, and two of difficulty 5.
First, the questions are grouped into five categories according to their difficulty attribute: C1, C2, C3, C4, C5. Then, the number of questions to be selected from each category is determined according to the difficulty distribution of the target test questions: n1 = 1, n2 = 2, n3 = 3, n4 = 2, n5 = 2.
The corresponding number of questions is then drawn at random from each category and combined into the target test question set. For example: one question is drawn from C1: Q1; two from C2: Q2, Q3; three from C3: Q4, Q5, Q6; two from C4: Q7, Q8; and two from C5: Q9, Q10; finally, the target test question set is {Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, Q10}.
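A minimal sketch of this stratified draw; the question-bank layout and field names are illustrative assumptions, with the per-difficulty counts taken from the example:

```python
import random
from collections import defaultdict

def stratified_draw(bank: list[dict], counts: dict[int, int]) -> list[dict]:
    """Group the bank by difficulty and draw the requested number from each group."""
    by_difficulty = defaultdict(list)
    for question in bank:
        by_difficulty[question["difficulty"]].append(question)
    selected = []
    for difficulty, k in counts.items():
        selected.extend(random.sample(by_difficulty[difficulty], k))
    return selected


# Illustrative bank of 100 questions spread over difficulties 1..5.
bank = [{"id": i, "difficulty": (i % 5) + 1} for i in range(1, 101)]
counts = {1: 1, 2: 2, 3: 3, 4: 2, 5: 2}  # difficulty distribution from the example
target_questions = stratified_draw(bank, counts)
print([q["id"] for q in target_questions])  # 10 question ids
```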
Because the difficulty compensation weight of each difficulty is determined from the historical difficulty scores associated with that difficulty in the historical answer data, the difficulty relative weight is determined from the preset difficulty weight of the difficulty and the difficulty compensation weight, and the selection probability is determined from the difficulty relative weights, the technical problems in the related art that setting a uniform examination paper for all students leaves knowledge points incompletely examined and fails to reflect each student's learning efficiency in a targeted way are effectively solved; the questions are personalized for each student's situation, the student's mastery of the knowledge points is examined comprehensively, and learning efficiency is improved.
Embodiment Three
Based on the first embodiment, the third embodiment proposes a method for generating an examination interface, referring to fig. 3, step S140 further includes:
step S310, after receiving the answer data, acquiring an evaluation basic index of the target test question;
step S320, determining a scoring value of the target test question according to the answer score, the answer time length and the evaluation basic index;
in the present embodiment, an evaluation base index is set for each title. Such as difficulty 2, score 10, expected response time 90 seconds, base score 10, weight 50%.
As an optional implementation, after the answer data are received, the evaluation base index associated with each target test question is acquired, and the scoring value of the target test question is determined from the relationship between the answer score and the question score and the relationship between the answer duration and the expected answer time.
Optionally, step S320 includes:
Step S321, determining the question score and the expected answer time corresponding to the evaluation base index;
Step S322, determining the scoring value according to the ratio of the answer score to the question score and the ratio of the answer duration to the expected answer time.
As an optional implementation, the question score and the expected answer time in the evaluation base index associated with the target test question are determined, and the scoring value of the target test question is determined from the ratio of the answer score to the question score, the ratio of the answer duration to the expected answer time, and the base score corresponding to the evaluation base index.
For example, for question 1, the answer score is 9 and the answer duration is 100 seconds, while the question score in the evaluation base index is 10 and the expected answer time is 80 seconds. The score ratio is 9 / 10 → 90%, and the answer duration exceeds the expected time by (100 − 80) / 80 = 25%, which maps to a time factor of 80%; with a base score of 10, the scoring value of the question is 10 × 90% × 80% = 7.2.
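A minimal sketch of this scoring step; the description does not spell out how the 25% time overrun maps to the 80% factor, so the linear penalty below (floored at zero) is an illustrative assumption chosen to reproduce the example's 7.2:

```python
def scoring_value(answer_score: float, answer_seconds: float,
                  question_score: float, expected_seconds: float,
                  base_score: float) -> float:
    """Scoring value = base score x score ratio x time factor.

    The mapping from the time overrun to a time factor is not disclosed;
    factor = 1 - 0.8 * overrun is an illustrative assumption that yields the
    80% factor of the example for a 25% overrun.
    """
    score_ratio = answer_score / question_score
    overrun = max(0.0, (answer_seconds - expected_seconds) / expected_seconds)
    time_factor = max(0.0, 1.0 - 0.8 * overrun)
    return base_score * score_ratio * time_factor


# Example from the description: 9/10 points in 100 s against an 80 s expectation.
print(scoring_value(9, 100, 10, 80, 10))  # 7.2
```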
Step S330, obtaining a reference score corresponding to the test question mark according to the score values with the same test question mark;
and step S340, determining the final score according to each reference score and the reference weight corresponding to the test question mark.
As an optional implementation, the reference score corresponding to a test question identifier is determined from the scoring values of the target test questions that share that identifier, combined with the corresponding weight values; the final score is then determined from the reference score and the reference weight of each test question identifier.
For example, a test question identifier consists of a knowledge point and an associated difficulty. For the N questions of the same knowledge point and the same difficulty, the reference score is the sum of each question's scoring value multiplied by its weight, divided by N, i.e. sum(weight of each question × scoring value of each question) / N. The reference weight of that knowledge-point difficulty is determined to be P, and the weighted average over the reference scores and reference weights of all test question identifiers is taken as the final score.
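A minimal sketch of this aggregation; both the weight-scaled average per identifier and the weighted average over identifiers are readings of the description rather than a disclosed formula, and all names and numbers below are illustrative:

```python
from collections import defaultdict

def final_score(results: list[dict], reference_weights: dict[str, float]) -> float:
    """results: one entry per answered question with its identifier, weight and scoring value."""
    grouped = defaultdict(list)
    for r in results:
        grouped[r["identifier"]].append(r["weight"] * r["scoring_value"])

    # Reference score per test question identifier: weight-scaled average over its questions.
    reference_scores = {ident: sum(vals) / len(vals) for ident, vals in grouped.items()}

    # Final score: weighted average of the reference scores using the reference weights.
    total_weight = sum(reference_weights[ident] for ident in reference_scores)
    return sum(reference_scores[ident] * reference_weights[ident]
               for ident in reference_scores) / total_weight


results = [
    {"identifier": "public_security/d2", "weight": 0.5, "scoring_value": 7.2},
    {"identifier": "public_security/d2", "weight": 0.5, "scoring_value": 9.0},
    {"identifier": "property_crime/d3", "weight": 1.0, "scoring_value": 6.5},
]
print(final_score(results, {"public_security/d2": 0.6, "property_crime/d3": 0.4}))
```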
Because the evaluation base index of each target test question is acquired after the answer data are received, the scoring value of the target test question is determined from the answer score, the answer duration, and the evaluation base index, the reference score corresponding to each test question identifier is obtained from the scoring values sharing that identifier, and the final score is determined from each reference score and the reference weight corresponding to the test question identifier, the technical problem in the related art that a single-dimensional scoring method cannot accurately evaluate a student's mastery of a particular knowledge point is effectively solved; the student's learning results are evaluated comprehensively in multiple dimensions, and the student's mastery of each knowledge point is accurately reflected.
The application further provides a device for generating an examination interface. Referring to fig. 4, fig. 4 is a schematic structural diagram of the device for generating an examination interface in a hardware running environment according to an embodiment of the application.
As shown in fig. 4, the generating device of the examination interface may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a WIreless interface (e.g., a WIreless-FIdelity (WI-FI) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) Memory or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not constitute a limitation of the generation device of the examination interface, and may include more or fewer components than illustrated, or may combine certain components, or may be arranged in a different arrangement of components.
Optionally, the memory 1005 is electrically connected to the processor 1001, and the processor 1001 may be configured to control operation of the memory 1005, and may also read data in the memory 1005 to implement generation of an examination interface.
Alternatively, as shown in fig. 4, a memory 1005 as a storage medium may include an operating system, a data storage module, a network communication module, a user interface module, and a program for generating an examination interface.
Optionally, in the device for generating an examination interface shown in fig. 4, the network interface 1004 is mainly used for data communication with other devices, the user interface 1003 is mainly used for data interaction with a user, and the processor 1001 and the memory 1005 may be provided within the device for generating an examination interface.
As shown in fig. 4, the device for generating an examination interface calls, through the processor 1001, the program for generating an examination interface stored in the memory 1005, and performs the relevant steps of the method for generating an examination interface provided by the embodiments of the present application:
determining a difficulty compensation value associated with a subject to be examined according to historical answer data corresponding to an examinee identifier, and adjusting the question quantity corresponding to each knowledge point according to the difficulty compensation value;
determining the compensation weight of each difficulty associated with a knowledge point according to the historical answer data, and determining the selection probability of each test question associated with the difficulty based on the compensation weight;
selecting target test questions from a question bank according to the question quantity and the selection probability, and generating an examination interface based on the target test questions;
after receiving answer data corresponding to the examinee identifier, determining a final score of the examinee identifier according to the answer scores and answer durations of the target test questions in the answer data.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
determining a subject evaluation parameter of the examinee identifier according to historical subject scores associated with the subject to be examined in the historical answer data;
and determining the difficulty compensation value based on the preset difficulty of the subject to be examined and the subject evaluation parameter, wherein the difficulty compensation value comprises a compensation weight for each difficulty of each knowledge point.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
determining a classification evaluation parameter of each knowledge point according to historical classification scores associated with the knowledge point in the historical answer data;
acquiring the classification difficulty and the preset classification weight associated with the knowledge point, and adjusting the preset classification weight based on the classification evaluation parameter;
and determining the question quantity according to the adjusted preset classification weight and the classification difficulty.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
determining a classification compensation weight according to the classification difficulty and the difficulty compensation value;
determining a classification relative weight according to the preset classification weight and the classification compensation weight;
and determining the question quantity according to the classification relative weight of each knowledge point.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
determining a difficulty compensation weight for each difficulty according to historical difficulty scores associated with the difficulty in the historical answer data;
determining a difficulty relative weight according to the preset difficulty weight of the difficulty and the difficulty compensation weight;
and determining the selection probability according to the difficulty relative weight of each difficulty.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
generating a probability set according to the selection probabilities;
selecting the question quantity of target test questions from the question bank associated with each knowledge point according to the probability set and a preset selection function;
and generating the examination interface according to the target test questions.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
after receiving the answer data, acquiring an evaluation base index of each target test question;
determining a scoring value of the target test question according to the answer score, the answer duration, and the evaluation base index;
obtaining a reference score corresponding to each test question identifier according to the scoring values sharing that identifier;
and determining the final score according to each reference score and the reference weight corresponding to the test question identifier.
Optionally, the processor 1001 may call the program for generating an examination interface stored in the memory 1005 and further perform the following operations:
determining the question score and the expected answer time corresponding to the evaluation base index;
and determining the scoring value according to the ratio of the answer score to the question score and the ratio of the answer duration to the expected answer time.
In addition, an embodiment of the application further provides a computer-readable storage medium storing a program for generating an examination interface, wherein the program, when executed by a processor, implements the relevant steps of any embodiment of the method for generating an examination interface described above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. An examination interface generation method, characterized by comprising the following steps:
determining a difficulty compensation value associated with a subject to be examined according to historical answer data corresponding to a subject identification, and adjusting a question quantity corresponding to each knowledge point according to the difficulty compensation value, wherein the difficulty compensation value comprises a compensation weight for each difficulty of each knowledge point;
determining the compensation weight of each difficulty associated with the knowledge point according to the historical answer data, and determining a selection probability of each test question associated with the difficulty based on the compensation weight, wherein the subject to be examined is associated with a plurality of knowledge points, and each knowledge point is associated with a preset number of difficulties;
selecting target test questions from a question bank according to the question quantity and the selection probability, and generating an examination interface based on the target test questions;
after receiving answer data corresponding to an examinee identification, acquiring an evaluation basic index of the target test question;
determining a scoring value of the target test question according to the answer score, the answer duration and the evaluation basic index corresponding to the answer data;
obtaining a reference score corresponding to a test question identification according to the scoring values with the same test question identification;
and determining a final score of the examinee identification according to each reference score and the reference weight corresponding to the test question identification.
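To make the scoring steps at the end of claim 1 concrete, the following is a minimal Python sketch. The claim does not fix how scoring values sharing a test question identification are combined into a reference score, nor how the reference scores and reference weights yield the final score; the mean and the weighted sum used below, and all function and variable names, are illustrative assumptions rather than the patent's own definitions.

```python
from collections import defaultdict

def final_score(scoring_values, reference_weights):
    """Combine per-question scoring values into an examinee's final score.

    scoring_values: iterable of (question_id, scoring_value) pairs.
    reference_weights: dict mapping question_id -> reference weight.
    """
    grouped = defaultdict(list)
    for question_id, value in scoring_values:
        grouped[question_id].append(value)

    # Reference score per test question identification: taken here as the mean
    # of all scoring values sharing that identification (an assumption).
    reference_scores = {qid: sum(vals) / len(vals) for qid, vals in grouped.items()}

    # Final score: weighted sum of reference scores (also an assumption; the claim
    # only requires that reference scores and reference weights are both used).
    return sum(score * reference_weights.get(qid, 1.0)
               for qid, score in reference_scores.items())
```

For example, with scoring values [("q1", 0.8), ("q1", 0.9), ("q2", 0.5)] and reference weights {"q1": 2.0, "q2": 1.0}, the reference scores are 0.85 and 0.5, and the final score is 0.85 × 2.0 + 0.5 × 1.0 = 2.2.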
2. The examination interface generation method according to claim 1, wherein determining the difficulty compensation value associated with the subject to be examined according to the historical answer data corresponding to the subject identification comprises:
determining a subject evaluation parameter of the subject identification according to historical subject scores associated with the subject to be examined in the historical answer data;
and determining the difficulty compensation value based on a preset difficulty of the subject to be examined and the subject evaluation parameter, wherein the difficulty compensation value comprises the compensation weight for each difficulty of each knowledge point.
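Claim 2 leaves the functional form of the compensation open. A possible reading, sketched below in Python with illustrative names, computes the subject evaluation parameter as the examinee's historical mastery ratio for the subject and shifts the preset per-difficulty weights towards harder or easier questions accordingly; both the mastery ratio and the shift rule are assumptions, not taken from the patent.

```python
def difficulty_compensation(preset_difficulty_weights, historical_subject_scores,
                            full_mark=100.0):
    """Illustrative difficulty compensation value for one knowledge point.

    preset_difficulty_weights: dict like {"easy": 0.5, "medium": 0.3, "hard": 0.2}.
    historical_subject_scores: past scores for the subject to be examined.
    Returns a dict of compensation weights, one per difficulty level.
    """
    if not historical_subject_scores:
        return {d: 1.0 for d in preset_difficulty_weights}

    # Subject evaluation parameter: mean historical score as a 0..1 mastery ratio
    # (assumed form; the claim only says it is derived from historical subject scores).
    evaluation = sum(historical_subject_scores) / (len(historical_subject_scores) * full_mark)

    # Shift weight towards harder questions for a strong history and towards easier
    # questions for a weak history (an assumed rule).
    compensation = {}
    for difficulty, preset in preset_difficulty_weights.items():
        if difficulty == "hard":
            compensation[difficulty] = preset * (0.5 + evaluation)
        elif difficulty == "easy":
            compensation[difficulty] = preset * (1.5 - evaluation)
        else:
            compensation[difficulty] = preset
    return compensation
```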
3. The examination interface generation method according to claim 1, wherein adjusting the question quantity corresponding to each knowledge point according to the difficulty compensation value comprises:
determining a classification evaluation parameter of each knowledge point according to historical classification scores associated with the knowledge point in the historical answer data;
acquiring a classification difficulty and a preset classification weight associated with the knowledge point, and adjusting the preset classification weight based on the classification evaluation parameter;
and determining the question quantity according to the adjusted preset classification weight and the classification difficulty.
4. The examination interface generation method according to claim 3, wherein determining the question quantity according to the adjusted preset classification weight and the classification difficulty comprises:
determining a classification compensation weight according to the classification difficulty and the difficulty compensation value;
determining a classification relative weight according to the preset classification weight and the classification compensation weight;
and determining the question quantity according to the classification relative weight of each knowledge point.
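Claims 3 and 4 describe how the question quantity per knowledge point follows from a classification relative weight. The Python sketch below assumes the relative weight is the product of the preset classification weight and the classification compensation weight, and that quantities are allocated proportionally to the normalised relative weights; both choices, and all names, are illustrative.

```python
def allocate_question_quantity(total_questions, preset_class_weights,
                               class_compensation_weights):
    """Distribute a total question quantity across knowledge points.

    Both weight arguments are dicts keyed by knowledge point identifier.
    Returns a dict mapping each knowledge point to its question quantity.
    """
    # Classification relative weight: product of preset and compensation weights
    # (an assumed combination rule).
    relative = {kp: preset_class_weights[kp] * class_compensation_weights.get(kp, 1.0)
                for kp in preset_class_weights}
    total_weight = sum(relative.values())

    # Proportional allocation; rounding may shift the sum by a question or two,
    # which a production implementation would reconcile afterwards.
    return {kp: round(total_questions * weight / total_weight)
            for kp, weight in relative.items()}
```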
5. The examination interface generation method according to claim 1, wherein determining the compensation weight of each difficulty associated with the knowledge point according to the historical answer data, and determining the selection probability of each test question associated with the difficulty based on the compensation weight, comprises:
determining a difficulty compensation weight of each difficulty according to historical difficulty scores associated with the difficulty in the historical answer data;
determining a difficulty relative weight according to a preset difficulty weight of the difficulty and the difficulty compensation weight;
and determining the selection probability according to the difficulty relative weight of each difficulty.
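For claim 5, a straightforward reading is that the difficulty relative weight is the preset difficulty weight scaled by the difficulty compensation weight, and that the selection probabilities are the relative weights normalised to sum to one. The sketch below follows that reading; the multiplication and the normalisation, as well as all names, are assumptions.

```python
def selection_probabilities(preset_difficulty_weights, difficulty_compensation_weights):
    """Per-difficulty selection probabilities for one knowledge point.

    Both arguments are dicts keyed by difficulty level.
    """
    # Difficulty relative weight: preset weight scaled by its compensation weight.
    relative = {d: preset_difficulty_weights[d] * difficulty_compensation_weights.get(d, 1.0)
                for d in preset_difficulty_weights}
    total = sum(relative.values())
    # Normalise so the probabilities over all difficulties sum to one.
    return {d: weight / total for d, weight in relative.items()}
```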
6. The examination interface generation method according to claim 1, wherein selecting target test questions from the question bank according to the question quantity and the selection probability, and generating the examination interface based on the target test questions, comprises:
generating a probability set according to the selection probability;
selecting the question quantity of target test questions from the question bank associated with each knowledge point according to the probability set and a preset selection function;
and generating the examination interface according to the target test questions.
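Claim 6 does not specify the preset selection function. One common realisation, sketched below with illustrative names, is weighted random sampling: each draw picks a difficulty level according to the probability set and then removes one candidate question of that difficulty from the pool. This strategy is an assumption for illustration, not the patent's prescribed function.

```python
import random

def select_target_questions(question_pool, question_quantity, probability_set, rng=random):
    """Pick `question_quantity` target test questions for one knowledge point.

    question_pool: dict mapping difficulty level -> list of candidate questions.
    probability_set: dict mapping difficulty level -> selection probability.
    """
    pool = {d: list(questions) for d, questions in question_pool.items()}
    selected = []
    while len(selected) < question_quantity:
        available = [d for d in pool if pool[d]]
        if not available:
            break  # pool exhausted before the requested quantity was reached
        weights = [probability_set.get(d, 0.0) for d in available]
        if sum(weights) <= 0:
            weights = [1.0] * len(available)  # fall back to uniform selection
        difficulty = rng.choices(available, weights=weights, k=1)[0]
        selected.append(pool[difficulty].pop(rng.randrange(len(pool[difficulty]))))
    return selected
```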
7. The examination interface generation method according to claim 1, wherein determining the scoring value of the target test question according to the answer score, the answer duration and the evaluation basic index corresponding to the answer data comprises:
determining a question score and an expected answering time corresponding to the evaluation basic index;
and determining the scoring value according to the ratio of the answer score to the question score and the ratio of the answer duration to the expected answering time.
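Claim 7 ties the scoring value to two ratios: answer score over question score, and answer duration over expected answering time. How the two ratios are combined is not specified; the sketch below uses a weighted combination in which slower-than-expected answers are discounted, with the weight and the discount rule being assumptions.

```python
def scoring_value(answer_score, question_score, answer_duration, expected_duration,
                  time_weight=0.3):
    """Illustrative scoring value combining correctness and answering speed."""
    correctness = answer_score / question_score if question_score else 0.0
    time_ratio = answer_duration / expected_duration if expected_duration else 1.0

    # Answers at or faster than the expected time keep full credit; slower answers
    # are discounted in proportion to how far they exceed the expected time.
    speed_factor = 1.0 if time_ratio <= 1.0 else 1.0 / time_ratio
    return correctness * ((1.0 - time_weight) + time_weight * speed_factor)
```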
8. An examination interface generation device, comprising a memory, a processor, and an examination interface generation program stored in the memory and executable on the processor, wherein the processor, when executing the examination interface generation program, performs the steps of the examination interface generation method according to any one of claims 1 to 7.
9. A computer-readable storage medium, wherein an examination interface generation program is stored on the computer-readable storage medium, and the examination interface generation program, when executed by a processor, implements the steps of the examination interface generation method according to any one of claims 1 to 7.
CN202310202771.8A 2023-03-03 2023-03-03 Examination interface generation method, device and readable storage medium Active CN116431800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310202771.8A CN116431800B (en) 2023-03-03 2023-03-03 Examination interface generation method, device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310202771.8A CN116431800B (en) 2023-03-03 2023-03-03 Examination interface generation method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN116431800A (en) 2023-07-14
CN116431800B (en) 2023-11-03

Family

ID=87080497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310202771.8A Active CN116431800B (en) 2023-03-03 2023-03-03 Examination interface generation method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN116431800B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020023519A (en) * 2000-09-22 2002-03-29 이예범 Testing and evaluation program of the problem bank type and keeping same testing difficulty to examinee in spite of different test problem for the internet
WO2011034309A2 (en) * 2009-09-18 2011-03-24 Choi Tae-Ho Test-based electronic learning system and method for same
CN108596472A (en) * 2018-04-20 2018-09-28 贵州金符育才教育科技有限公司 A kind of the artificial intelligence tutoring system and method for natural sciences study
CN110378818A (en) * 2019-07-22 2019-10-25 广西大学 Personalized exercise recommended method, system and medium based on difficulty
CN111737448A (en) * 2020-07-21 2020-10-02 江西理工大学南昌校区 Question selection method and system based on basic subject short answer of answer duration
CN112508334A (en) * 2020-11-06 2021-03-16 华中师范大学 Personalized paper combining method and system integrating cognitive characteristics and test question text information
CN113870634A (en) * 2021-09-24 2021-12-31 华中科技大学 Intelligent volume combination method and system combined with virtual teaching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Shiyan He et al. Learning Analysis Path Construction and Empirical Research Based on Rasch Model in Electronic Schoolbag Environment. 2021 Tenth International Conference of Educational Innovation through Technology (EITT). 2022, 282-287. *
Li Yibo, Zhang Rongrong. Adaptive Learning Tuning of Test Question Score Probability and Answer Time Probability Distributions. Computer Engineering and Applications. 2006, (17), 215-217. *
Zhao Yuhang. Research and Implementation of an Adaptive Testing and Test Question Recommendation System. CNKI Outstanding Master's Theses Full-text Database. 2022, H127-294. *

Also Published As

Publication number Publication date
CN116431800A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
Muldner et al. An analysis of students’ gaming behaviors in an intelligent tutoring system: Predictors and impacts
Hindess The use of official statistics in sociology: A critique of positivism and ethnomethodology
Borsboom Latent variable theory
CN110389969A (en) The system and method for the learning Content of customization are provided
Lallé et al. Prediction of users' learning curves for adaptation while using an information visualization
von Davier TIMSS 2019 scaling methodology: Item response theory, population models, and linking across modes
KR101041672B1 (en) An Intelligent Customized Learning Service Method
Duttle Cognitive skills and confidence: Interrelations with overestimation, overplacement and overprecision
Schmera et al. On the reliability of the elements of metacommunity structure framework for separating idealized metacommunity patterns
US20120329028A1 (en) Method for intelligent personalized learning service
CN110245207B (en) Question bank construction method, question bank construction device and electronic equipment
Uittenhove et al. From lab-based to web-based behavioural research: Who you test is more important than how you test
Cranmer et al. Getting to 30 GW by 2030: Visual preferences of coastal residents for offshore wind farms on the US East Coast
CN116431800B (en) Examination interface generation method, device and readable storage medium
Zickar et al. Developing an interpretation of item parameters for personality items: Content correlates of parameter estimates
CN109800880B (en) Self-adaptive learning feature extraction system based on dynamic learning style information and application
CN110674632A (en) Method and device for determining security level, storage medium and equipment
CN112015830B (en) Question storage method suitable for adaptive learning
Timm et al. Secondary students’ reasoning on pedigree problems
Morrison Comparing elo, glicko, irt, and bayesian irt statistical models for educational and gaming data
CN113919983A (en) Test question portrait method, device, electronic equipment and storage medium
Perez et al. Implementation of a test constructor utilizing a calibrated item bank using 3PL-IRT model
Guthrie et al. Adding duration-based quality labels to learning events for improved description of students’ online learning behavior
Şendurur et al. Development of metacognitive skills inventory for internet search (MSIIS): Exploratory and confirmatory factor analyses
Samimi et al. Association between logical reasoning ability and quality of relevance judgments in crowdsourcing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant