CN111708951B - Test question recommending method and device - Google Patents


Info

Publication number
CN111708951B
CN111708951B (application CN202010579654.XA)
Authority
CN
China
Prior art keywords
user
question
test
test question
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010579654.XA
Other languages
Chinese (zh)
Other versions
CN111708951A (en
Inventor
韩文玉
汪张龙
徐俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Xunfei Qiming Technology Development Co ltd
Original Assignee
Guangdong Xunfei Qiming Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Xunfei Qiming Technology Development Co ltd filed Critical Guangdong Xunfei Qiming Technology Development Co ltd
Priority to CN202010579654.XA
Publication of CN111708951A
Application granted
Publication of CN111708951B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a test question recommending method and device, wherein the method comprises the following steps: after receiving a test question recommendation request triggered by a user, acquiring historical question-making information of the user and analyzing it to determine a user capability defect; determining a target recommendation strategy according to the user capability defect, and screening candidate recommended test questions from a candidate test question set according to the target recommendation strategy, so as to recommend the candidate recommended test questions to the user. Because the user capability defect is determined from the user's own historical question-making information, it accurately represents the problem-solving weaknesses the user showed in past practice; candidate test questions recommended on the basis of that defect are therefore targeted at those weaknesses, which improves both the accuracy of test question recommendation and the user experience.

Description

Test question recommending method and device
Technical Field
The application relates to the technical field of computers, in particular to a test question recommending method and device.
Background
With the popularization of the internet, users can practice questions using a web-based practice question recommendation system. However, a conventional test question recommendation system recommends the same questions to broad groups of users, so the recommended test questions are highly generic and lack pertinence to the individual; as a result, the accuracy of test question recommendation is low and the user experience is poor.
As an example, when a user practices English listening and speaking with such a practice question recommendation system, the system recommends English listening and speaking test questions according to the user's identity information (e.g., grade), so that the user can practice with the recommended questions. Test questions recommended in this way are quite general: they suit all students of a certain grade rather than the particular user. The recommended English listening and speaking test questions therefore lack pertinence, the accuracy of test question recommendation is low, and the user experience is poor.
Disclosure of Invention
The main object of the embodiments of the present application is to provide a test question recommending method and apparatus that can improve the accuracy of test question recommendation.
The embodiment of the application provides a test question recommending method, which comprises the following steps:
after receiving a test question recommendation request triggered by a user, acquiring historical question-making information of the user;
analyzing the historical question-making information to determine a user capability defect;
determining a target recommendation strategy according to the user capability defect;
screening candidate recommended test questions from a candidate test question set according to the target recommendation strategy;
and recommending the candidate recommended test questions to the user.
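The five steps above can be sketched in Python. This is a minimal toy illustration, not the patented implementation: every function name, record field (`loss_reason`, `score`, `trains`, etc.), and the "most frequent score-loss reason" heuristic are assumptions made here for concreteness.

```python
from collections import Counter

def get_history_info(user_id, store):
    """Fetch the user's stored question-making records (store is a hypothetical dict)."""
    return store.get(user_id, [])

def analyze_capability_defect(history):
    """Toy analysis: treat the most frequent score-loss reason as the capability defect."""
    reasons = [rec["loss_reason"] for rec in history if rec["score"] < rec["full_score"]]
    return Counter(reasons).most_common(1)[0][0] if reasons else None

def select_strategy(defect):
    """Map a capability defect to a screening predicate over candidate test questions."""
    if defect is None:
        return lambda q: True
    return lambda q: defect in q["trains"]

def recommend_questions(user_id, store, question_bank):
    history = get_history_info(user_id, store)        # step 1: acquire history
    defect = analyze_capability_defect(history)       # step 2: determine capability defect
    strategy = select_strategy(defect)                # step 3: target recommendation strategy
    return [q for q in question_bank if strategy(q)]  # steps 4-5: screen and recommend
```

With no history at all, the sketch degenerates to recommending the whole candidate set, which mirrors the untargeted behavior the background section criticizes.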
Optionally, the analyzing the historical question-making information to determine a user capability defect includes:
analyzing the historical question-making information to determine a user score-loss reason;
and analyzing the user capability defect according to the user score-loss reason.
Optionally, the analyzing the historical question-making information to determine a user score-loss reason includes:
determining, according to the test question type carried by the historical question-making information, the examination content corresponding to that test question type;
and determining the user score-loss reason according to the user answer result carried by the historical question-making information and the examination content corresponding to the test question type.
Optionally, the analyzing the user capability defect according to the user score-loss reason includes:
determining a user score-loss type according to the user score-loss reason and a first mapping relation, wherein the first mapping relation records the correspondence between each score-loss reason and each score-loss type;
and determining the problem-solving capability defect corresponding to the user score-loss type as the user capability defect.
Optionally, the determining a target recommendation strategy according to the user capability defect includes:
determining the target recommendation strategy according to the user capability defect and a second mapping relation, wherein the second mapping relation records the correspondence between problem-solving capability defects and test question recommendation strategies.
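The second mapping relation is again a lookup, this time from a capability defect to a recommendation strategy. In this sketch a "strategy" is modeled as a set of screening parameters, which is an assumption; the defect names and parameters are illustrative only:

```python
# Hypothetical second mapping relation: capability defect -> recommendation strategy,
# where a strategy is represented as screening parameters for the candidate set.
DEFECT_TO_STRATEGY = {
    "weak pronunciation": {"question_type": "read-aloud", "difficulty": "easy"},
    "weak oral expression": {"question_type": "expression", "difficulty": "medium"},
}

def target_strategy(defect):
    return DEFECT_TO_STRATEGY[defect]
```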
Optionally, the determining a target recommendation strategy according to the user capability defect includes:
judging whether the user capability defect meets a preset condition;
when the user capability defect meets the preset condition, determining the target recommendation strategy according to the user capability defect;
and when the user capability defect does not meet the preset condition, determining the target recommendation strategy according to the user capability defect, the test question theme carried by the historical question-making information, and the grade of the user.
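The branch above can be sketched as follows. Modeling the preset condition as a severity threshold is purely an assumption for illustration; the patent does not specify what the condition is:

```python
def choose_strategy(defect, severity, question_theme, grade, threshold=0.5):
    """If the capability defect meets the preset condition (modeled here as a
    severity threshold, an assumption), the strategy depends on the defect
    alone; otherwise the test question theme and the user's grade are also used."""
    if severity >= threshold:  # preset condition met
        return {"defect": defect}
    return {"defect": defect, "theme": question_theme, "grade": grade}
```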
Optionally, the obtaining historical question-making information of the user includes:
determining, as historical question-making information, each piece of user wrong-question information that is stored within a preset time period and carries the i-th test question theme; or determining, as historical question-making information, the T pieces of user wrong-question information carrying the i-th test question theme whose storage time is closest to the current time, where T is a positive integer, i is less than or equal to N, and N is the number of test question themes;
the analyzing the historical question-making information to determine a user capability defect includes:
analyzing each piece of historical question-making information separately to determine the user capability defect corresponding to that piece of historical question-making information.
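The two alternative selection rules can be sketched as one function: filter wrong-question records by theme, then either keep those inside the preset time window or keep the T most recently stored. Record fields and timestamps here are invented for the example:

```python
def select_history(wrong_records, theme, now, window=None, t_latest=None):
    """Select wrong-question records carrying the given theme: either every
    record stored within the preset time period [now - window, now], or the
    T records whose storage time is closest to now (fields are assumptions)."""
    with_theme = [r for r in wrong_records if r["theme"] == theme]
    if window is not None:
        return [r for r in with_theme if now - r["stored_at"] <= window]
    return sorted(with_theme, key=lambda r: r["stored_at"], reverse=True)[:t_latest]
```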
Optionally, the determining a target recommendation strategy according to the user capability defect and the screening candidate recommended test questions from the candidate test question set according to the target recommendation strategy include:
determining the target recommendation strategy corresponding to each piece of historical question-making information according to the user capability defect corresponding to that piece of information, and determining the candidate recommended test questions corresponding to each piece of historical question-making information according to the corresponding target recommendation strategy;
the recommending the candidate recommended test questions to the user includes:
generating a test question recommendation set corresponding to the i-th test question theme from the candidate recommended test questions corresponding to each piece of historical question-making information carrying the i-th test question theme, extracting test questions from that recommendation set according to the recommendation priority and the test question recommendation proportion corresponding to the i-th test question theme to obtain the target recommended test questions, and recommending the target recommended test questions to the user, where i is a positive integer, i is less than or equal to N, and N is the number of test question themes.
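The per-theme extraction step can be sketched as below: from each theme's recommendation set, take questions in descending recommendation priority, with the count for a theme given by the total times that theme's recommendation proportion. How priorities and proportions are obtained is not specified in the text, so both are plain inputs here:

```python
def extract_recommendations(theme_sets, proportions, total):
    """Hypothetical extraction: for each test question theme, take the
    highest-priority questions from its recommendation set, the count being
    total * that theme's recommendation proportion (rounded)."""
    picked = []
    for theme, questions in theme_sets.items():
        k = round(total * proportions[theme])
        ranked = sorted(questions, key=lambda q: q["priority"], reverse=True)
        picked.extend(ranked[:k])
    return picked
```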
Optionally, the method further comprises:
selecting a preset number of test questions from an extended test question set as test questions to be recommended, wherein the extended test question set comprises at least one test question carrying an extended theme, and an extended theme characterizes a test question theme that the user has not yet practiced;
the recommending the candidate recommended test questions to the user includes:
recommending both the candidate recommended test questions and the test questions to be recommended to the user.
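Selecting from the extended test question set can be sketched as filtering out themes the user has already practiced and then drawing a preset number of questions. Random sampling is an assumption; the text only says "selecting a preset number":

```python
import random

def pick_extension_questions(extended_set, practiced_themes, preset_number, seed=0):
    """Select a preset number of questions whose theme (an 'extended theme')
    the user has not practiced; random choice with a fixed seed is an assumption."""
    pool = [q for q in extended_set if q["theme"] not in practiced_themes]
    return random.Random(seed).sample(pool, min(preset_number, len(pool)))
```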
Optionally, the method further comprises:
acquiring a preset capability level of the user;
the determining a target recommendation strategy according to the user capability defect includes:
determining the target recommendation strategy according to the user capability defect and the preset capability level of the user.
The embodiment of the application also provides a test question recommending device, which comprises:
the information acquisition unit is used for acquiring historical question-making information of a user after receiving a test question recommendation request triggered by the user;
the capability determining unit is used for analyzing the historical question-making information to determine a user capability defect;
the strategy determining unit is used for determining a target recommendation strategy according to the user capability defect;
the test question determining unit is used for screening candidate recommended test questions from a candidate test question set according to the target recommendation strategy;
and the test question recommending unit is used for recommending the candidate recommended test questions to the user.
Optionally, the capability determining unit includes:
a reason analysis subunit, used for analyzing the historical question-making information to determine a user score-loss reason;
and a capability analysis subunit, used for analyzing the user capability defect according to the user score-loss reason.
Optionally, the reason analysis subunit is specifically configured to:
determine, according to the test question type carried by the historical question-making information, the examination content corresponding to that test question type; and determine the user score-loss reason according to the user answer result carried by the historical question-making information and the examination content corresponding to the test question type.
Optionally, the capability analysis subunit is specifically configured to:
determine a user score-loss type according to the user score-loss reason and a first mapping relation, wherein the first mapping relation records the correspondence between each score-loss reason and each score-loss type; and determine the problem-solving capability defect corresponding to the user score-loss type as the user capability defect.
Optionally, the strategy determining unit is specifically configured to:
determine the target recommendation strategy according to the user capability defect and a second mapping relation, wherein the second mapping relation records the correspondence between problem-solving capability defects and test question recommendation strategies.
Optionally, the strategy determining unit is specifically configured to:
judge whether the user capability defect meets a preset condition; when the user capability defect meets the preset condition, determine the target recommendation strategy according to the user capability defect; and when the user capability defect does not meet the preset condition, determine the target recommendation strategy according to the user capability defect, the test question theme carried by the historical question-making information, and the grade of the user.
Optionally, the information acquisition unit is specifically configured to:
determine, as historical question-making information, each piece of user wrong-question information that is stored within a preset time period and carries the i-th test question theme; or determine, as historical question-making information, the T pieces of user wrong-question information carrying the i-th test question theme whose storage time is closest to the current time, where T is a positive integer, i is less than or equal to N, and N is the number of test question themes;
the capability determining unit is specifically configured to:
analyze each piece of historical question-making information separately to determine the user capability defect corresponding to that piece of historical question-making information.
Optionally, the strategy determining unit is specifically configured to: determine the target recommendation strategy corresponding to each piece of historical question-making information according to the user capability defect corresponding to that piece of information;
the test question determining unit is specifically configured to: determine the candidate recommended test questions corresponding to each piece of historical question-making information according to the corresponding target recommendation strategy;
the test question recommending unit is specifically configured to:
generate a test question recommendation set corresponding to the i-th test question theme from the candidate recommended test questions corresponding to each piece of historical question-making information carrying the i-th test question theme, extract test questions from that recommendation set according to the recommendation priority and the test question recommendation proportion corresponding to the i-th test question theme to obtain the target recommended test questions, and recommend the target recommended test questions to the user, where i is a positive integer, i is less than or equal to N, and N is the number of test question themes.
Optionally, the apparatus further includes:
a test question selection unit, used for selecting a preset number of test questions from an extended test question set as test questions to be recommended, wherein the extended test question set comprises at least one test question carrying an extended theme, and an extended theme characterizes a test question theme that the user has not yet practiced;
the test question recommending unit is specifically configured to:
recommend both the candidate recommended test questions and the test questions to be recommended to the user.
Optionally, the apparatus further includes:
a level acquisition unit, used for acquiring a preset capability level of the user;
the strategy determining unit is specifically configured to:
determine the target recommendation strategy according to the user capability defect and the preset capability level of the user.
Based on the above technical scheme, the application has the following beneficial effects:
In the test question recommending method provided by the application, after a test question recommendation request triggered by a user is received, historical question-making information of the user is acquired and analyzed to determine a user capability defect; a target recommendation strategy is determined according to the user capability defect, and candidate recommended test questions are screened from a candidate test question set according to the target recommendation strategy, so that the candidate recommended test questions are recommended to the user. Because the user capability defect is determined from the user's own historical question-making information, it accurately represents the problem-solving weaknesses the user showed in past practice; candidate test questions recommended on the basis of that defect are therefore targeted at those weaknesses, which improves both the accuracy of test question recommendation and the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application; a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is an application scenario schematic diagram of a test question recommendation method applied to a terminal device provided in an embodiment of the present application;
fig. 2 is an application scenario schematic diagram of a test question recommendation method applied to a server according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for recommending test questions according to an embodiment of the present application;
FIG. 4 is a logic diagram of a recommendation policy selection provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a test question recommending apparatus according to an embodiment of the present application.
Detailed Description
In order to facilitate understanding of the technical solution of the present application, some basic concepts are described below.
English listening and speaking questions include listening questions and spoken questions.
The listening test questions are used for examining the user's ability to hear and understand the language; their examination form is objective questions (such as multiple choice after listening, fill-in-the-blank after listening, ordering after listening, and the like).
The spoken test questions are used for examining the user's ability to express themselves in the language, and their examination forms include read-aloud questions, question-and-answer questions, and expression questions. Read-aloud questions examine the user's pronunciation, intonation, and emotion; the examination content corresponding to a read-aloud question is whether the user's pronunciation is accurate. Question-and-answer questions examine the user's comprehension and expression; their examination content may include whether the user's pronunciation is accurate, whether the user's answer is accurate and complete, and the like. Expression questions examine the user's ability to express themselves in the language; their examination content may include whether the user's expression is accurate and complete, whether the expression is fluent, whether the pronunciation is accurate, and the like.
The test question attribute is used for describing label information of the test question, and the test question attribute can comprise at least one of test question content, test question standard answer, test question type, test question difficulty, test question theme, test question applicability grade, teaching material unit corresponding to the test question and the like.
The test question type characterizes the form of the test question. For example, the types of spoken test questions include read-aloud questions, question-and-answer questions, and expression questions.
The test question theme characterizes the main content of the test question. For example, the subject matter may be football, hobbies, ideal, etc.
The test question applicability grade characterizes the intended users of the test question. For example, when the applicability grade of a test question is senior three, the test question is suitable for senior-three students to practice.
The teaching material unit corresponding to a test question characterizes the teaching material unit related to that test question, specifically: when the test question belongs to the test questions in the teaching material, the corresponding teaching material unit is the unit that includes the test question; when the test question does not belong to the teaching material but its examined knowledge points belong to the teaching material content, the corresponding teaching material unit is the unit that includes those examined knowledge points.
The user attribute is used to describe tag information possessed by the user, and may include user identification, grade information, and the like. The user identifier is used for uniquely identifying the user, for example, when the test question recommending method provided by the embodiment of the application is applied to a test question recommending system, the user identifier may be a user account. The grade information is used to describe the grade in which the user is located. The test question recommending system refers to a system for realizing the test question recommending method.
The user answer result is used for describing the differences between the answer content provided by the user for a test question and the standard answer of that test question.
In order to solve the technical problems described in the background art, the embodiment of the present application provides a test question recommending method, which specifically includes: after receiving a test question recommendation request triggered by a user, acquiring historical question-making information of the user; analyzing the historical question-making information to determine a user capability defect; determining a target recommendation strategy according to the user capability defect; screening candidate recommended test questions from a candidate test question set according to the target recommendation strategy; and recommending the candidate recommended test questions to the user.
In addition, the execution subject of the test question recommending method is not limited; for example, the test question recommending method provided in the embodiments of the present application can be applied to a data processing device such as a terminal device or a server. The terminal device may be a smart phone, a computer, a personal digital assistant (Personal Digital Assistant, PDA), a tablet computer, or the like. The server may be a standalone server, a clustered server, or a cloud server.
In order to facilitate understanding of the technical solution provided in the embodiments of the present application, an application scenario of the test question recommendation method provided in the embodiments of the present application is described in the following by way of example with reference to fig. 1 and fig. 2, respectively. Fig. 1 is an application scenario schematic diagram of a test question recommendation method applied to a terminal device according to an embodiment of the present application; fig. 2 is an application scenario schematic diagram of a test question recommendation method applied to a server according to an embodiment of the present application.
In the application scenario shown in fig. 1, when the user 101 triggers a test question recommendation request on the terminal device 102, the terminal device 102 receives the request and performs test question recommendation to the user 101 by executing the test question recommending method provided in the embodiment of the present application. For example, the process of the terminal device 102 recommending test questions to the user 101 may specifically be: the terminal device 102 first acquires the historical question-making information of the user 101, analyzes it, and determines the user capability defect; the terminal device 102 then determines a target recommendation strategy according to the user capability defect and screens candidate recommended test questions from the candidate test question set according to the target recommendation strategy, so as to recommend them to the user 101, who can view them on the terminal device 102.
In the application scenario shown in fig. 2, when the user 201 triggers a test question recommendation request on the terminal device 202, the terminal device 202 receives the request and forwards it to the server 203, so that the server 203 performs test question recommendation to the user 201 by executing the test question recommending method provided in the embodiment of the present application. For example, the process of the server 203 recommending test questions to the user 201 may specifically be: the server 203 first acquires the historical question-making information of the user 201, analyzes it, and determines the user capability defect; it then determines a target recommendation strategy according to the user capability defect and screens candidate recommended test questions from the candidate test question set according to the target recommendation strategy, so as to send the candidate recommended test questions to the terminal device 202 for display, so that the user 201 can view them on the terminal device 202.
It should be noted that, the test question recommending method provided in the embodiment of the present application may be applied not only to the application scenario shown in fig. 1 or fig. 2, but also to other application scenarios in which test question recommendation is required, which is not specifically limited in the embodiment of the present application.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Method embodiment one
Referring to fig. 3, which is a flowchart of a test question recommending method provided in an embodiment of the present application.
The test question recommending method provided by the embodiment of the application comprises S301-S305:
S301: after receiving a test question recommendation request triggered by the user, acquire historical question-making information of the user.
The test question recommendation request is used for requesting to recommend test questions to the user, and the embodiment of the application is not limited to the implementation mode that the user triggers the test question recommendation request. For example, the user may trigger the question recommendation request by clicking the recommendation button, or may trigger the question recommendation request by opening, switching, or refreshing the question recommendation page.
The historical question making information is used for describing relevant information of test questions the user has completed (i.e., done test questions), and may carry information such as the user answer result, the test question score, and at least one test question attribute. In addition, each test question completed by the user corresponds to one piece of historical question making information, so that each piece of historical question making information can accurately represent the relevant information of the corresponding completed test question; the embodiment of the application does not limit the number of pieces of historical question making information.
In some cases, in order to improve the accuracy of test question recommendation, the relevant information of test questions completed by the user closest to the current time may be determined as the historical question making information. Based on this, the embodiment of the present application further provides two implementations of S301, which are described below respectively.
As a first embodiment, S301 may specifically be: and determining the question information of each user stored in the preset time period as historical question information.
The preset time period characterizes the historical time period that is closest to the current time and whose length equals a preset duration; that is, the preset time period is the interval [current time − preset duration, current time], ending at the current time and starting at the current time minus the preset duration. The preset duration can be set in advance.
Note that the embodiment of the present application does not limit the current time: the current time may refer to the execution time of S301, or may be the time at which the test question recommendation request is received. For example, assume that the preset duration is set to 3 hours in advance, and the test question recommendation request is received at 10:00 on 2 January 2020; based on this assumption, the preset time period is [7:00 on 2 January 2020, 10:00 on 2 January 2020].
Based on the above-described content related to the first embodiment of S301, after receiving the test question recommendation request, each piece of user question information stored in the preset period of time may be determined as historical question information, so that the test question recommendation can be performed based on the historical question information. The preset time period is a historical time period closest to the current time, so that the question making information of each user stored in the preset time period can more accurately represent the knowledge grasping level of the user, and the test questions recommended based on the question making information of the user can be closer to the test question demands of the user at the current time, so that the accuracy of the test question recommendation is improved.
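The time-window filter described in this first embodiment can be sketched as follows. This is a minimal illustration only: the record structure (a list of dicts with a `stored_at` timestamp) and the function name are assumptions for the example, not part of the patent.

```python
from datetime import datetime, timedelta

def select_recent_records(records, now, preset_duration):
    """Keep only the question making records stored within [now - preset_duration, now]."""
    start = now - preset_duration
    return [r for r in records if start <= r["stored_at"] <= now]

# Usage with the 3-hour example from the text:
now = datetime(2020, 1, 2, 10, 0)
records = [
    {"id": 1, "stored_at": datetime(2020, 1, 2, 6, 30)},  # before 7:00, outside the window
    {"id": 2, "stored_at": datetime(2020, 1, 2, 8, 15)},  # inside the window
    {"id": 3, "stored_at": datetime(2020, 1, 2, 9, 59)},  # inside the window
]
historical = select_recent_records(records, now, timedelta(hours=3))
```

Only records 2 and 3 fall within [7:00, 10:00] and are kept as historical question making information.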
As a second embodiment, S301 may specifically be: determine the R pieces of user question making information whose storage time is closest to the current time as the historical question making information.
R is a positive integer, and can be preset according to application scenes. For example, when the test question recommending method provided by the embodiment of the application is applied to a test question recommending system, R can be set according to the total number of test questions in the test question bank and/or the use frequency of the system. The test question library is a database used for storing all test questions in the test question recommendation system. The system use frequency refers to the frequency with which the test question recommending system is accessed.
Based on the above-described content related to the second embodiment of S301, after the test question recommendation request is received, the R pieces of user question making information whose storage time is closest to the current time may be determined as historical question making information. These R pieces of user question making information can more accurately represent the user's knowledge mastery level, so that the test questions recommended based on them can be closer to the user's test question needs at the current time, which improves the accuracy of test question recommendation. Moreover, because R is preset, the R pieces of user question making information can ensure that the amount of data on which the recommendation is based is sufficient and effective, which is helpful for further improving the accuracy of test question recommendation.
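The second embodiment (take the R most recently stored records) can be sketched as a sort-and-slice. The record fields and function name are illustrative assumptions; ISO-formatted timestamp strings are used here because they sort chronologically.

```python
def select_latest_r(records, r):
    """Return the r question making records whose storage time is closest to the
    current time, i.e., the r most recently stored records."""
    return sorted(records, key=lambda rec: rec["stored_at"], reverse=True)[:r]

records = [
    {"id": 1, "stored_at": "2020-01-02T06:30"},
    {"id": 2, "stored_at": "2020-01-02T09:59"},
    {"id": 3, "stored_at": "2020-01-02T08:15"},
]
latest_two = select_latest_r(records, 2)  # records 2 and 3 are the most recent
```

In practice R would be preset according to the test question bank size and/or system use frequency, as the text describes.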
In addition, in order to improve the test question recommending efficiency, only the acquired wrong question information of each user can be determined to be historical question making information, so that the wrong question information of each user only needs to be analyzed later. Based on this, the historical question information may be user wrong question information. The user wrong question information is used for describing related information of the user answering the wrongly-made test questions.
In some cases, in order to ensure the comprehensiveness of test question recommendation, user wrong question information corresponding to user test questions under different test question topics can be determined as historical question information, so that the basis materials of test question recommendation are diversified. Based on this, the embodiment of the present application further provides two other implementations of S301, which are described below respectively.
As a third embodiment, S301 may specifically be: determine the user wrong question information carrying the 1st test question topic, the user wrong question information carrying the 2nd test question topic, …, and the user wrong question information carrying the N-th test question topic stored within the preset time period as historical question making information. Wherein N is the number of test question topics. It should be noted that, for the relevant content of the preset time period, please refer to the above.
As an example, assume that the first to eighth pieces of user wrong question information are stored within the preset time period, the first to third pieces all carry the 1st test question topic, the fourth and fifth pieces all carry the 2nd test question topic, and the sixth to eighth pieces all carry the 3rd test question topic. Based on this assumption, S301 may specifically be: determine the first to eighth pieces of user wrong question information as historical question making information.
Therefore, the historical question making information acquired in the third embodiment of S301 covers the user wrong question information corresponding to the user wrong questions under the multiple test question topics completed in the preset time period, so that the subsequent test questions recommended based on the user wrong question information related to the multiple test question topics can cover the user missed points of the user under the multiple test question topics, and the recommended test questions are more suitable for the user, so that the accuracy of recommending the test questions is effectively improved.
As a fourth embodiment, S301 may specifically be: and respectively determining the T pieces of user wrong information carrying the 1 st test question topic, the T pieces of user wrong information carrying the 2 nd test question topic, … … and the T pieces of user wrong information carrying the N th test question topic which are closest to the current time as historical question making information. Wherein N is the number of themes of the test question.
T is a positive integer, and T can be preset according to application scenes. In addition, when the test question recommending method provided by the embodiment of the application is applied to a test question recommending system, T can be set according to the total number of test questions in the test question library and/or the use frequency of the system. The test question library is a database used for storing all test questions in the test question recommendation system. The system use frequency refers to the frequency with which the test question recommending system is accessed.
As an example, when T is 50, S301 may specifically be: determine the 50 pieces of user wrong question information carrying the 1st test question topic whose storage time is closest to the current time as historical question making information; determine the 50 pieces of user wrong question information carrying the 2nd test question topic whose storage time is closest to the current time as historical question making information; …; and determine the 50 pieces of user wrong question information carrying the N-th test question topic whose storage time is closest to the current time as historical question making information.
Therefore, the historical question making information acquired in the fourth embodiment of S301 covers T user wrong question information under multiple test question topics recently completed by the user, so that the recommended test questions based on the user wrong question information related to the multiple test question topics can cover user misclassification points of the user under the multiple test question topics, thereby making the recommended test questions more suitable for the user, and effectively improving the accuracy of the recommended test questions. And because T is preset, the problem information of T users under each test problem theme can also ensure that the data volume of the basis materials of the test problem recommendation under the corresponding test problem theme is sufficient and effective, thereby being beneficial to further improving the accuracy of the test problem recommendation.
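The fourth embodiment (T most recent wrong-question records per test question topic) can be sketched by grouping records by topic and taking the T latest in each group. The record structure and names are illustrative assumptions.

```python
from collections import defaultdict

def select_latest_t_per_topic(wrong_records, t):
    """For each test question topic, keep the t wrong question records whose
    storage time is closest to the current time."""
    by_topic = defaultdict(list)
    for rec in wrong_records:
        by_topic[rec["topic"]].append(rec)
    selected = []
    for recs in by_topic.values():
        recs.sort(key=lambda r: r["stored_at"], reverse=True)  # most recent first
        selected.extend(recs[:t])
    return selected

wrong_records = [
    {"id": 1, "topic": "travel", "stored_at": "2020-01-01T10:00"},
    {"id": 2, "topic": "travel", "stored_at": "2020-01-02T10:00"},
    {"id": 3, "topic": "school", "stored_at": "2020-01-01T12:00"},
]
history = select_latest_t_per_topic(wrong_records, 1)  # one latest record per topic
```

With T = 1, record 2 (the most recent "travel" record) and record 3 (the only "school" record) are selected, so every topic the user answered wrongly is represented.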
Based on the above-mentioned related content of S301, in the embodiment of the present application, after the test question recommendation request triggered by the user is received, the user's recent historical question making information may be obtained (in particular, the user wrong question information under at least one test question topic that the user has recently completed), so that the causes of the user's score loss can subsequently be analyzed based on the historical question making information or the user wrong question information.
S302: and analyzing the historical question information to determine the user capacity defect.
The user capability defect is used for describing the defect in the user's question making capability, and the user capability defects corresponding to different application fields are different. For example, for the field of spoken questions, user capability flaws may include flaws in knowledge point mastery, flaws in pronunciation, flaws in presentation, and the like.
In some cases, because the user capability defect can be determined according to the causes of the user's score loss, after the historical question making information is obtained, the user score loss cause may be analyzed first and the user capability defect then determined from it. Based on this, the present embodiment provides an implementation for determining the user capability defect (i.e., S302), which specifically includes S3021 and S3022:
S3021: and analyzing the historical question information to determine the user score loss reason.
The user score loss reason is used for describing the reason that the user does not give an accurate answer to the test question.
In addition, different question types in different application fields correspond to different failure division reasons. For example, as shown in table 1, for the field of spoken test questions, different test question types correspond to different failure causes due to different examination contents of the different test question types.
It should be noted that the capital letters in table 1 are identifiers of the score loss causes; for example, A is used to uniquely identify the score loss cause "a certain phoneme is pronounced inaccurately when answering questions".
[Table 1 image not reproduced in this text]
TABLE 1 Score loss causes corresponding to different test question types in the field of spoken test questions
In addition, because different test question types correspond to different score losing reasons, the score losing reason of the user corresponding to the historical question information can be determined based on the test question types carried by the historical question information. Based on this, the embodiment of the present application provides an implementation manner for analyzing the user score loss cause (i.e. S3021), specifically including S30211-S30212:
S30211: determine the examination content corresponding to the test question type according to the test question type carried by the historical question making information.
In some cases, in order to enable determination of examination contents based on examination question types, a correspondence between each examination question type and each examination content may be established in advance, so that examination contents corresponding to examination question types carried by historical examination question information can be searched and determined by using the correspondence later. Based on this, the embodiment of the present application further provides an implementation manner of S30211, which specifically is: and determining examination contents corresponding to the test question types according to the third mapping relation and the test question types carried by the historical question making information.
The third mapping relation is used for recording the corresponding relation between the test question type and the examination content, and different application fields correspond to different third mapping relations. For example, the third mapping relation in the field of spoken test questions may include the correspondence relation between each test question type and each examination content shown in table 2.
Based on the above-mentioned related content of S30211, after the history question information is obtained, the examination content may be determined according to the test question type carried by the history question information, which specifically includes: searching examination contents corresponding to the examination question types carried by the historical examination question information in a third mapping relation which is constructed in advance, and taking the examination contents as the examination contents corresponding to the examination question types, so that the user score loss reasons can be analyzed based on the examination contents later.
[Table 2 image not reproduced in this text]
Table 2 Examination contents corresponding to different test question types in the field of spoken test questions
S30212: and determining the user score loss reason according to the user answer result carried by the historical question making information and the examination content corresponding to the test question type.
In some cases, in order to analyze the user score loss cause based on the user answer result and the examination content, the judgment conditions for each score loss cause under each examination content may be set in advance. Based on this, the embodiment of the present application further provides an implementation of S30212, which specifically is: first acquire the score loss cause judgment condition set corresponding to the examination content of the test question type, and then determine the user score loss cause according to the user answer result carried by the historical question making information and the score loss cause judgment condition set. The score loss cause judgment condition set is used for recording the judgment conditions met by each score loss cause corresponding to the test question type.
Therefore, in the embodiment of the present application, after the examination content corresponding to the test question type carried by the historical question making information is determined, the score loss cause judgment condition set corresponding to that examination content may first be looked up from the fourth mapping relation; the answer result carried by the historical question making information is then compared with each judgment condition in the set, and the score loss cause corresponding to each judgment condition met by the user answer result is determined as a user score loss cause.
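The chain of S30211-S30212 (test question type, via the third mapping relation, to examination content; examination content, via the fourth mapping relation, to judgment conditions; conditions evaluated against the answer result) can be sketched as below. All mapping contents, field names, thresholds, and cause identifiers here are illustrative assumptions; the real mappings are application-specific.

```python
# Hypothetical third mapping relation: test question type -> examination content.
THIRD_MAPPING = {"read_aloud": "pronunciation"}

# Hypothetical fourth mapping relation: examination content -> score loss cause
# judgment condition set, as (cause identifier, judgment condition) pairs.
FOURTH_MAPPING = {
    "pronunciation": [
        ("A", lambda result: result["phoneme_score"] < 60),  # cause A: phoneme pronounced inaccurately
        ("J", lambda result: result["fluency_score"] < 60),  # cause J: reading not fluent
    ],
}

def determine_score_loss_causes(record):
    """Map the record's test question type to its examination content, then return
    every score loss cause whose judgment condition the answer result meets."""
    content = THIRD_MAPPING[record["question_type"]]
    condition_set = FOURTH_MAPPING[content]
    return [cause for cause, met in condition_set if met(record["answer_result"])]

causes = determine_score_loss_causes(
    {"question_type": "read_aloud",
     "answer_result": {"phoneme_score": 55, "fluency_score": 80}}
)
```

Here only the phoneme condition is met, so the single cause "A" is returned; a record meeting several conditions would yield several causes.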
In addition, the embodiment of the application is not limited to the analysis method of the user score loss reasons, and the embodiment of the application can be implemented by adopting any existing or future method capable of analyzing the user score loss reasons from the historical question making information.
In addition, when a plurality of pieces of historical question making information are acquired in S301, S3021 may specifically be: analyze each piece of historical question making information separately to determine the user score loss cause corresponding to each piece. For example, when R pieces of historical question making information are obtained in S301 (R being a positive integer), S3021 may specifically be: analyze the 1st piece of historical question making information to determine the user score loss cause corresponding to the 1st piece; … (and so on); analyze the R-th piece of historical question making information to determine the user score loss cause corresponding to the R-th piece. Therefore, the embodiment of the application can analyze each test question (especially each wrong question) completed by the user one by one, and determine the user score loss cause corresponding to each test question (especially each wrong question).
Based on the above-mentioned related content of S3021, after the history question information is obtained, the user score loss cause may be analyzed from the history question information, so that the problem solving capability defect of the user in the history question making process may be determined based on the user score loss cause.
S3022: and analyzing the user capacity defect according to the user score loss reason.
The user capability defect is used for describing the defect in the user's question making capability, and the user capability defects corresponding to different application fields are different. For example, for the field of spoken questions, user capability flaws may include flaws in knowledge point mastery, flaws in pronunciation, flaws in presentation, and the like.
In addition, for any application field, the score loss causes of different users may belong to different score loss types, and may also belong to the same score loss type. For ease of understanding, the following description is provided in connection with examples.
As an example, as shown in table 3, for the field of spoken test questions, score loss causes such as "inaccurate pronunciation of a certain phoneme in answering", "inaccurate pronunciation of a certain phoneme in expression", and "inaccurate pronunciation of a certain phoneme in reading" all belong to the class I score loss type; score loss causes such as "wrong answer to key information", "incomplete expression of information", and "unsmooth expression caused by incomplete answer content" all belong to the class II score loss type; score loss causes such as "person-name error" belong to the class III score loss type; score loss causes such as "tense error" belong to the class IV score loss type; score loss causes such as "missing sentence component" belong to the class V score loss type; score loss causes such as "inaccurate expression of meaning" belong to the class VI score loss type; and score loss causes such as "complete answer content but unsmooth expression" belong to the class VII score loss type.
[Table 3 image not reproduced in this text]
TABLE 3 Problem solving capability defects corresponding to different score loss causes in the field of spoken test questions
It should be noted that the Roman numerals in table 3 are identifiers of the score loss types; for example, "I" is used to identify the score loss type "pronunciation inaccuracy". The lower-case letters in table 3 are identifiers of the problem solving capability defects; for example, "a" is used to uniquely identify the problem solving capability defect "the user's pronunciation has a problem".
In addition, different score loss types correspond to different problem solving capability defects (as shown in table 3). It should be noted that, because "unsmooth expression" covers both "unsmooth expression caused by incomplete answer content" and "complete answer content but unsmooth expression", the score loss cause "unsmooth expression" may belong to either the class II or the class VII score loss type, so that this score loss cause corresponds to two problem solving capability defects (i.e., b and g in table 3).
Based on the above, the correspondence between the different score loss causes and the score loss types, and the correspondence between the different score loss types and the different problem solving capability defects, can be established in advance, so that the user capability defect corresponding to the user score loss cause can be determined based on these two correspondences. Based on this, the present application embodiment also provides an implementation of S3022, which specifically includes steps S30221-S30222:
S30221: determine the user score loss type according to the user score loss cause and the first mapping relation.
The user score loss type is used for representing the score loss type to which the user score loss cause belongs.
The first mapping relation is used for recording the correspondence between each score loss cause and each score loss type. For example, as shown in table 3, for the field of spoken test questions, the first mapping relation includes the correspondence between the A, J, and K score loss causes and the class I score loss type, between the B, C, G, and I score loss causes and the class II score loss type, between the D score loss cause and the class III score loss type, between the E score loss cause and the class IV score loss type, between the F score loss cause and the class V score loss type, between the H score loss cause and the class VI score loss type, and between the I score loss cause and the class VII score loss type.
It can be seen that, in the embodiment of the present application, after the user score loss cause is obtained, the score loss type corresponding to it may first be queried from the first mapping relation as the user score loss type, so that the user capability defect can then be determined based on the user score loss type.
S30222: determine the problem solving capability defect corresponding to the user score loss type as the user capability defect.
In the embodiment of the present application, since each score loss type corresponds to one kind of problem solving capability defect, a fifth mapping relation recording the correspondence between different score loss types and different problem solving capability defects may be pre-established, so that after the user score loss type is determined, the problem solving capability defect corresponding to it can be looked up directly from the fifth mapping relation as the user capability defect.
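S30221-S30222 reduce to two chained table lookups. The sketch below uses the identifier scheme of table 3 (letters for causes and defects, Roman numerals for score loss types), but the concrete mapping entries are assumptions for illustration, not the patent's actual tables.

```python
# Hypothetical first mapping relation: score loss cause -> score loss type.
FIRST_MAPPING = {"A": "I", "J": "I", "K": "I", "B": "II", "D": "III", "E": "IV"}

# Hypothetical fifth mapping relation: score loss type -> problem solving capability defect.
FIFTH_MAPPING = {"I": "a", "II": "b", "III": "c", "IV": "d"}

def determine_capability_defect(score_loss_cause):
    """S30221: look up the score loss type for the cause; S30222: look up the
    problem solving capability defect for that type."""
    score_loss_type = FIRST_MAPPING[score_loss_cause]
    return FIFTH_MAPPING[score_loss_type]

defect = determine_capability_defect("A")  # a pronunciation cause maps to defect "a"
```

Because every type maps to exactly one defect, the composition is a plain function from cause to defect; causes that can belong to two types (such as "unsmooth expression" in the text) would instead return both defects.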
In some cases, when a plurality of score loss causes are acquired in S3021, S3022 may specifically be: analyze the user score loss cause corresponding to each piece of historical question making information separately to obtain the user capability defect corresponding to each piece. For example, S3022 may specifically be: analyze the user score loss cause corresponding to the 1st piece of historical question making information to obtain the user capability defect corresponding to the 1st piece; … (and so on); analyze the user score loss cause corresponding to the R-th piece of historical question making information to obtain the user capability defect corresponding to the R-th piece. Wherein R is a positive integer, being the number of pieces of historical question making information. Therefore, the embodiment of the application can analyze the score loss causes one by one and determine the user capability defect underlying each score loss cause.
Based on the above-mentioned related content of S3022, after the user score losing cause is obtained, the problem solving capability defect (i.e., the user capability defect) existing in the user can be analyzed from the user score losing cause, so that the corresponding test problem recommendation can be performed for the problem solving capability defect existing in the user, which is beneficial for the user to quickly overcome the problem solving capability defect existing in the user.
In some cases, when a plurality of pieces of historical question making information are acquired in S301, S302 may specifically be: analyze each piece of historical question making information separately to determine the user capability defect corresponding to each piece. For example, S302 may specifically be: analyze the 1st piece of historical question making information to determine the user capability defect corresponding to the 1st piece; … (and so on); analyze the R-th piece of historical question making information to determine the user capability defect corresponding to the R-th piece. Wherein R is a positive integer, being the number of pieces of historical question making information. Therefore, the embodiment of the application can analyze each test question (especially each wrong question) completed by the user one by one, and determine the user capability defect corresponding to each test question (especially each wrong question).
Based on the above-mentioned related content of S302, after the historical question making information of the user is obtained, the question solving capability defects of the user in the historical question making process can be analyzed from the historical question making information, so that the user can be subjected to test question recommendation based on the question solving capability defects of the user, and the test questions recommended to the user can help the user to overcome the question solving capability defects.
S303: and determining a target recommendation strategy according to the user capability defect.
The target recommendation policy refers to a test question recommendation policy used to overcome the user's capability deficiency.
In addition, in each application field, different test question recommendation strategies should be adopted for different question solving capability defects. For example, as shown in table 4, for the spoken test question field, each question-solving capability defect corresponds to a different test question recommendation strategy, respectively.
[Table 4 image not reproduced in this text]
TABLE 4 Correspondence between problem solving capability defects and test question recommendation strategies in the field of spoken test questions
In table 4, "one lower grade" means one grade lower than the grade in which the user is located; "same grade" means the same grade as the grade in which the user is located; "current test question topic" means the same test question topic as the one carried in the user question making information; and "same learning stage" means the same learning stage as the one in which the user is located. The learning stage describes the user's stage of schooling; for example, the learning stage may be primary school, middle school, high school, or university.
In order to facilitate understanding of the correspondence between the defect of the solving capability and the recommendation policy of the test questions in table 4, the following description will respectively explain the setting reasons of the correspondence:
because the a problem solving capability defect represents that the user pronounces a certain phoneme inaccurately, the test question recommendation strategy corresponding to the a defect is strategy 1, and strategy 1 can be used to correct and strengthen the user's pronunciation;
because the b problem solving capability defect represents that the user lacks mastery of basic knowledge, the test question recommendation strategy corresponding to the b defect is strategy 2, and strategy 2 can motivate the user to keep making progress while building the user's learning confidence;
because the c or d problem solving capability defect represents that the user has a usage problem with a specific grammar content (such as person names or tenses), the test question recommendation strategy corresponding to the c or d defect is strategy 5, and strategy 5 can be used to correct and strengthen the user's use of that specific grammar content;
because the e problem solving capability defect characterizes that the user has a usage problem with the grammar under the current test question topic, the test question recommendation strategy corresponding to the e defect is strategy 4, and strategy 4 can be used to train the user's use of that grammar;
because the f problem solving capability defect characterizes that the user easily confuses knowledge points under similar test question topics, the test question recommendation strategy corresponding to the f defect is strategy 3, and strategy 3 can be used to correct and strengthen the knowledge points under each test question topic;
because the g problem solving capability defect characterizes that the user has mastered the knowledge points well and only expresses them unsmoothly, the test question recommendation strategy corresponding to the g defect is strategy 6, and strategy 6 can be used to improve the fluency of the user's spoken expression.
It can be seen that, since different test question recommendation strategies should be adopted for different question solving capability defects, the correspondence between the different question solving capability defects and the different test question recommendation strategies can be pre-established (as shown in table 4), so that the target recommendation strategy corresponding to the user capability defect can be directly determined based on the correspondence. Based on this, the embodiment of the present application further provides an implementation manner of S303, which specifically is: and determining a target recommendation strategy according to the user capability defect and the second mapping relation.
The second mapping relation is used for recording the correspondence between problem solving capability defects and test question recommendation strategies. For example, for the field of spoken test questions, the second mapping relation may include the correspondence between the a problem solving capability defect and strategy 1, the b defect and strategy 2, the c or d defect and strategy 5, the e defect and strategy 4, the f defect and strategy 3, and the g defect and strategy 6.
In some cases, some target recommendation strategies (such as strategy 1 and strategy 5) can be determined from the user capability defect alone, while others (such as strategy 2 and strategy 3) must be determined jointly from the user capability defect, the test question topic carried by the historical question-making information, and the user's grade. Based on this, the embodiment of the present application provides an implementation of S303, which is specifically: judging whether the user capability defect satisfies a preset condition; if so, determining the target recommendation strategy according to the user capability defect; if not, determining the target recommendation strategy according to the user capability defect, the test question topic carried by the historical question-making information, and the user's grade.
The preset condition may be set in advance; for example, it may be that the user capability defect belongs to a set of preset capability defects. For the spoken-question field, the preset capability defects may include "the user's pronunciation has problems", "the user is not proficient in the use of person", and "the user is not proficient in the use of tense".
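The conditional implementation of S303 just described can be sketched as follows. The defect names and the string-valued strategies are illustrative assumptions; the point is only the branch on the preset condition:

```python
# Defects for which the strategy follows from the defect alone (assumed names).
PRESET_CAPABILITY_DEFECTS = {"pronunciation", "person_usage", "tense_usage"}

def determine_target_strategy(defect: str, topic: str = "", grade: str = "") -> str:
    """S303: branch on whether the defect satisfies the preset condition."""
    if defect in PRESET_CAPABILITY_DEFECTS:
        # Preset condition met: the defect alone fixes the strategy.
        return f"strategy<{defect}>"
    # Otherwise the question topic and the user's grade refine the choice.
    return f"strategy<{defect},{topic},{grade}>"
```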
In some cases, when a plurality of user capability defects are acquired in S302, S303 may specifically be: determining the target recommendation strategy corresponding to each piece of historical question-making information according to the user capability defect corresponding to that piece. For example, S303 may specifically be: determining the target recommendation strategy corresponding to the 1st piece of historical question-making information according to the user capability defect corresponding to the 1st piece; … (and so on); and determining the target recommendation strategy corresponding to the R-th piece according to the user capability defect corresponding to the R-th piece. Here R is a positive integer equal to the number of pieces of historical question-making information. In this way, the embodiment of the present application can analyze each user capability defect one by one and determine a target recommendation strategy for overcoming each of them.
Based on the above content related to S303, after the user capability defect is obtained, the test question recommendation strategy corresponding to it can be looked up in the second mapping relation and used as the target recommendation strategy, which can then be used for test question recommendation.
S304: screening candidate recommended test questions from the candidate test question set according to the target recommendation strategy.
The candidate test question set may be preset, and it may be determined from the questions the user answered incorrectly and the questions the user has not yet attempted.
The candidate recommended test questions are the recommended questions used to overcome the causes of the user's score loss.
In addition, since different target recommendation strategies correspond to different candidate recommended test questions, when a plurality of target recommendation strategies are determined in S303, S304 may specifically be: determining the candidate recommended test questions corresponding to each piece of historical question-making information according to the target recommendation strategy corresponding to that piece. As an example, when R target recommendation strategies corresponding to the historical question-making information are determined in S303 (R being a positive integer), S304 may specifically be: determining the candidate recommended test questions corresponding to the 1st piece of historical question-making information according to the target recommendation strategy corresponding to the 1st piece; … (and so on); and determining the candidate recommended test questions corresponding to the R-th piece according to the target recommendation strategy corresponding to the R-th piece. In this way, question recommendation is carried out with the target recommendation strategy corresponding to each piece of historical question-making information, so that the candidate recommended test questions for each piece can help the user overcome the corresponding user capability defect.
Based on the above content related to S304, after the target recommendation strategy corresponding to the user capability defect is determined, candidate recommended test questions can be screened out of the candidate test question set according to that strategy, so that the candidate recommended test questions can be used to overcome the user capability defect, which in turn helps the user overcome the causes of score loss.
S305: recommending the candidate recommended test questions to the user.
In the embodiment of the present application, after the candidate recommended test questions are acquired, all or part of them can be recommended to the user, so that the user can overcome the causes of score loss by practising the candidate recommended test questions.
In some cases, because there are too many candidate recommended test questions, the target recommended test questions (i.e., the questions finally recommended to the user) may be determined according to the test question topics carried by the historical question-making information. Based on this, the embodiment of the present application further provides an implementation of S305, described below by taking as an example the recommendation of the candidate recommended test questions corresponding to each piece of historical question-making information carrying the i-th test question topic, where i is a positive integer, i ≤ N, and N is the number of test question topics. In this case S305 specifically includes S3051-S3053:
S3051: and generating a test question recommendation set corresponding to the ith test question subject according to the candidate recommendation test questions corresponding to each history question making information carrying the ith test question subject.
S3052: and extracting the test questions from the test question recommendation set corresponding to the ith test question theme according to the recommendation priority and the test question recommendation proportion corresponding to the ith test question theme, so as to obtain each target recommendation test question.
S3053: and recommending the target recommended test questions to the user.
The recommendation priority corresponding to the i-th test question topic describes the recommendation order of the test question recommendation set corresponding to the i-th test question topic: the higher the recommendation priority, the earlier that topic's questions are recommended.
In addition, the recommendation priority corresponding to the i-th test question topic can be determined from each piece of historical question-making information carrying the i-th test question topic, specifically in the following three steps:
The first step: determining the test question score value corresponding to the i-th test question topic according to the test question scores carried by each piece of historical question-making information carrying the i-th test question topic.
The embodiment of the present application does not limit how the test question score value corresponding to the i-th test question topic is calculated. For example, it can be obtained by summing (or averaging) the test question scores carried by each piece of historical question-making information carrying the i-th test question topic. As another example, the test question score ratio carried by each such piece of historical question-making information may be calculated first, and those score ratios then summed (or averaged) to obtain the test question score value. The test question score ratio is the ratio of the test question score to the full mark of the test question.
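The first step can be sketched as follows, using the average-of-score-ratios variant mentioned above (summing instead of averaging would equally fit the text):

```python
def topic_score_value(scores, full_marks):
    """First step: test question score value for one topic, computed as the
    average of the score ratios (score / full mark) carried by the topic's
    historical question-making records."""
    ratios = [s / f for s, f in zip(scores, full_marks)]
    return sum(ratios) / len(ratios)
```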
The second step: sorting the test question score values corresponding to the 1st test question topic through the N-th test question topic according to a preset sorting criterion, and determining the rank number corresponding to the i-th test question topic.
The preset sorting criterion may be set in advance; for example, it may sort from large to small, or from small to large.
The third step: determining the recommendation priority corresponding to the i-th test question topic according to the preset sorting criterion and the rank number corresponding to the i-th test question topic.
In the embodiment of the present application, after the rank number corresponding to the i-th test question topic is obtained, the recommendation priority can be determined from the preset sorting criterion and that rank number, according to the following principle: when the preset sorting criterion sorts from large to small, the larger the rank number of the i-th test question topic, the higher its recommendation priority; when the preset sorting criterion sorts from small to large, the smaller the rank number of the i-th test question topic, the higher its recommendation priority.
That is, the lower the test question score value corresponding to the i-th test question topic, the higher its recommendation priority; and the higher the score value, the lower the priority.
Based on the above three steps, the recommendation priority corresponding to the i-th test question topic can be determined from the historical question-making information carrying the i-th test question topic, and in particular from the test question scores that this information carries.
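The three steps above can be sketched in one function. The numbering convention (priority 1 = recommended first) is an assumption; the logic implements the small-to-large sorting variant, so topics with lower score values get higher priority, as the text requires:

```python
def recommendation_priorities(score_values):
    """Assign a priority number to each topic: sort score values from small
    to large and give rank 1 (the highest priority) to the lowest value.
    Returns a list of priority numbers aligned with `score_values`."""
    order = sorted(range(len(score_values)), key=lambda i: score_values[i])
    priorities = [0] * len(score_values)
    for rank, idx in enumerate(order, start=1):
        priorities[idx] = rank
    return priorities
```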
The test question recommendation proportion corresponding to the i-th test question topic represents the proportion of questions screened out of the test question recommendation set corresponding to the i-th test question topic.
In addition, the test question recommendation proportion corresponding to the i-th test question topic can be determined jointly from each piece of historical question-making information carrying the i-th test question topic and the recommendation priority corresponding to the topic, specifically as follows: judging whether the recommendation priority corresponding to the i-th test question topic satisfies a recommendation condition; if so, determining the test question score ratios carried by each piece of historical question-making information carrying the i-th test question topic according to the test question scores that this information carries, and then calculating the test question recommendation proportion corresponding to the i-th test question topic from those score ratios; if not, setting the test question recommendation proportion corresponding to the i-th test question topic to 0.
The recommendation condition may be preset; for example, it may be that the recommendation priority is higher than a preset priority (e.g., higher than the 5th priority).
In addition, the test question recommendation proportion corresponding to the i-th test question topic can be obtained by a weighted sum of the test question score ratios carried by each piece of historical question-making information carrying the i-th test question topic.
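The proportion computation just described can be sketched as follows. The uniform default weights and the cutoff interpretation (a priority number of at most 5 counts as "higher than the 5th priority") are assumptions; the text leaves both open:

```python
def recommendation_proportion(priority, score_ratios, weights=None, cutoff=5):
    """Test question recommendation proportion for one topic: 0 when the
    priority fails the recommendation condition, otherwise a weighted sum
    of the score ratios carried by the topic's history records."""
    if priority > cutoff:
        # Recommendation condition not met: the topic contributes nothing.
        return 0.0
    if weights is None:
        weights = [1.0 / len(score_ratios)] * len(score_ratios)
    return sum(w * r for w, r in zip(weights, score_ratios))
```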
Based on the above content of S3051-S3053, the recommendation priority, the test question recommendation proportion, and the test question recommendation set corresponding to the i-th test question topic can be obtained first, and test questions then extracted from that set according to the priority and the proportion, thereby obtaining each target recommended test question.
It should be noted that, in the embodiment of the present application, the target recommended test questions corresponding to any test question topic (for example, the 1st test question topic, …, the N-th test question topic) can be determined according to S3051-S3052.
According to the steps of the test question recommendation method introduced above, in the test question recommendation method provided by the present application, after a test question recommendation request triggered by the user is received, the user's historical question-making information is acquired and analyzed to determine the causes of the user's score loss; candidate recommended test questions are then determined from those causes and recommended to the user. Because the causes of score loss are determined from the user's historical question-making information, they accurately characterize the errors the user made during historical question-making, and the candidate recommended test questions based on them are targeted at those errors, which improves the accuracy of test question recommendation and thus the user experience.
Method embodiment II
In some cases, some recommended test questions may also be determined from test question topics that the user has not yet practised. Based on this, the embodiment of the present application further provides an implementation of the test question recommendation method that includes S306 in addition to S301-S305:
S306: selecting a preset number of test questions from the extended test question set as the test questions to be recommended.
The extended test question set includes at least one test question carrying an extended topic, where an extended topic is a test question topic the user has not yet practised.
The preset number may be preset.
In this embodiment, S305 may specifically be: recommending the candidate recommended test questions together with the test questions to be recommended to the user.
Based on the above, in the embodiment of the present application, some test questions can be screened out from the questions under the test question topics the user has not yet practised, and these questions to be recommended then recommended to the user together with all or part of the candidate recommended test questions. In this way the user can practise not only the questions recommended for overcoming the causes of score loss but also questions under topics not yet practised, which improves the diversity of the recommended test questions.
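S306 leaves the selection rule open; as a minimal sketch, random sampling from the extended set is one simple choice (the function name and the fixed seed are assumptions for reproducibility):

```python
import random

def questions_to_recommend(extended_set, preset_number, seed=0):
    """S306: pick a preset number of questions from the extended test
    question set (topics the user has not yet practised)."""
    rng = random.Random(seed)
    return rng.sample(extended_set, min(preset_number, len(extended_set)))
```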
In some cases, the question recommendation may further take an associated user ability level into account (e.g., the user's listening level); for example, for the spoken-question field, the recommendation may further refer to the user's listening level. Based on this, the embodiment of the present application provides an implementation of the test question recommendation method that, in addition to some or all of the above steps, further includes S307:
S307: acquiring the user preset-capability level.
The user preset capability may be set in advance, and different application fields correspond to different preset capabilities. For example, for the spoken-question field, the user preset capability may be the user's listening ability.
The user preset-capability level characterizes the level of the preset capability the user possesses, and the embodiment of the present application does not limit how it is acquired. For example, it may be set by the user autonomously, or it may be determined from the historical question-making information used to evaluate the user preset-capability level (e.g., the historical question-making information of listening questions).
In some cases, the user preset-capability level can be determined from the historical question-making information used to evaluate it, specifically through S3071-S3072:
S3071: determining each piece of user preset-capability evaluation information according to each piece of historical question-making information used to evaluate the user preset-capability level.
The user preset-capability evaluation information differs between application fields. For example, when the user preset capability is the user's listening ability, the evaluation information may be "understood and knowledge points firmly grasped" (class (1) evaluation information), "knowledge points not firmly grasped and confusable" (class (2) evaluation information), or "not understood and knowledge points not grasped" (class (3) evaluation information).
In addition, the user preset-capability evaluation information corresponding to each piece of historical question-making information can be determined from the test question score that the information carries.
S3072: determining the user preset-capability level according to each piece of user preset-capability evaluation information.
The embodiment of the present application does not limit the implementation of S3072. In one possible implementation, S3072 may specifically be: matching each piece of user preset-capability evaluation information against the level judgment conditions in a level-judgment-condition set, and taking the preset-capability level corresponding to the successfully matched condition as the user preset-capability level.
The level-judgment-condition set includes at least one level judgment condition. For example, when the user preset capability is the user's listening ability, the set may include a first condition, a second condition, and a third condition;
the first condition is that the occurrence frequency of class (1) evaluation information (i.e., the ratio of the number of occurrences of class (1) evaluation information to the total number of pieces of user preset-capability evaluation information) is higher than a first threshold, and the preset-capability level corresponding to the first condition is the level of class (1) evaluation information;
the second condition is that the occurrence frequency of class (1) evaluation information is lower than a second threshold and the occurrence frequency of class (3) evaluation information (i.e., the ratio of the number of occurrences of class (3) evaluation information to the total number of pieces of user preset-capability evaluation information) is higher than a third threshold, and the preset-capability level corresponding to the second condition is the level of class (3) evaluation information;
the third condition is that neither the first condition nor the second condition is satisfied, and the preset-capability level corresponding to the third condition is the level of class (2) evaluation information.
Based on the above content of S3071-S3072, after each piece of user preset-capability evaluation information is determined from each piece of historical question-making information used to evaluate the user preset-capability level, the evaluation information can be matched against the level judgment conditions in the set, and the preset-capability level corresponding to the successfully matched condition taken as the user preset-capability level.
As an example, when the user preset capability is the user's listening ability and the level-judgment-condition set includes the first, second, and third conditions, the occurrence frequencies of class (1) and class (3) evaluation information can first be determined from the user preset-capability evaluation information; the level judgment condition successfully matched by those two frequencies is then determined from the set, and the preset-capability level corresponding to it is taken as the user preset-capability level.
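The matching against the three level judgment conditions can be sketched as follows; the default thresholds reuse the example values given later in the text (40%, 20%, 60%), and the integer class labels are an illustrative encoding:

```python
def preset_capability_level(evaluations, t1=0.4, t2=0.2, t3=0.6):
    """Match class-frequency statistics against the three level judgment
    conditions. `evaluations` holds the class number (1, 2 or 3) of each
    piece of user preset-capability evaluation information."""
    n = len(evaluations)
    freq1 = evaluations.count(1) / n  # class (1): firmly grasped
    freq3 = evaluations.count(3) / n  # class (3): not grasped
    if freq1 > t1:
        return 1                      # first condition -> class (1) level
    if freq1 < t2 and freq3 > t3:
        return 3                      # second condition -> class (3) level
    return 2                          # third condition -> class (2) level
```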
Based on the above content related to S307, in this embodiment S303 may specifically be: determining the target recommendation strategy according to the user capability defect and the user preset-capability level.
The user preset-capability level can be used to constrain the difficulty of the recommended test questions, specifically: the higher the user preset-capability level, the higher the difficulty of the recommended questions.
Based on the above content of S307 and S303, in the embodiment of the present application the user capability defect and the user preset-capability level can be obtained first, and the candidate recommended test questions then determined from both, so that the candidate recommended test questions not only meet the need of overcoming the user capability defect but also respect the difficulty constraint imposed by the user preset-capability level. This effectively avoids recommending questions of inappropriate difficulty to the user and thus improves the accuracy of test question recommendation.
It should be noted that the embodiment of the present application does not limit the types of the recommended test questions; for example, for the spoken-question field, the recommended questions may be spoken questions or listening questions.
To facilitate understanding of the test question recommendation method provided by the embodiment of the present application, test question recommendation in the spoken-language field is taken as an example below.
Scene embodiment
Assume that the test question bank includes questions under W test question topics, comprising spoken questions and listening questions, and that the user has practised the questions under the 1st through N-th test question topics.
Based on the above assumption, the test question recommendation method applied to the spoken-language field provided by the embodiment of the present application comprises the following steps 1-4:
Step 1: after a test question recommendation request triggered by the user is received, the target recommended test question set corresponding to the 1st test question topic, …, and the target recommended test question set corresponding to the N-th test question topic are determined respectively.
Since the process of acquiring the target recommended test question set is similar for every test question topic, for brevity the acquisition process for the i-th test question topic is described below as an example, where i is a positive integer, i ≤ N, and N is the number of test question topics the user has practised.
The target recommended test question set corresponding to the i-th test question topic includes at least one target recommended test question corresponding to the i-th test question topic, and its acquisition process specifically includes the following steps 11-19:
Step 11: determining, for each piece of historical listening-question information carrying the i-th test question topic, the corresponding listening evaluation information.
The listening evaluation information may be "understood and knowledge points firmly grasped" (class (1) evaluation information), "knowledge points not firmly grasped and confusable" (class (2) evaluation information), or "not understood and knowledge points not grasped" (class (3) evaluation information).
The way the listening evaluation information is acquired is not limited. For example, it may be determined from the test question score ratio (i.e., the ratio of the test question score to the full mark) carried by the historical listening-question information, or the listening evaluation information carried by the historical listening-question information may be read directly (in which case it may have been determined manually).
Step 12: determining the user listening level corresponding to the i-th test question topic according to the listening evaluation information corresponding to each piece of historical listening-question information.
The user listening level corresponding to the i-th test question topic may be determined as follows: first, the occurrence frequencies of class (1) and class (3) evaluation information are determined from the evaluation information; then the level judgment condition successfully matched by those frequencies is determined from the level-judgment-condition set, and the preset-capability level corresponding to the successfully matched condition is taken as the user listening level.
The level-judgment-condition set may include a first condition, a second condition, and a third condition. The first condition is that the occurrence frequency of class (1) evaluation information is higher than a first threshold (e.g., 40%), and the corresponding level is the level of class (1) evaluation information; the second condition is that the occurrence frequency of class (1) evaluation information is lower than a second threshold (e.g., 20%) and the occurrence frequency of class (3) evaluation information is higher than a third threshold (e.g., 60%), and the corresponding level is the level of class (3) evaluation information; the third condition is that neither the first nor the second condition is satisfied, and the corresponding level is the level of class (2) evaluation information.
In addition, the user listening level corresponding to the i-th test question topic can indicate how firmly the user grasps the knowledge points under that topic, specifically: when the occurrence frequency of class (1) evaluation information is higher than the first threshold, it can be determined that the user grasps the knowledge points under the i-th test question topic fairly firmly; when the occurrence frequency of class (1) evaluation information is lower than the second threshold and the occurrence frequency of class (3) evaluation information is higher than the third threshold, it can be determined that the user has not grasped the knowledge points under the i-th test question topic; in the remaining cases, it can be determined that the user's grasp of the knowledge points under the i-th test question topic is not very firm.
It should be noted that the first threshold, the second threshold, and the third threshold may be determined according to factors such as the size of the test question bank and the question-making ability level of the user group.
Step 13: acquiring, as the historical question-making information, the T pieces of user wrong-question information whose storage time is closest to the current time and which carry the i-th test question topic and the j-th spoken-question type, where j is a positive integer no greater than the total number of test question types.
The total number of test question types may be preset; for example, when the spoken-question types include reading-aloud questions, question-answering questions, and expression questions, the total number of test question types is 3.
To facilitate an understanding of step 13, the following description is provided in connection with an example.
When the spoken-question types include reading-aloud questions, question-answering questions, and expression questions, and T is 50, step 13 may specifically be: determining, as the historical question-making information corresponding to the i-th test question topic, the 50 most recently stored pieces of user wrong-question information carrying the i-th test question topic and a reading-aloud question, the 50 pieces carrying the i-th test question topic and a question-answering question, and the 50 pieces carrying the i-th test question topic and an expression question, so that the target recommended test question set corresponding to the i-th test question topic can be determined from this information.
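Step 13 amounts to a filter-sort-truncate over the stored wrong-question records. The record schema below (a dict with `time`, `topic` and `qtype` keys) is an illustrative assumption:

```python
def latest_wrong_records(records, topic, question_type, t):
    """Step 13: the T wrong-question records most recently stored that carry
    the given test question topic and spoken-question type."""
    matching = [r for r in records
                if r["topic"] == topic and r["qtype"] == question_type]
    # Most recent storage time first, then keep the top T.
    matching.sort(key=lambda r: r["time"], reverse=True)
    return matching[:t]
```

Calling this once per spoken-question type (reading-aloud, question-answering, expression) with T = 50 reproduces the example above.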
Step 14: analyzing each piece of historical question-making information to determine the corresponding cause of the user's score loss.
It should be noted that step 14 may be implemented using any of the embodiments of the user score loss reasons provided above.
Step 15: determining the user score-loss type corresponding to each piece of historical question-making information according to the cause of score loss corresponding to that piece and the user listening level corresponding to the i-th test question topic.
The user misclassification is used for representing the type of the user misclassification reason and the user The type of loss of score may be determined based on the type of loss of score of the user and the user's hearing level. For example, the user score type may be a score type corresponding to the user score type and the user hearing level as found in table 5. Note that in table 5, some of the failure types in table 3 are subdivided into a plurality of subtypes according to the hearing level. For example, type II is subdivided into type II 1 Subclass, II 2 Subclass and II 3 A subclass.
[Table 5 is provided as an image in the original document.]
Table 5: Score-loss type subdivision table for spoken-language test questions
To facilitate an understanding of step 15, the following description is provided in connection with an example.
As an example, when P pieces of historical question-making information correspond to the ith test question topic, step 15 is specifically: looking up Table 5 to obtain the user score-loss type corresponding to the tth piece of historical question-making information according to the user score-loss reason corresponding to the tth piece of historical question-making information and the user hearing level corresponding to the ith test question topic; wherein t is a positive integer, t ≤ P, and P is a positive integer.
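Since Table 5 itself is reproduced only as an image, the step-15 lookup can be sketched with a hypothetical fragment of that table (the entries and the ASCII labels such as `II-2` are illustrative, not the patent's actual table contents):

```python
# Hypothetical fragment of Table 5: (score-loss reason, hearing level) -> type.
# The real table is an image in the patent, so these entries are assumed.
SCORE_LOSS_TYPE_TABLE = {
    ("basic knowledge points lacking", "high"):   "II-1",
    ("basic knowledge points lacking", "medium"): "II-2",
    ("basic knowledge points lacking", "low"):    "II-3",
}

def lookup_score_loss_type(reason, hearing_level):
    """Step 15: one table lookup per piece of historical question-making info."""
    return SCORE_LOSS_TYPE_TABLE[(reason, hearing_level)]

print(lookup_score_loss_type("basic knowledge points lacking", "medium"))  # II-2
```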
Step 16: determining the user capability defect corresponding to each piece of historical question-making information according to the user score-loss type corresponding to that piece of historical question-making information.
In this embodiment of the present application, each score-loss type corresponds to one problem-solving capability defect, so after the user score-loss type corresponding to each piece of historical question-making information is determined, the user capability defect corresponding to each piece may be determined respectively. For example, the embodiment of the present application may perform step 16 using Table 6, specifically: looking up Table 6 to obtain the user capability defect corresponding to the 1st piece of historical question-making information according to the user score-loss type corresponding to the 1st piece; … (and so on); and looking up Table 6 to obtain the user capability defect corresponding to the Pth piece of historical question-making information according to the user score-loss type corresponding to the Pth piece. Here P is a positive integer representing the total number of pieces of historical question-making information corresponding to the ith test question topic.
[Table 6 is provided as an image in the original document.]
Table 6: Problem-solving capability defects corresponding to different score-loss types of spoken-language test questions
In Table 6, the lower-case letters are identifiers of the problem-solving capability defects. For example, a uniquely identifies the problem-solving capability defect "the user's pronunciation has problems".
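The step-16 lookup can likewise be sketched with a hypothetical fragment of Table 6 (only the identifier `a` and its defect description are quoted from the text; the remaining entries are assumptions):

```python
# Hypothetical fragment of Table 6: score-loss type -> (defect id, description).
# Only "a" and its description come from the text; other entries are assumed.
CAPABILITY_DEFECT_TABLE = {
    "I":    ("a", "the user's pronunciation has problems"),
    "II-1": ("b", "basic knowledge points are lacking"),
    "II-2": ("b", "basic knowledge points are lacking"),
}

def defects_for_history(score_loss_types):
    """Step 16: map each record's score-loss type to a capability defect."""
    return [CAPABILITY_DEFECT_TABLE[t] for t in score_loss_types]

print(defects_for_history(["I", "II-2"]))
```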
Step 17: determining the target recommendation policy corresponding to each piece of historical question-making information according to the user capability defect corresponding to that piece of historical question-making information.
In fact, because pronunciation problems have a relatively large influence on the user's spoken-language ability, whether the user's pronunciation has problems may be judged first. Because basic knowledge points influence the user's spoken-language ability more than grammar knowledge does, whether the user lacks basic knowledge points may be judged next, followed by whether the user lacks spoken-language grammar knowledge points. Finally, when the knowledge points are mastered, fluency of expression has only a small influence on the user's spoken-language ability, so whether the user expresses fluently may be judged last. Based on this, the embodiment of the present application constructs the recommendation policy selection logic shown in Fig. 4.
As an example, step 17 may be implemented using the recommendation policy selection logic shown in Fig. 4, specifically: determining the target recommendation policy corresponding to the 1st piece of historical question-making information according to the user capability defect corresponding to the 1st piece and using the policy determination logic shown in Fig. 4; … (and so on); and determining the target recommendation policy corresponding to the Pth piece of historical question-making information according to the user capability defect corresponding to the Pth piece and using the policy determination logic shown in Fig. 4. Here P is a positive integer representing the total number of pieces of historical question-making information corresponding to the ith test question topic.
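A minimal sketch of the Fig. 4 selection logic, checking defects in the judgment order described above (pronunciation first, fluency last); the pairing of the illustrative defect identifiers `a`–`f` with policies 1–6 is an assumption:

```python
def select_policy(defect_ids):
    """Return the first policy whose defect appears in the user's defect set,
    following the priority chain described for Fig. 4. The id-to-policy
    pairing is illustrative, not taken from the patent's figure."""
    priority_chain = [
        ("a", "policy 1"),  # pronunciation problems
        ("b", "policy 2"),  # basic knowledge points lacking
        ("c", "policy 3"),  # similar test question topics confused
        ("d", "policy 4"),  # grammar knowledge points lacking
        ("e", "policy 5"),  # grammar examination-point / tense errors
        ("f", "policy 6"),  # expression not fluent
    ]
    for defect, policy in priority_chain:
        if defect in defect_ids:
            return policy
    return None

print(select_policy({"d", "f"}))  # policy 4
```

A user with both a grammar gap (`d`) and a fluency issue (`f`) is routed to policy 4, matching the stated priority ordering.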
In addition, to facilitate understanding of policies 1 to 6, each policy is described below with a corresponding example.
As an example of policy 1: recommend read-aloud word questions and sentence questions containing the phonemes the user pronounces inaccurately, so as to correct and strengthen the student's pronunciation.
It should be noted that, for a user with poor pronunciation, whichever phonemes are pronounced poorly need to be trained repeatedly. Word-based training starts from the phonemes themselves, while sentence-based training exercises the phonemes in connected speech; combining the two therefore trains the user's pronunciation comprehensively.
As an example of policy 2: for n% of the recommendations, recommend listening test questions with difficulty m under the related test question topic one grade lower; for the remaining (1−n)%, recommend listening test questions and read-aloud questions with difficulty m under the same test question topic at the same grade. The value of n is determined according to the number of test questions in the test question library. For m: when the score-loss type is II₁, m is not limited and questions are drawn randomly; when the score-loss type is II₂, m is medium or simple difficulty; when the score-loss type is II₃, m is simple difficulty.
It should be noted that, for users lacking basic knowledge, proficiency in the knowledge points needs to be trained: first consolidating the test question topics of the lower grade reduces the difficulty of absorption and improves the student's sense of achievement and confidence, while at the same time slightly raising the difficulty with some test questions related to the current grade spurs the student to keep working and progressing.
As an example of policy 3: for n% of the recommendations, recommend listening test questions and read-aloud questions with difficulty m under the same test question topic at the same grade; for the remaining (1−n)%, recommend listening test questions and read-aloud questions with difficulty m under similar test question topics at the same grade. The value of n is determined according to the number of test questions in the test question library. For m: when the score-loss type is VI₁, m is high difficulty; when the score-loss type is VI₂, m is medium difficulty; when the score-loss type is VI₃, m is simple difficulty.
It should be noted that, for a user who easily confuses the knowledge points under similar test question topics, because those knowledge points cannot be distinguished to varying degrees, knowledge expansion and consolidation need to be performed with test questions of different difficulties under the similar test question topics, according to the student's familiarity with the knowledge points, and the difficulty of the recommended test questions is judged in combination with the user's hearing level.
As an example of policy 4: recommend read-aloud questions and expression questions with difficulty m under the same test question topic at the same grade, where m takes the following values: when the score-loss type is V₁, m is high difficulty; when the score-loss type is V₂, m is medium difficulty; when the score-loss type is V₃, m is simple difficulty.
It should be noted that, for a user lacking grammar knowledge points, the user is unskilled, to varying degrees, in the grammar knowledge points under the related test question topic, so the grammar is deficient; read-aloud questions and expression questions of corresponding difficulties are recommended to such a user so that the user actively speaks up and exercises his or her spoken expression.
As an example of policy 5: recommend question-and-answer test questions with difficulty m whose grammar points match the grammar problem reflected by the user capability defect. When the score-loss type is III₁, III₂ or III₃, recommend test questions with the corresponding examination point; when the score-loss type is IV₁, IV₂ or IV₃, recommend test questions whose examination point is tense; if the score-loss types include both a type among III₁–III₃ and a type among IV₁–IV₃, recommend both. For m: when the score-loss type is III₁ or IV₁, m is high difficulty; when the score-loss type is III₂ or IV₂, m is medium difficulty; when the score-loss type is III₃ or IV₃, m is simple difficulty.
It should be noted that, since the causes of error for these two major groups are the most clear-cut, test questions targeting the grammar problem reflected by the user capability defect are recommended directly.
As an example of policy 6: recommend expression questions of medium and high difficulty under the same test question topic, or under similar test question topics at the same grade.
It should be noted that, when the user grasps the topic's knowledge points well and only the expression is insufficiently fluent, the user's proficiency is lacking and more expression questions need to be practiced; and for students whose spoken language is already good, medium and high difficulty is appropriate.
Based on the above and as shown in fig. 4, the priority ranking of the 6 recommendation strategies is as follows: policy 1→policy 2→policy 3→policy 4→policy 5→policy 6.
Step 18: and screening candidate recommended test question sets and recommendation priorities thereof corresponding to the historical question information from the candidate test question sets according to target recommendation strategies corresponding to the historical question information.
The candidate test question set includes listening questions the user answered incorrectly, spoken questions the user answered incorrectly whose score ratio is less than or equal to a preset score threshold (e.g., 80%), and test questions at the user's grade that the user has not attempted.
It should be noted that the candidate test question set is constructed on the following principle: eliminate from the test question library the test questions the user has already mastered. Listening test questions are objective questions, so listening questions the user answered incorrectly are necessarily questions that need consolidation. For spoken test questions, when the score ratio of an incorrectly answered spoken question is at or below the preset score threshold, the user's actual level is judged to be less than ideal, so the user needs to focus on and consolidate such questions. In addition, questions the user has not attempted can also be recommended to the user.
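The construction rule above can be sketched as follows (the 80% example threshold follows the text; the record fields are assumed, not specified by the patent):

```python
def build_candidate_set(questions, score_threshold=0.8):
    """Keep: listening questions answered incorrectly, spoken questions whose
    score ratio is at or below the threshold, and unattempted questions."""
    candidates = []
    for q in questions:
        if not q["attempted"]:
            candidates.append(q)
        elif q["kind"] == "listening" and q["wrong"]:
            candidates.append(q)
        elif q["kind"] == "spoken" and q["score_ratio"] <= score_threshold:
            candidates.append(q)
    return candidates

questions = [
    {"id": 1, "kind": "listening", "attempted": True,  "wrong": True,  "score_ratio": 0.0},
    {"id": 2, "kind": "spoken",    "attempted": True,  "wrong": False, "score_ratio": 0.9},
    {"id": 3, "kind": "spoken",    "attempted": True,  "wrong": True,  "score_ratio": 0.5},
    {"id": 4, "kind": "listening", "attempted": False, "wrong": False, "score_ratio": 0.0},
]
print([q["id"] for q in build_candidate_set(questions)])  # [1, 3, 4]
```

Question 2 is excluded because its 90% score ratio exceeds the threshold, i.e., it is treated as mastered.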
The recommendation priority of the candidate recommended test question set is used for describing the recommendation sequence of the candidate recommended test question set under the ith test question subject.
In addition, the recommendation priority of the candidate recommendation test question set can be determined according to the priority ranking number of the target recommendation strategy for generating the candidate recommendation test question set. For example, if the target recommendation policy corresponding to the historical question information is policy 3, determining the recommendation priority of the candidate recommendation test question set corresponding to the historical question information as the third level.
To facilitate an understanding of step 18, it is described below in connection with an example.
As an example, step 18 may specifically be: according to the target recommendation strategy corresponding to the 1 st historical question making information, selecting a candidate recommendation question set corresponding to the 1 st historical question making information and recommendation priority thereof from the candidate question sets; … …; and screening candidate recommended test question sets and recommendation priorities thereof corresponding to the P historical question information from the candidate test question sets according to a target recommendation strategy corresponding to the P historical question information. Wherein P is a positive integer, and P represents the total number of historical question information corresponding to the ith test question theme.
It should be noted that, in the embodiment of the present application, the recommendation priority of the candidate recommendation test question set may be used as the recommendation priority corresponding to any candidate recommendation test question in the candidate recommendation test question set.
Step 19: and obtaining a target recommended test question set corresponding to the ith test question subject according to the candidate recommended test question set corresponding to each historical test question information.
In one possible implementation, step 19 specifically includes steps 191-192:
step 191: and generating a test question recommendation set corresponding to the ith test question subject according to the candidate recommendation test question set corresponding to each history question making information.
In the embodiment of the application, the set of all candidate recommended test questions in the candidate recommended test question sets corresponding to each historical test question information can be directly used as the test question recommended set corresponding to the ith test question subject. For example, the set of all candidate recommended questions in the candidate recommended questions set corresponding to the 1 st historical question information, … …, and all candidate recommended questions in the candidate recommended questions set corresponding to the P historical question information is determined as the question recommended set corresponding to the i-th question topic.
Step 192: and extracting the test questions of the test question recommendation set corresponding to the ith test question theme according to the recommendation priority and the test question recommendation proportion corresponding to the ith test question theme, so as to obtain a target recommendation test question set.
It should be noted that step 192 may be performed using the embodiment of S3042 above, please refer to the above.
In addition, in some cases, the calculation formula of the test question recommendation proportion corresponding to the ith test question topic is specifically: test question recommendation proportion for the ith test question topic = spoken-test-question score rate × first coefficient (e.g., 60%) + listening-test-question score rate × second coefficient (e.g., 40%).
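The weighted-sum formula above, with the example coefficients of 60% and 40%, can be written directly as:

```python
def question_recommend_ratio(spoken_score_rate, listening_score_rate,
                             first_coef=0.6, second_coef=0.4):
    """Test question recommendation proportion for a topic, using the
    example coefficients quoted in the text."""
    return spoken_score_rate * first_coef + listening_score_rate * second_coef

print(round(question_recommend_ratio(0.5, 0.75), 6))  # 0.6
```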
In addition, for step 192, when the test question recommendation set corresponding to the ith test question subject is subjected to test question extraction according to the test question recommendation proportion, the test question extraction may be performed according to the recommendation priority corresponding to each candidate recommendation test question in the test question recommendation set, so as to preferentially extract the candidate recommendation test questions with higher priority.
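Priority-first extraction as described for step 192 can be sketched as follows (a smaller priority number meaning a higher recommendation priority, per the policy ranking; the record structure is assumed):

```python
def extract_by_priority(recommend_set, quota):
    """Draw `quota` questions from a topic's recommendation set, taking
    higher-priority candidates (smaller priority number) first."""
    ordered = sorted(recommend_set, key=lambda q: q["priority"])
    return ordered[:quota]

pool = [
    {"id": "q1", "priority": 3},
    {"id": "q2", "priority": 1},
    {"id": "q3", "priority": 2},
]
print([q["id"] for q in extract_by_priority(pool, 2)])  # ['q2', 'q3']
```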
Based on the related content of the foregoing steps 11 to 19, in the embodiment of the present application, the target recommended test question sets corresponding to the 1st test question topic through the Nth test question topic may all be determined according to the foregoing steps 11 to 19.
Step 2: and selecting a preset number of test questions from the expanded test question set according to the grade of the user to obtain a test question set to be recommended.
As an example, step 2 may specifically be: according to the user's grade, selecting a preset number of test questions (listening test questions and read-aloud questions) from the test questions under a target number (e.g., 3) of test question topics the user has not practiced, to generate the test question set to be recommended.
It should be noted that, the recommendation of the test questions under the test question subject not practiced by the user is performed in advance, which is helpful for the pre-learning and the continuous effort of the user.
Step 3: and determining final recommended test questions according to the target recommended test question sets and the test question sets to be recommended corresponding to the test question topics.
In some cases, all target recommended test questions in the target recommended test question set corresponding to the test question theme and all to-be-recommended test questions in the to-be-recommended test question set can be directly used as final recommended test questions.
In some cases, the recommendation of the test questions can also be performed according to a certain proportion, which is specifically as follows: the target recommended test questions determined by the strategy 1 are recommended according to a first proportion (for example, 60%), the target recommended test questions determined by the strategies 2-6 are recommended according to a second proportion (for example, 30%), and the test questions to be recommended selected from the expanded test question set are recommended according to a third proportion (for example, 10%).
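The example proportions above (60% / 30% / 10%) can be sketched as a simple split of the final recommendation count (the rounding policy, with the remainder going to the expanded-set share, is an assumption):

```python
def final_mix(total, first=0.6, second=0.3):
    """Split the final recommendation count by the example proportions:
    policy-1 questions, policy-2..6 questions, and the remainder (about
    10%) for questions drawn from the expanded test question set."""
    n1 = round(total * first)
    n2 = round(total * second)
    n3 = total - n1 - n2  # remainder: expanded-set share
    return n1, n2, n3

print(final_mix(20))  # (12, 6, 2)
```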
Step 4: and recommending the final recommended test questions to the user.
Based on the related content of the steps 1 to 4, the embodiment of the application can recommend the test questions based on the user score loss reason, so that the test questions recommended to the user can help the user overcome the user score loss reason, and the accuracy of the recommended test questions is improved.
Based on the test question recommending method provided by the method embodiment, the embodiment of the application also provides a test question recommending device, and the test question recommending device is explained and illustrated below with reference to the accompanying drawings.
Device embodiment
The device embodiment introduces the test question recommending device, and the related content is referred to the method embodiment.
Referring to fig. 5, the structure of the test question recommending apparatus provided in the embodiment of the present application is schematically shown.
The test question recommending apparatus 500 provided in the embodiment of the present application includes:
an information obtaining unit 501, configured to obtain historical question information of a user after receiving a question recommendation request triggered by the user;
a capability determining unit 502, configured to analyze the historical question information and determine a user capability defect;
a policy determining unit 503, configured to determine a target recommendation policy according to the user capability defect;
the test question determining unit 504 is configured to screen candidate recommended test questions from the candidate test question set according to the target recommendation policy;
and the test question recommending unit 505 is configured to recommend the candidate recommended test questions to the user.
As an embodiment, to improve the accuracy of the test question recommendation, the capability determining unit 502 includes:
a reason analysis subunit, configured to analyze the historical question-making information and determine the user score-loss reason;
and the capability analysis subunit is used for analyzing the user capability defect according to the user score loss reason.
As an embodiment, in order to improve the accuracy of the test question recommendation, the cause analysis subunit is specifically configured to:
according to the test question type carried by the historical question making information, determining the examination content corresponding to the test question type; and determining the user score loss reason according to the user answer result carried by the historical question making information and the examination content corresponding to the test question type.
As an embodiment, in order to improve the accuracy of the test question recommendation, the capability analysis subunit is specifically configured to:
determining the user score loss type according to the user score loss reason and the first mapping relation; the first mapping relation is used for recording the corresponding relation between each misclassification reason and each misclassification type;
and determining the problem solving capability defect corresponding to the user failure classification type as a user capability defect.
As an embodiment, in order to improve the accuracy of the test question recommendation, the policy determining unit 503 is specifically configured to:
determine a target recommendation policy according to the user capability defect and a second mapping relation; the second mapping relation is used for recording the correspondence between problem-solving capability defects and test question recommendation policies.
As an embodiment, in order to improve the accuracy of the test question recommendation, the policy determining unit 503 is specifically configured to:
judging whether the user capacity defect meets a preset condition or not; when the user capability defect is determined to meet a preset condition, determining a target recommendation strategy according to the user capability defect; and when the user capability defect is determined to not meet the preset condition, determining a target recommendation strategy according to the user capability defect, the test question subject carried by the historical question making information and the grade of the user.
As an embodiment, in order to improve the accuracy of the test question recommendation, the information obtaining unit 501 is specifically configured to:
determining, as historical question-making information, each piece of user wrong-question information that is stored within a preset time period and carries the ith test question topic; or respectively determining, as historical question-making information, the T pieces of user wrong-question information carrying the ith test question topic whose storage time is closest to the current time; T is a positive integer, i ≤ N, and N is the number of test question topics;
The capability determining unit 502 is specifically configured to:
and analyzing the historical question information respectively to determine the user capacity defect corresponding to the historical question information.
As an embodiment, in order to improve the accuracy of the test question recommendation, the policy determining unit 503 is specifically configured to: determining a target recommendation strategy corresponding to each history question information according to the user capability defect corresponding to each history question information;
the test question determining unit 504 is specifically configured to: determining candidate recommended test questions corresponding to each history question information according to target recommendation strategies corresponding to each history question information;
the test question recommending unit 505 is specifically configured to:
generate a test question recommendation set corresponding to the ith test question topic according to the candidate recommended test questions corresponding to each piece of historical question-making information carrying the ith test question topic, extract test questions from that recommendation set according to the recommendation priority and the test question recommendation proportion corresponding to the ith test question topic to obtain the target recommended test questions, and recommend the target recommended test questions to the user; wherein i is a positive integer, i ≤ N, and N is the number of test question topics.
As an embodiment, to improve accuracy of the test question recommendation, the apparatus 500 further includes:
a test question selection unit, configured to select a preset number of test questions from an expanded test question set as test questions to be recommended; the expanded test question set includes at least one test question carrying an expansion topic; the expansion topic characterizes a test question topic the user has not practiced;
the test question recommending unit 505 is specifically configured to:
and recommending the candidate recommended test questions and the test questions to be recommended to the user.
As an embodiment, to improve accuracy of the test question recommendation, the apparatus 500 further includes:
the level acquisition unit is used for acquiring a user preset capacity level;
the policy determining unit 503 is specifically configured to:
and determining a target recommendation strategy according to the user capacity defect and the user preset capacity level.
Further, the embodiment of the application also provides test question recommending equipment, which comprises: a processor, memory, system bus;
the processor and the memory are connected through the system bus;
the memory is configured to store one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform any one of the implementation methods of the question recommending method described above.
Further, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores instructions, and when the instructions run on the terminal device, the terminal device is caused to execute any implementation method of the test question recommending method.
Further, the embodiment of the application also provides a computer program product, which when run on a terminal device, causes the terminal device to execute any implementation method of the test question recommending method.
From the above description of embodiments, it will be apparent to those skilled in the art that all or part of the steps of the above described example methods may be implemented in software plus necessary general purpose hardware platforms. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present description, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different manner from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. The test question recommending method is characterized by comprising the following steps of:
after receiving a test question recommendation request triggered by a user, acquiring historical question information of the user;
analyzing the historical question information to determine the user capacity defect;
determining a target recommendation strategy according to the user capability defect;
screening candidate recommended test questions from the candidate test question set according to the target recommendation strategy;
recommending the candidate recommended test questions to the user;
the analyzing the historical doing topic information to determine the user capacity defect comprises the following steps:
analyzing the historical question information and determining the user score loss reason;
Determining the user score loss type according to the user score loss reason and the first mapping relation; the first mapping relation is used for recording the corresponding relation between each misclassification reason and each misclassification type;
determining the problem solving capability defect corresponding to the user failure classification type as a user capability defect;
or determining a target recommendation strategy according to the user capability defect, including:
determining a target recommendation strategy according to the user capability defect and the second mapping relation; the second mapping relation is used for recording the corresponding relation between the problem solving capability defect and the test problem recommending strategy.
2. The method of claim 1, wherein the determining a target recommendation policy based on the user capability deficiency comprises:
judging whether the user capacity defect meets a preset condition or not;
when the user capability defect is determined to meet a preset condition, determining a target recommendation strategy according to the user capability defect;
and when the user capability defect is determined to not meet the preset condition, determining a target recommendation strategy according to the user capability defect, the test question subject carried by the historical question making information and the grade of the user.
3. The method of claim 1, wherein the obtaining historical question information of the user comprises:
determining each piece of user wrong-question information that is stored within a preset time period and carries the i-th test question topic as historical question information; or respectively determining the T pieces of user wrong-question information carrying the i-th test question topic whose storage time is closest to the current time as historical question information; wherein T is a positive integer, i is less than or equal to N, and N is the number of test question topics;
and the analyzing the historical question information to determine a user capability defect comprises:
analyzing each piece of historical question information respectively to determine the user capability defect corresponding to each piece of historical question information.
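The two alternative selections in claim 3 (wrong-question records within a preset time window, or the T records stored closest to the current time, in both cases filtered by topic) can be sketched like this. The record field names are assumptions.

```python
from datetime import datetime, timedelta

def select_history(records, topic, now, window_days=None, t_latest=None):
    """Select historical question information for one test question topic,
    either by a preset time window or by the T most recently stored records."""
    topical = [r for r in records if r["topic"] == topic]
    if window_days is not None:
        cutoff = now - timedelta(days=window_days)
        return [r for r in topical if r["stored_at"] >= cutoff]
    topical.sort(key=lambda r: now - r["stored_at"])   # closest to now first
    return topical[:t_latest]
```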
4. The method according to claim 3, wherein the determining a target recommendation strategy according to the user capability defect, and the screening candidate recommended test questions from the candidate test question set according to the target recommendation strategy, comprise:
determining the target recommendation strategy corresponding to each piece of historical question information according to the user capability defect corresponding to that piece of historical question information; and determining the candidate recommended test questions corresponding to each piece of historical question information according to the corresponding target recommendation strategy;
and the recommending the candidate recommended test questions to the user comprises:
generating a test question recommendation set corresponding to the i-th test question topic according to the candidate recommended test questions corresponding to each piece of historical question information carrying the i-th test question topic, extracting test questions from the test question recommendation set corresponding to the i-th test question topic according to the recommendation priority and the test question recommendation proportion corresponding to the i-th test question topic to obtain the target recommended test questions, and recommending the target recommended test questions to the user; wherein i is a positive integer, i is less than or equal to N, and N is the number of test question topics.
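The per-topic extraction step in claim 4 (take questions in recommendation-priority order, with the per-topic count set by that topic's recommendation proportion of the total) can be sketched as follows. The data shapes and the use of simple rounding are assumptions.

```python
def extract_recommendations(candidates_by_topic, proportions, total):
    """For each test question topic, take its share of the total recommendation
    budget (recommendation proportion) in descending recommendation priority."""
    result = []
    for topic, candidates in candidates_by_topic.items():
        k = round(total * proportions[topic])                      # topic's share
        ranked = sorted(candidates, key=lambda q: q["priority"], reverse=True)
        result.extend(q["id"] for q in ranked[:k])
    return result
```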
5. The method according to claim 1, further comprising:
selecting a preset number of test questions from an extended test question set as test questions to be recommended; the extended test question set comprises at least one test question carrying an extended topic, the extended topic representing a test question topic that the user has not yet practiced;
and the recommending the candidate recommended test questions to the user comprises:
recommending the candidate recommended test questions and the test questions to be recommended to the user.
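The extension in claim 5 amounts to appending a preset number of questions whose topics the user has not practiced. A minimal sketch, with field names and random sampling as assumptions:

```python
import random

def recommend(candidates, extended_pool, practiced_topics, n_extra, seed=0):
    """Append n_extra questions carrying an extended topic (a topic the user
    has not yet practiced) to the candidate recommended test questions."""
    rng = random.Random(seed)
    fresh = [q for q in extended_pool if q["topic"] not in practiced_topics]
    return candidates + rng.sample(fresh, min(n_extra, len(fresh)))
```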
6. The method according to claim 1, further comprising:
acquiring a preset capability level of the user;
wherein the determining a target recommendation strategy according to the user capability defect comprises:
determining the target recommendation strategy according to the user capability defect and the preset capability level of the user.
7. A test question recommending apparatus, comprising:
an information acquisition unit, configured to acquire historical question information of a user after receiving a test question recommendation request triggered by the user;
a capability determining unit, configured to analyze the historical question information and determine a user capability defect;
a strategy determining unit, configured to determine a target recommendation strategy according to the user capability defect;
a test question determining unit, configured to screen candidate recommended test questions from a candidate test question set according to the target recommendation strategy;
and a test question recommending unit, configured to recommend the candidate recommended test questions to the user;
wherein the capability determining unit comprises:
a reason analysis subunit, configured to analyze the historical question information and determine a user score-loss reason;
a capability analysis subunit, configured to determine a user score-loss type according to the user score-loss reason and a first mapping relation, the first mapping relation being used for recording the correspondence between each score-loss reason and each score-loss type;
and to determine the problem-solving capability defect corresponding to the user score-loss type as the user capability defect;
or, the strategy determining unit is configured to determine the target recommendation strategy according to the user capability defect and a second mapping relation, the second mapping relation being used for recording the correspondence between problem-solving capability defects and test question recommendation strategies.
CN202010579654.XA 2020-06-23 2020-06-23 Test question recommending method and device Active CN111708951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010579654.XA CN111708951B (en) 2020-06-23 2020-06-23 Test question recommending method and device


Publications (2)

Publication Number Publication Date
CN111708951A CN111708951A (en) 2020-09-25
CN111708951B true CN111708951B (en) 2023-06-09

Family

ID=72543014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010579654.XA Active CN111708951B (en) 2020-06-23 2020-06-23 Test question recommending method and device

Country Status (1)

Country Link
CN (1) CN111708951B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625302B (en) * 2020-12-10 2024-05-31 炬芯科技股份有限公司 Electronic question making method, device, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407237A (en) * 2015-08-03 2017-02-15 科大讯飞股份有限公司 An online study test question recommendation method and system
KR20170034106A (en) * 2015-09-18 2017-03-28 아주대학교산학협력단 Apparatus of Recommending Problems of Adequate Level for User and Method thereof
CN109509126A (en) * 2018-11-02 2019-03-22 中山大学 A kind of personalized examination question recommended method based on user's learning behavior
CN110704732A (en) * 2019-09-19 2020-01-17 广州大学 Cognitive diagnosis-based time-sequence problem recommendation method


Also Published As

Publication number Publication date
CN111708951A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
CN110782921B (en) Voice evaluation method and device, storage medium and electronic device
US20160293036A1 (en) System and method for adaptive assessment and training
CN108281052A (en) A kind of on-line teaching system and online teaching method
January et al. Universal screening in grades K-2: A systematic review and meta-analysis of early reading curriculum-based measures
US20090239201A1 (en) Phonetic pronunciation training device, phonetic pronunciation training method and phonetic pronunciation training program
CN111930925B (en) Test question recommendation method and system based on online teaching platform
CN106558252B (en) Spoken language practice method and device realized by computer
CN111242816A (en) Multimedia teaching plan making method and system based on artificial intelligence
CN112348725A (en) Knowledge point difficulty grading method based on big data
CN111507680A (en) Online interviewing method, system, equipment and storage medium
CN111597305B (en) Entity marking method, entity marking device, computer equipment and storage medium
Baur et al. A shared task for spoken CALL?
Shan et al. [Retracted] Research on Classroom Online Teaching Model of “Learning” Wisdom Music on Wireless Network under the Background of Artificial Intelligence
CN111708951B (en) Test question recommending method and device
CN112507792B (en) Online video key frame positioning method, positioning system, equipment and storage medium
CN116258613B (en) Course planning method, course planning device, and readable storage medium
Webb Advanced lexical development
Koizumi et al. Comparing the story retelling speaking test with other speaking tests
CN110968669B (en) Intelligent video analysis police test question classification and recommendation method
CN113919983A (en) Test question portrait method, device, electronic equipment and storage medium
CN109582971B (en) Correction method and correction system based on syntactic analysis
CN115797122B (en) Operation analysis method and device
CN117275319B (en) Device for training language emphasis ability
CN111368177B (en) Answer recommendation method and device for question-answer community
Wu et al. A Review of Empirical Studies on Peerceptiv

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant