CN113628080B - Score prediction method and device, storage medium and electronic equipment

Info

Publication number
CN113628080B
Authority
CN
China
Prior art keywords: score, sample, scores, answered, prediction
Legal status
Active
Application number
CN202111133101.2A
Other languages
Chinese (zh)
Other versions
CN113628080A
Inventor
卢鑫鑫
刘萌
叶礼伟
夏志群
蔡晓凤
孙康明
吴嫒博
孙朝旭
滕达
覃伟枫
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202111133101.2A
Publication of CN113628080A
Application granted
Publication of CN113628080B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Abstract

The application provides a score prediction method and apparatus, a storage medium, and an electronic device, and relates to the technical field of computers. The method can be applied to scenarios such as cloud technology, the Internet of Vehicles, intelligent transportation, and artificial intelligence. After the answered test question scores of at least two target objects are obtained, the object feature relationship between every two target objects, the score feature relationship between every two answered test question scores, and the attribute feature relationship between each target object and each answered test question score can be respectively determined, and a convolution operation is performed on the determined object feature relationships, score feature relationships, and attribute feature relationships to obtain the prediction scores of the unanswered test questions of the at least two target objects. Because the scores of the unanswered test questions can be predicted according to the determined feature relationships to obtain the corresponding prediction scores, the accuracy and efficiency of score prediction can be improved.

Description

Score prediction method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technology, and in particular to a score prediction method and apparatus, a storage medium, and an electronic device.
Background
With the continuous development of education systems, in the process of educating a target object, the target object's performance on test questions it has not yet attempted is predicted from its historical test question answering scores, and test question recommendation, learning path planning, and the like are then performed for the target object based on the predicted performance.
In the related art, the historical answering scores of a target object are usually obtained through mock examinations, knowledge point tests, and the like, and the target object's scores on unattempted test questions are then predicted by relying on education practitioners' analysis and understanding of those historical scores.
However, because this approach derives the predicted scores of the unattempted test questions from the historical answering scores of a large number of target objects, the efficiency of score prediction is low and the accuracy of the resulting prediction is limited.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present application provide a score prediction method, apparatus, storage medium, and electronic device, which can improve accuracy and efficiency of predicting a score of a target object.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a score prediction method, including:
acquiring an initial score set; the initial score set comprises at least: answered test questions, unanswered test questions, and answered test question scores of the answered test questions of at least two target objects;
respectively determining an object feature relationship between every two target objects, a score feature relationship between every two answered test question scores, and an attribute feature relationship between each target object and each answered test question score;
and carrying out convolution operation on the obtained object feature relations, the obtained score feature relations and the obtained attribute feature relations to obtain the prediction scores of the unanswered test questions of the at least two target objects.
In a second aspect, an embodiment of the present application further provides a score prediction apparatus, including:
an initial score acquisition unit for acquiring an initial score set; the initial score set comprises: answered test questions, unanswered test questions, and answered test question scores of the answered test questions of at least two target objects;
a feature relationship determining unit for respectively determining an object feature relationship between every two target objects, a score feature relationship between every two answered test question scores, and an attribute feature relationship between each target object and each answered test question score;
and the prediction score determining unit is used for performing convolution operation on the obtained object feature relations, the obtained score feature relations and the obtained attribute feature relations to obtain the prediction scores of the unanswered test questions of the at least two target objects.
In an optional embodiment, the feature relationship determining unit is specifically configured to:
inputting the initial score set into a first convolution network in a trained score prediction model, and performing a first convolution operation on the initial score set based on the first convolution network to obtain a corresponding feature score set;
the feature score set comprises a plurality of feature scores obtained by performing the first convolution operation on each answered test question score; in the feature score set, the relationship between the feature scores corresponding to each two adjacent column elements represents the object feature relationship between two corresponding target objects;
In the feature score set, the relationship between every two adjacent feature scores represents the score feature relationship between corresponding two answered test question scores;
in the feature score set, the relationship between the feature scores represents the attribute feature relationship between the target objects and the answered question scores.
In an optional embodiment, the prediction score determining unit is specifically configured to:
inputting the feature score set into a second convolution network in the score prediction model, and performing second convolution operation on the feature score set based on the second convolution network to obtain a corresponding target score set;
wherein the set of target scores comprises: predicted scores of the unanswered questions of the at least two target objects.
In an alternative embodiment, the apparatus further comprises a model training unit for:
acquiring a sample initial score set; the sample initial score set comprises: answered sample test questions, unanswered sample test questions, answered sample scores of the answered sample test questions of at least two sample objects, and unanswered sample scores of the unanswered sample test questions of the at least two sample objects; each unanswered sample score is determined based on the answered sample scores associated with the corresponding sample object;
training a score prediction model based on the sample initial score set until the score prediction model converges, where one training process comprises:
inputting the initial sample score set into a score prediction model to be trained, and obtaining a corresponding sample prediction score set based on the score prediction model;
determining a corresponding target loss value according to the sample prediction score set and the sample initial score set;
and adjusting parameters of the score prediction model to be trained according to the target loss value.
In an optional embodiment, the model training unit is further configured to:
performing a first convolution operation on the sample initial score set based on a first convolution network in the score prediction model to obtain a corresponding sample feature score set;
and performing a second convolution operation on the sample feature score set based on a second convolution network in the score prediction model to obtain a corresponding sample prediction score set.
In an alternative embodiment, the set of sample prediction scores comprises: the answered sample prediction scores of the answered sample test questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample test questions of the at least two sample objects; the model training unit is further configured to:
determining a first loss value according to the norm of the difference between each answered sample score and each corresponding answered sample prediction score;
determining a corresponding answered sample average score according to the answered sample scores, and determining a corresponding unanswered predicted average score according to the unanswered sample prediction scores; and determining a second loss value based on the difference between the answered sample average score and the unanswered predicted average score;
and determining a corresponding target loss value based on the first loss value and the second loss value.
In an alternative embodiment, the set of sample prediction scores comprises: the answered sample prediction scores of the answered sample test questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample test questions of the at least two sample objects; the model training unit is further configured to:
determining a first loss value according to the norm of the difference between each answered sample score and each corresponding answered sample prediction score;
determining a corresponding answered sample average score according to the answered sample scores, and determining a corresponding unanswered predicted average score according to the unanswered sample prediction scores; and determining a second loss value based on the difference between the answered sample average score and the unanswered predicted average score;
respectively determining the sample score degree corresponding to each answered sample score according to the ratio of that answered sample score to the test question difficulty level of the corresponding test question; respectively determining the predicted score degree corresponding to each unanswered sample prediction score according to the ratio of that unanswered sample prediction score to the test question difficulty level of the corresponding test question; determining a corresponding sample average score degree according to the sample score degrees, and determining a corresponding predicted average score degree according to the predicted score degrees; and determining a third loss value according to the difference between the sample average score degree and the predicted average score degree;
and determining a corresponding target loss value based on the first loss value, the second loss value, and the third loss value.
In an optional embodiment, the apparatus further comprises a test question recommending unit, configured to:
and recommending corresponding test questions for each target object according to the prediction scores of the unanswered test questions of the at least two target objects.
In a third aspect, this application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the score prediction method of the first aspect is implemented.
In a fourth aspect, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and when the computer program is executed by the processor, the processor is caused to implement the score prediction method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which includes computer instructions stored in a computer-readable storage medium; when the processor of the computer device reads the computer instructions from the computer readable storage medium, the processor executes the computer instructions, causing the computer device to perform the steps of any of the score prediction methods described above.
According to the score prediction method and apparatus, the storage medium, and the electronic device provided by the embodiments of the present application, after an initial score set containing the answered test question scores of at least two target objects is obtained, the object feature relationship between every two target objects, the score feature relationship between every two answered test question scores, and the attribute feature relationship between each target object and each answered test question score can be respectively determined, and a convolution operation is performed on the obtained object feature relationships, score feature relationships, and attribute feature relationships to obtain the prediction scores of the unanswered test questions of the at least two target objects. Because the feature relationships among the target objects, among the answered test question scores, and between the target objects and the answered test question scores can be learned from the initial score set, the scores of each target object on the unanswered test questions can be predicted from these feature relationships to obtain the prediction scores of the unanswered test questions of the at least two target objects, which improves the accuracy and efficiency of score prediction and allows the score prediction result to reflect the real answering level of the target objects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is an application scenario diagram of a score prediction method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a score prediction method according to an embodiment of the present disclosure;
fig. 3a is a schematic flowchart of a training method of a score prediction model according to an embodiment of the present disclosure;
fig. 3b is a schematic flow chart illustrating a process of determining a set of sample prediction scores according to an embodiment of the present application;
fig. 3c is a schematic diagram of determining a sample feature score set according to an embodiment of the present application;
fig. 3d is a schematic diagram of determining a set of sample prediction scores according to an embodiment of the present application;
fig. 3e is a schematic flowchart of determining a target loss value according to an embodiment of the present application;
FIG. 3f is a schematic diagram of another process for determining a target loss value according to an embodiment of the present application;
FIG. 4a is a schematic flow chart of another score prediction method provided in the embodiments of the present application;
fig. 4b is a schematic diagram of obtaining an initial score set according to an embodiment of the present application;
FIG. 4c is a schematic diagram of determining a target score set according to an embodiment of the present application;
FIG. 4d is a schematic illustration of determining a prediction score according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a test question recommendation provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a score prediction apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another score prediction apparatus provided in the embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It is noted that the terms "first," "second," and the like used herein are used to distinguish between similar elements and are not necessarily used to describe a particular order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those described or illustrated herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The present application will be described in further detail with reference to the following drawings and specific embodiments.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
Embodiments of the present application relate to Artificial Intelligence (AI) and Machine Learning technologies, and are designed based on Speech processing Technology (Speech Technology) and Machine Learning (ML) in the AI.
Artificial intelligence is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making. The artificial intelligence technology mainly comprises a computer vision technology, a voice processing technology, machine learning/deep learning and other directions.
With the research and progress of artificial intelligence technology, artificial intelligence is developed and researched in a plurality of fields, such as common smart home, image retrieval, video monitoring, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical treatment and the like.
Key technologies of speech processing include automatic speech recognition (ASR), speech synthesis (TTS), and voiceprint recognition. Enabling computers to listen, see, speak, and feel is the future development direction of human-computer interaction, and at present voice has become one of the modes of human-computer interaction.
The natural language processing technology is an important direction in the fields of computer science and artificial intelligence. It is a research into various theories and methods that enable efficient communication between humans and computers using natural language. Natural language processing is a science integrating linguistics, computer science and mathematics. Therefore, the research in this field will involve natural language, i.e. the language that people use everyday, so it is closely related to the research of linguistics. Natural language processing techniques typically include speech processing, semantic understanding, text processing, and the like.
Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence, is the fundamental way to give computers intelligence, and is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning. In the embodiments of the present application, after the initial score set is obtained, the prediction scores of the unanswered test questions of the at least two target objects can be obtained, based on a machine learning or deep learning score prediction model, from the answered test question scores of the answered test questions of the at least two target objects in the initial score set.
The embodiment of the application also relates to a Blockchain (Blockchain) technology, wherein the Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism and an encryption algorithm. A blockchain is essentially a decentralized database, a string of blocks that are generated using cryptographic methods. Each block records a batch of test data of user behavior for verifying the validity (anti-counterfeiting) of the test data and generating the next block. Each block of the block chain comprises a hash value of the test data stored in the block (the hash value of the block) and a hash value of a previous block, and the blocks are connected through the hash values to form the block chain. Each block of the block chain may further include information such as a time stamp when the block is generated.
In the embodiments of the present application, the answered test question scores can be stored on the blockchain in real time; the server can obtain the answered test question scores from the blockchain and then perform convolution operations on them through the trained score prediction model to obtain the prediction scores of the unanswered test questions. The sample initial score set, which comprises the answered sample scores and the unanswered sample scores, can also be stored on the blockchain in real time, and the server acquires the sample initial score set from the blockchain to train the score prediction model and obtain the trained score prediction model.
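As an illustration of the hash-linked block structure described above, the following is a minimal sketch; the field names, the record layout, and the use of SHA-256 are assumptions made for illustration and are not specified by this application.

```python
import hashlib
import json
import time

def make_block(score_records, previous_hash):
    """Create one block that stores a batch of test score data, a timestamp,
    the hash of the previous block, and its own hash computed over those fields."""
    block = {
        "timestamp": time.time(),
        "score_records": score_records,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Chain two blocks: the second block stores the hash of the first.
genesis = make_block([{"object": "A", "question": "a", "score": 0.8}], "0" * 64)
block_2 = make_block([{"object": "B", "question": "d", "score": 0.7}], genesis["hash"])
assert block_2["previous_hash"] == genesis["hash"]
```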
In order to better understand the technical solution provided by the embodiment of the present application, some brief descriptions are provided below for application scenarios to which the technical solution provided by the embodiment of the present application is applicable, and it should be noted that the application scenarios described below are only used for illustrating the embodiment of the present application and are not limited. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
The score prediction method provided by the embodiment of the application can be applied to the application scenario shown in fig. 1. Referring to fig. 1, the server 100 is communicatively connected to the terminal device 300 through a network 200, wherein the network 200 may be, but is not limited to, a local area network, a metropolitan area network, a wide area network, or the like, and the number of the terminal devices 300 connected to the server 100 may be plural. The terminal device 300 can transmit communication data and messages to and from the server 100 through the network 200.
The terminal 300 may be a portable device (e.g., a mobile phone, a tablet Computer, a notebook Computer, etc.), or may be a Computer, a smart screen, a Personal Computer (PC), etc. In addition, the terminal device 300 may also include, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, an intelligent household appliance, a vehicle-mounted terminal, and the like. The server 100 may be a server or a server cluster or a cloud computing center composed of a plurality of servers, or a virtualization platform, and may also be a personal computer, a large and medium-sized computer, or a computer cluster, etc. According to implementation needs, the application scenario in the embodiment of the present application may have any number of terminal devices and servers. The embodiment of the present application is not particularly limited to this. The score prediction method provided by the embodiment of the present application may be executed by the server 100, or executed by the terminal device 300 and the server 100 in cooperation.
Illustratively, the terminal device 300 is provided with a client of the test question testing application, and the user can answer the test questions in the test question testing application, and after the user answers a part of the test questions and respectively obtains a score corresponding to each test question in the part of the test questions, the terminal device 300 can obtain the answer condition of the user and send the answer condition to the server 100. After obtaining the answer conditions of the users sent by the multiple terminal devices 300, the server 100 may obtain an initial score set based on the multiple answer conditions, where the initial score set includes the already-answered test question scores of the already-answered test questions of at least two users. The server 100 may determine an object feature relationship between every two target objects, a score feature relationship between every two already-answered test question scores, and an attribute feature relationship between each target object and each already-answered test question score, respectively, based on the initial score set, and perform a convolution operation on each obtained object feature relationship, each score feature relationship, and each attribute feature relationship to obtain a prediction score of the unanswered test question of at least two users. Moreover, after determining the prediction scores of the unanswered test questions of at least two users, the server 100 may recommend corresponding test questions to each user based on the prediction scores.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method operation steps shown in the following embodiments or figures, more or fewer operation steps may be included in the method on the basis of conventional or non-inventive labor. For steps that have no necessary logical causal relationship, the execution order is not limited to that provided by the embodiments of the present application. In an actual processing procedure or in a device, the method may be executed sequentially or in parallel according to the order shown in the embodiments or figures.
Fig. 2 shows a flowchart of a score prediction method provided in an embodiment of the present application, where the method may be executed by the server 100 in fig. 1, or may be executed by a terminal device or other electronic devices. By way of example, a specific implementation process of the score prediction method according to the embodiment of the present application is described below with a server for performing score prediction as an execution subject. The specific implementation process performed by other devices is similar to the process performed by the server alone, and is not described herein again.
As shown in fig. 2, the score prediction method includes the following steps:
in step S201, an initial score set is acquired.
The initial score set may include the answered test questions, the unanswered test questions, and the answered test question scores of the answered test questions of at least two target objects.
For example, there are 20 questions, target object a answers 1 st to 5 th of the 20 questions, target object B answers 6 th to 10 th of the 20 questions, target object C answers 11 th to 15 th of the 20 questions, and target object D answers 16 th to 20 th of the 20 questions. Then the initial score set can be found as: the number of the already-answered test questions of the 1 st to 5 th test questions is respectively answered by the target object A, the number of the already-answered test questions of the 6 th to 10 th test questions is respectively answered by the target object B, the number of the already-answered test questions of the 11 th to 15 th test questions is respectively answered by the target object C, and the number of the already-answered test questions of the 16 th to 20 th test questions is respectively answered by the target object D.
In this embodiment, since the obtained initial score set may include the answered test question scores of at least two target objects, when the target objects have answered only a few test questions, the initial score set may be a sparse matrix containing a small number of answered test question scores. Because the prediction scores of the unanswered test questions of the at least two target objects can be obtained from this initial score set, a large number of prediction scores for the remaining unanswered test questions can be obtained from a small number of answered test question scores, which improves the efficiency of score prediction.
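As an illustration of how sparse such an initial score set can be, the following sketch builds the 20-question, 4-object example above as a matrix with rows for test questions and columns for target objects; the placeholder score value is an assumption.

```python
import numpy as np

num_questions, num_objects = 20, 4            # 20 test questions, objects A-D
initial_scores = np.zeros((num_questions, num_objects))

# Each target object has answered only 5 of the 20 questions.
answered = {0: range(0, 5), 1: range(5, 10), 2: range(10, 15), 3: range(15, 20)}
for obj, questions in answered.items():
    for q in questions:
        initial_scores[q, obj] = 0.8          # assumed normalized answered score

mark = (initial_scores != 0).astype(float)    # 1 where answered, 0 where unanswered
print(f"{int(mark.sum())} of {mark.size} entries are answered scores")  # 20 of 80
```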
Step S202, respectively determining an object feature relationship between every two target objects, a score feature relationship between every two already-answered test question scores, and an attribute feature relationship between each target object and each already-answered test question score.
In step S202, the initial score set may be input into a first convolution network in the trained score prediction model, and a first convolution operation is performed on the initial score set based on the first convolution network to obtain a corresponding feature score set.
The feature score set comprises a plurality of feature scores obtained by performing first convolution operation on each answered test question score. In the feature score set, the relationship between the feature scores corresponding to each two adjacent column elements characterizes the object feature relationship between the corresponding two target objects.
In the feature score set, the relationship between every two adjacent feature scores represents the score feature relationship between corresponding two answered test question scores.
In the feature score set, the relationship between the feature scores represents the attribute feature relationship between each target object and each answered test question score.
In this embodiment, an initial score set containing the answered test question scores of at least two target objects may be input into the trained score prediction model, and a first convolution operation may be performed on the initial score set based on the first convolution network in the score prediction model to obtain a corresponding feature score set. In this process, the initial score set is reduced in dimensionality to obtain an abstract representation of the answering situation of each target object; that is, the first convolution operation on the initial score set learns the feature relationships among the target objects, among the answered test question scores, and between the target objects and the answered test question scores in the initial score set. The scores of the unanswered test questions of each target object can then be predicted from these feature relationships to obtain the prediction scores of the unanswered test questions of the at least two target objects, which improves the accuracy of score prediction.
Step S203, performing convolution operation on the obtained object feature relations, the obtained score feature relations and the obtained attribute feature relations to obtain the prediction scores of the unanswered test questions of at least two target objects.
In step S203, since the feature score set may represent an object feature relationship between every two target objects, a score feature relationship between every two already-answered question scores, and an attribute feature relationship between each target object and each already-answered question score, the feature score set may be input to a second convolution network in the score prediction model, and a second convolution operation is performed on the feature score set based on the second convolution network to obtain a corresponding target score set.
The target score set can comprise predicted scores of unanswered test questions of at least two target objects.
For example, suppose again that there are 20 test questions and the initial score set contains the answered test question scores obtained by target object A for questions 1 to 5, by target object B for questions 6 to 10, by target object C for questions 11 to 15, and by target object D for questions 16 to 20. The initial score set is input into the trained score prediction model and passes through the first convolution network and the second convolution network in the score prediction model. After the convolution operations are performed on the initial score set, the obtained target score set contains: the predicted scores of target object A for the unanswered questions 6 to 20, the predicted scores of target object B for the unanswered questions 1 to 5 and 11 to 20, the predicted scores of target object C for the unanswered questions 1 to 10 and 16 to 20, and the predicted scores of target object D for the unanswered questions 1 to 15.
In this embodiment, the obtained feature score set may be input into the second convolution network in the score prediction model, and a second convolution operation is performed on the feature score set based on the second convolution network to obtain a corresponding target score set. In this way, the scores of all the unanswered test questions of all the target objects can be predicted from the learned feature relationships among the target objects, among the answered test question scores, and between the target objects and the answered test question scores in the initial score set, so as to obtain the prediction scores of the unanswered test questions of the at least two target objects, which improves the accuracy of score prediction and makes the score prediction result output by the score prediction model more convincing.
The training method of the score prediction model used in step S202 may be as shown in fig. 3a, and may be executed by a server or a terminal device. This embodiment is described by taking the server executing the training method as an example.
As shown in fig. 3a, the training method of the fractional prediction model may include the following steps:
step S301, a sample initial score set is obtained.
The acquired sample initial score set may include: answered sample test questions, unanswered sample test questions, answered sample scores of the answered sample test questions of at least two sample objects, and unanswered sample scores of the unanswered sample test questions of the at least two sample objects. Each unanswered sample score is determined based on the answered sample scores associated with the corresponding sample object.
Step S302, inputting the initial sample score set into a score prediction model to be trained, and obtaining a corresponding sample prediction score set based on the score prediction model.
In step S302, the initial sample score set may be input into a score prediction model to be trained, and based on the score prediction model, a sample prediction score set corresponding to the initial sample score set may be obtained.
The sample prediction score set may include: the sample prediction scores of the sample questions of the at least two sample objects that have been answered, and the sample prediction scores of the sample questions of the at least two sample objects that have not been answered.
Step S303, determining a corresponding target loss value according to the sample prediction score set and the sample initial score set.
And determining a corresponding target loss value according to the obtained sample prediction score set and the original sample initial score set. Generally, the target loss value is a measure of how close the actual output is to the desired output. The smaller the target loss value, the closer the actual output is to the desired output.
Step S304, determining whether the target loss value converges to a preset target value; if not, executing step S305; if so, step S306 is performed.
Whether the target loss value converges to a preset target value is judged. If the target loss value is smaller than or equal to the preset target value, or if the variation amplitude of the target loss values obtained in N consecutive training iterations is smaller than or equal to the preset target value, the target loss value is considered to have converged to the preset target value, indicating convergence; otherwise, the target loss value has not converged.
Step S305, adjusting parameters of the score prediction model to be trained according to the determined target loss value.
If the target loss value has not converged, the model parameters are adjusted, and the process then returns to step S302 to continue the next round of training.
Step S306, ending the training to obtain the trained score prediction model.
If the target loss value has converged, the currently obtained score prediction model is taken as the trained score prediction model.
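A minimal training-loop sketch of steps S301 to S306 is shown below, assuming PyTorch; the optimizer, learning rate, convergence thresholds, and the use of the first loss term alone as a stand-in for the target loss are illustrative assumptions, and `model` can be any score prediction model that maps the sample initial score set to a sample prediction score set.

```python
import torch

def train(model, sample_init_scores, mark, max_rounds=500, target_value=1e-4):
    """Steps S301-S306: repeat forward pass, loss computation and parameter
    update until the target loss value converges to the preset target value."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # assumed optimizer
    previous = None
    for _ in range(max_rounds):
        sample_pred_scores = model(sample_init_scores)           # step S302
        # Step S303 (stand-in): norm of the masked difference between answered
        # sample scores and answered sample prediction scores.
        loss = torch.norm(mark * (sample_init_scores - sample_pred_scores))
        # Step S304: converged if the loss (or its change) is small enough.
        if loss.item() <= target_value or (
                previous is not None and abs(previous - loss.item()) <= target_value):
            break                                                # step S306
        previous = loss.item()
        optimizer.zero_grad()                                    # step S305
        loss.backward()
        optimizer.step()
    return model
```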
In the step S302, the specific process of inputting the initial sample score set into the score prediction model to be trained and obtaining the corresponding sample prediction score set based on the score prediction model may be as shown in fig. 3b, and includes the following steps:
step S3021, inputting the sample initial score set into a first convolution network in the score prediction model to be trained, and obtaining a corresponding sample feature score set based on the first convolution network.
In step S3021, the sample initial score set may be input to a first convolution network in the score prediction model to be trained, and a first convolution operation is performed on the sample initial score set based on the first convolution network to obtain a sample feature score set corresponding to the sample initial score set.
Specifically, in an embodiment, if the number of test questions is M and the number of target objects is N, the obtained sample initial score set may be an M × N matrix. When target object j has answered test question i, the element in the ith row and jth column of the sample initial score set is an answered sample score; when target object j has not answered test question i, the element in the ith row and jth column of the sample initial score set is an unanswered sample score.
As shown in fig. 3c, the sample initial score set may be input into the first convolution network of the score prediction model. After passing through a first convolution layer of the first convolution network, in which the convolution kernel size is M × 1 and the number of convolution kernels is M_1, an M_1 × N matrix is obtained; after passing through a second convolution layer of the first convolution network, in which the convolution kernel size is M_1 × 1 and the number of convolution kernels is M′, an M′ × N matrix is obtained, and this matrix is the sample feature score set, where M′ is smaller than M. The number of convolution layers included in the first convolution network is not limited to the first convolution layer and the second convolution layer; that is, the first convolution network may include more convolution layers to increase its network complexity.
Step S3022, inputting the sample characteristic score set into a second convolution network in the score prediction model to be trained, and obtaining a corresponding sample prediction score set based on the second convolution network.
In step S3022, the sample feature score set may be input to a second convolution network in the score prediction model to be trained, and a second convolution operation is performed on the sample feature score set based on the second convolution network to obtain a sample prediction score set corresponding to the sample initial score set.
Specifically, in an embodiment, as shown in fig. 3d, after the M′ × N sample feature score set is obtained, it may be input into the second convolution network of the score prediction model. After passing through a third convolution layer of the second convolution network, in which the convolution kernel size is M′ × 1 and the number of convolution kernels is M_1, an M_1 × N matrix is obtained; after passing through a fourth convolution layer of the second convolution network, in which the convolution kernel size is M_1 × 1 and the number of convolution kernels is M, an M × N matrix is obtained, and this matrix is the sample prediction score set. When target object j has answered test question i, the element in the ith row and jth column of the sample prediction score set is an answered sample prediction score; when target object j has not answered test question i, the element in the ith row and jth column of the sample prediction score set is an unanswered sample prediction score. The number of convolution layers included in the second convolution network is not limited to the third convolution layer and the fourth convolution layer; that is, the second convolution network may include more convolution layers to increase its network complexity.
In this embodiment, during training of the score prediction model, the sample initial score set may be input into the first convolution network in the score prediction model to obtain a corresponding sample feature score set, and the obtained sample feature score set is then input into the second convolution network in the score prediction model to obtain a corresponding sample prediction score set. In this way, the feature relationships among the sample objects, among the answered sample scores, and between the sample objects and the answered sample scores in the sample initial score set can be learned based on the first convolution network in the score prediction model, and score prediction is then performed based on the second convolution network in the score prediction model to obtain the answered sample prediction scores of the answered sample test questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample test questions of the at least two sample objects.
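The two convolution networks described above can be sketched as follows, assuming PyTorch; the activation functions, the values of M_1 and M′, and the batch handling are assumptions made for illustration, with the sample initial score set laid out as an M × N matrix of test questions (rows) by target objects (columns).

```python
import torch
import torch.nn as nn


class FullHeightConv(nn.Module):
    """One convolution layer of the kind described above: kernels of size
    (in_rows x 1) slide only along the columns, so an (in_rows x N) input is
    mapped to a (num_kernels x N) output."""

    def __init__(self, in_rows: int, num_kernels: int):
        super().__init__()
        self.conv = nn.Conv2d(1, num_kernels, kernel_size=(in_rows, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_rows, N) -> add a channel dimension for Conv2d
        out = self.conv(x.unsqueeze(1))   # (batch, num_kernels, 1, N)
        return out.squeeze(2)             # (batch, num_kernels, N)


class ScorePredictionModel(nn.Module):
    """First network: M x N -> M_1 x N -> M' x N (sample feature score set).
    Second network: M' x N -> M_1 x N -> M x N (sample prediction score set)."""

    def __init__(self, num_questions: int, m_1: int = 64, m_prime: int = 16):
        super().__init__()
        self.first_network = nn.Sequential(
            FullHeightConv(num_questions, m_1), nn.ReLU(),     # assumed activation
            FullHeightConv(m_1, m_prime), nn.ReLU(),
        )
        self.second_network = nn.Sequential(
            FullHeightConv(m_prime, m_1), nn.ReLU(),
            FullHeightConv(m_1, num_questions), nn.Sigmoid(),  # scores lie in [0, 1]
        )

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        features = self.first_network(scores)   # sample feature score set
        return self.second_network(features)    # sample prediction score set


# Usage: a batch of one 20 x 4 score matrix (20 questions, 4 target objects).
model = ScorePredictionModel(num_questions=20)
predictions = model(torch.rand(1, 20, 4))       # shape (1, 20, 4)
```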
In an embodiment, in the step S303, a specific process of determining the corresponding target loss value according to the sample prediction score set and the sample initial score set may be as shown in fig. 3e, and includes the following steps:
step S3031, determining a first loss value according to a norm of a difference between each of the already answered sample scores and the corresponding each of the already answered sample prediction scores.
Specifically, the first loss value may be obtained according to the following formula:

f_1 = ‖ Mark ⊙ (L_init − L_predict) ‖

where f_1 is the first loss value, L_init is the sample initial score set, L_predict is the sample prediction score set, ⊙ denotes element-wise multiplication, and Mark is an indicator matrix: if an element of L_init is an unanswered sample score, the corresponding element of Mark is 0; if an element of L_init is an answered sample score, the corresponding element of Mark is 1. ‖·‖ denotes the matrix norm, which may be obtained according to the following formula:

‖A‖ = sqrt( Σ_{i=1..M} Σ_{j=1..N} A_{ij}² )
Step S3032, determining a second loss value according to the difference between the answered sample average score determined from the answered sample scores and the unanswered predicted average score determined from the unanswered sample prediction scores.
In step S3032, a corresponding average score of the answered samples may be determined according to the scores of the answered samples, and a corresponding average score of the unanswered samples may be determined according to the predicted scores of the unanswered samples.
A second loss value is then determined based on the difference between the average score of the samples that have been answered and the average score of the predicted samples that have not been answered.
Specifically, the second loss value may be obtained according to the following formula:

f_2 = | sum(Mark ⊙ L_init) / sum(Mark) − sum((1 − Mark) ⊙ L_predict) / sum(1 − Mark) |

where f_2 is the second loss value, L_init is the sample initial score set, L_predict is the sample prediction score set, Mark is the indicator matrix defined above, sum(·) sums all elements of a matrix, M is the number of test questions contained in L_init and L_predict, and N is the number of target objects contained in L_init and L_predict, so that each matrix has M × N elements.
Step S3033, determining a corresponding target loss value based on the first loss value and the second loss value.
Specifically, the target loss value may be obtained according to the following formula:

f = f_1 + λ · f_2

where f is the target loss value, f_1 is the first loss value, f_2 is the second loss value, and λ is a preset hyperparameter.
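Under this reading of the formulas, the first loss value, the second loss value, and the target loss value could be computed as in the following sketch; PyTorch is assumed, and the default value of the hyperparameter λ is an illustrative placeholder.

```python
import torch


def first_loss(l_init, l_predict, mark):
    """f1: norm of the masked difference between the answered sample scores
    and the corresponding answered sample prediction scores."""
    return torch.norm(mark * (l_init - l_predict))


def second_loss(l_init, l_predict, mark):
    """f2: difference between the answered sample average score and the
    unanswered predicted average score."""
    answered_mean = (mark * l_init).sum() / mark.sum()
    unanswered_mean = ((1 - mark) * l_predict).sum() / (1 - mark).sum()
    return torch.abs(answered_mean - unanswered_mean)


def target_loss(l_init, l_predict, mark, lam=0.1):
    """f = f1 + lambda * f2, with lambda a preset hyperparameter."""
    return first_loss(l_init, l_predict, mark) + lam * second_loss(l_init, l_predict, mark)
```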
In this embodiment, in order to make the prediction scores obtained by the score prediction model restore the actual scores as much as possible, that is, to keep the difference between the prediction scores and the actual scores small, the first loss value may be determined based on the norm of the difference between each answered sample score and the corresponding answered sample prediction score. For better score prediction of the unanswered test questions, a prior information constraint can be introduced, based on the prior assumption that the average value of the prediction scores should not differ much from the average value of the actual scores. Accordingly, the answered sample average score and the unanswered predicted average score can be obtained respectively, and the second loss value is determined according to the difference between them. Finally, the target loss value is determined according to the first loss value and the second loss value. Because the prior information constraint is added in the process of training the score prediction model, the interpretability of the model can be improved, and after the trained score prediction model is obtained, predicting scores with the score prediction model can produce a more accurate score prediction result.
In another embodiment, in the step S303, a specific process of determining the corresponding target loss value according to the sample prediction score set and the sample initial score set may also be as shown in fig. 3f, and includes the following steps:
step S3031 determines a first loss value according to the norm of the difference between each of the scores of the already answered samples and the corresponding prediction scores of each of the already answered samples.
Step 3032, determining a second loss value according to the difference between the average score of the answered samples determined according to the score of each answered sample and the average score of the unanswered predictions determined according to the prediction score of each unanswered sample.
Step S3033, respectively determining the sample score degree corresponding to each answered sample score according to the ratio of that answered sample score to the test question difficulty level of the corresponding test question, and respectively determining the predicted score degree corresponding to each unanswered sample prediction score according to the ratio of that unanswered sample prediction score to the test question difficulty level of the corresponding test question; and determining a third loss value according to the sample average score degree determined from the sample score degrees and the predicted average score degree determined from the predicted score degrees.
In step S3033, the sample score degree corresponding to each answered sample score may be determined according to the ratio of that answered sample score to the test question difficulty level of the corresponding test question, and the predicted score degree corresponding to each unanswered sample prediction score may be determined according to the ratio of that unanswered sample prediction score to the test question difficulty level of the corresponding test question.
Then, the corresponding sample average score degree is determined according to the sample score degrees, and the corresponding predicted average score degree is determined according to the predicted score degrees. Finally, the third loss value is determined according to the difference between the sample average score degree and the predicted average score degree.
Specifically, the third loss value may be obtained according to the following formula:

f_3 = | sum(Mark ⊙ (L_init / D)) / sum(Mark) − sum((1 − Mark) ⊙ (L_predict / D)) / sum(1 − Mark) |

where f_3 is the third loss value, L_init is the sample initial score set, L_predict is the sample prediction score set, Mark is the indicator matrix defined above, D is the test question difficulty level, L_init / D and L_predict / D denote dividing each score by the test question difficulty level of its corresponding test question (i.e., the score degrees), sum(·) sums all elements of a matrix, M is the number of test questions contained in L_init and L_predict, and N is the number of target objects contained in L_init and L_predict.
Step S3034, determining a corresponding target loss value based on the first loss value, the second loss value and the third loss value.
Specifically, the target loss value may be obtained according to the following formula:

f = f_1 + λ · f_2 + μ · f_3

where f is the target loss value, f_1 is the first loss value, f_2 is the second loss value, f_3 is the third loss value, and λ and μ are preset hyperparameters.
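The third loss value can be sketched in the same way; PyTorch is assumed, and `difficulty` is taken to be a vector holding one test question difficulty level per row of the score matrices. The combined loss f = f1 + λ·f2 + μ·f3 would reuse the `first_loss` and `second_loss` helpers sketched above.

```python
import torch


def third_loss(l_init, l_predict, mark, difficulty):
    """f3: difference between the average answered sample score degree and the
    average unanswered predicted score degree, where a score degree is a score
    divided by the difficulty level of its test question (one level per row)."""
    degree_init = l_init / difficulty.unsqueeze(1)        # (M, N) score degrees
    degree_predict = l_predict / difficulty.unsqueeze(1)
    answered_mean = (mark * degree_init).sum() / mark.sum()
    unanswered_mean = ((1 - mark) * degree_predict).sum() / (1 - mark).sum()
    return torch.abs(answered_mean - unanswered_mean)

# Combined target loss: f = first_loss + lam * second_loss + mu * third_loss,
# where lam and mu are preset hyperparameters (helpers as sketched above).
```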
In this embodiment, on the basis of determining the first loss value and the second loss value, and based on the relationship between test question difficulty and average test question score, the sample score degree corresponding to each answered sample score is determined from that score and the test question difficulty level of the corresponding test question, and the predicted score degree corresponding to each unanswered sample prediction score is determined from that prediction score and the test question difficulty level of the corresponding test question. The corresponding sample average score degree is then determined from the sample score degrees, the corresponding predicted average score degree is determined from the predicted score degrees, and the third loss value is determined according to the difference between the sample average score degree and the predicted average score degree. In this way, the target loss value can be determined according to the first loss value, the second loss value, and the third loss value, which further increases the interpretability of the model and further improves the prediction accuracy when scores are predicted with the trained score prediction model.
In the training method of the score prediction model provided in the above embodiments, the sample initial score set includes, in addition to the already-answered sample scores of the already-answered sample questions of the at least two sample objects, the unanswered sample scores of the unanswered sample questions of the at least two sample objects, and each unanswered sample score is determined based on the already-answered sample scores associated with the corresponding sample object. This accelerates the training of the score prediction model and speeds up the convergence of the target loss value. In addition, during training, the first loss value constrains the already-answered sample prediction scores in the sample prediction score set output by the score prediction model so that they restore the already-answered sample scores in the sample initial score set as closely as possible, while the second and third loss values constrain the unanswered sample prediction scores in the sample prediction score set so as to obtain a better score prediction result.
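The embodiment does not fix how each unanswered sample score is derived from the already-answered sample scores of the same sample object; one hypothetical choice, sketched below purely for illustration, fills a sample object's unanswered entries with the mean of that object's answered scores.

    import numpy as np

    def fill_unanswered(scores, answered_mask):
        # scores: (N, M) matrix, answered entries hold normalized scores, others are 0
        # answered_mask: (N, M) boolean matrix, True where a question was answered
        filled = scores.copy()
        for i in range(scores.shape[0]):
            answered = scores[i][answered_mask[i]]
            if answered.size > 0:
                # assumption: use the object's mean answered score as the initial value
                filled[i][~answered_mask[i]] = answered.mean()
        return filled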
In some embodiments, the score prediction method proposed in the present application may be implemented according to the process shown in fig. 4a, which may be executed by the server 100 in fig. 1, or by a terminal device or another electronic device. In the following, a server performing score prediction is taken as the execution subject by way of example; the specific implementation process performed by other devices is similar to that performed by the server alone and is not described again here.
As shown in fig. 4a, the following steps may be included:
step S401, an initial score set is obtained.
The initial score set may include the already-answered test questions, the unanswered test questions, and the already-answered test question scores of the already-answered test questions of at least two target objects. Each already-answered test question score is a normalized value of the answer score obtained when the corresponding target object answered that test question, that is, each already-answered test question score is a value between 0 and 1. For each unanswered test question, the corresponding initial score may be 0.
For example, as shown in fig. 4B, it is assumed that there are 10 test questions, namely test question a, test question b, test question c, test question d, test question e, test question f, test question g, test question h, test question i, and test question j, that the full score of each test question is 10 points, and that 3 target objects, namely target object A, target object B, and target object C, answer the 10 test questions. Target object A answers test questions a, b, and c, with a score of 8 points for test question a, 7 points for test question b, and 9 points for test question c; target object B answers test questions d, e, f, and g, with a score of 7 points for test question d, 4 points for test question e, 5 points for test question f, and 6 points for test question g; target object C answers test questions h, i, and j, with a score of 9 points for test question h, 8 points for test question i, and 6 points for test question j.
Thus, the initial score set that may be obtained includes: target object A: the already-answered test question score corresponding to test question a is 0.8, the already-answered test question score corresponding to test question b is 0.7, the already-answered test question score corresponding to test question c is 0.9, and the scores of the remaining test questions are 0; target object B: the already-answered test question score corresponding to test question d is 0.7, the already-answered test question score corresponding to test question e is 0.4, the already-answered test question score corresponding to test question f is 0.5, the already-answered test question score corresponding to test question g is 0.6, and the scores of the remaining test questions are 0; target object C: the already-answered test question score corresponding to test question h is 0.9, the already-answered test question score corresponding to test question i is 0.8, the already-answered test question score corresponding to test question j is 0.6, and the scores of the remaining test questions are 0.
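Purely for illustration, the initial score set of this example could be represented as a 3×10 matrix (rows: target objects A, B, C; columns: test questions a through j), with answered scores normalized by the full score of 10 points and unanswered questions set to 0. This matrix layout is an assumption, not prescribed by the embodiment.

    import numpy as np

    initial_scores = np.array([
        [0.8, 0.7, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # target object A (a, b, c answered)
        [0.0, 0.0, 0.0, 0.7, 0.4, 0.5, 0.6, 0.0, 0.0, 0.0],  # target object B (d, e, f, g answered)
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9, 0.8, 0.6],  # target object C (h, i, j answered)
    ])
    answered_mask = initial_scores > 0  # True where a test question has been answered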
Step S402, inputting the initial score set into the trained score prediction model, and obtaining a corresponding target score set based on a first convolution network and a second convolution network in the score prediction model.
The initial score set can be input into a trained score prediction model, and a first convolution operation is performed on the initial score set based on a first convolution network in the score prediction model. This operation compresses the answer condition vector of each target object in the initial score set into a dense vector, that is, a corresponding feature score set is obtained. In the process of performing this dimensionality reduction operation on the initial score set, an abstract representation of each target object's answer condition is obtained, and implicit relationships between the test questions are also captured.
After the feature score set is obtained, a second convolution operation may be performed on the feature score set based on a second convolution network in the score prediction model to obtain a corresponding target score set, where the target score set may include prediction scores of unanswered test questions of at least two target objects.
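The embodiment does not disclose the concrete layer shapes of the two convolution networks; the following PyTorch sketch is only one hypothetical way to arrange a first convolution that compresses each object's answer vector into a dense feature score set and a second convolution that expands it back into a target score set. The module name, layer sizes, and activations are assumptions.

    import torch
    import torch.nn as nn

    class ScorePredictionSketch(nn.Module):
        def __init__(self, num_questions, hidden_dim=32):
            super().__init__()
            # first convolution network: compresses the answer vector of each target
            # object into a dense feature score set
            self.first_conv = nn.Conv1d(1, hidden_dim, kernel_size=num_questions)
            # second convolution network: maps the feature score set back to one
            # predicted score per test question
            self.second_conv = nn.ConvTranspose1d(hidden_dim, 1, kernel_size=num_questions)

        def forward(self, initial_scores):
            # initial_scores: (num_objects, num_questions) tensor of normalized scores
            x = initial_scores.unsqueeze(1)                      # (num_objects, 1, num_questions)
            features = torch.relu(self.first_conv(x))            # dense feature score set
            target = torch.sigmoid(self.second_conv(features))   # keep predictions in [0, 1]
            return target.squeeze(1)                             # (num_objects, num_questions)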
For example, assuming that the initial score set is the initial score set in fig. 4b, as shown in fig. 4c, the initial score set may be input into a trained score prediction model, and the target score set may be obtained by a first convolution network and a second convolution network in the score prediction model as follows: target object a: the prediction score corresponding to the test question d is 0.8, the prediction score corresponding to the test question e is 0.6, the prediction score corresponding to the test question f is 0.8, the prediction score corresponding to the test question g is 0.7, the prediction score corresponding to the test question h is 1, the prediction score corresponding to the test question i is 0.8, the prediction score corresponding to the test question j is 0.6, and the scores of the rest of the test questions are the corresponding already-answered test question scores; target object B: the prediction score corresponding to the test question a is 0.7, the prediction score corresponding to the test question b is 0.6, the prediction score corresponding to the test question c is 0.7, the prediction score corresponding to the test question h is 0.7, the prediction score corresponding to the test question i is 0.6, the prediction score corresponding to the test question j is 0.5, and the scores of the rest of the test questions are the corresponding answered test question scores; target object C: the prediction score corresponding to the test question a is 0.8, the prediction score corresponding to the test question b is 0.6, the prediction score corresponding to the test question c is 0.7, the prediction score corresponding to the test question d is 0.8, the prediction score corresponding to the test question e is 0.7, the prediction score corresponding to the test question f is 0.9, the prediction score corresponding to the test question g is 0.8, and the scores of the rest of the test questions are the corresponding already-answered test question scores.
After the target score set is obtained, as shown in fig. 4d, the prediction scores of target object A for the unanswered test questions d, e, f, g, h, i, and j, the prediction scores of target object B for the unanswered test questions a, b, c, h, i, and j, and the prediction scores of target object C for the unanswered test questions a, b, c, e, f, and g may be obtained from the prediction scores in the target score set. Target object A: the prediction score for test question d is 8 points, for test question e 6 points, for test question f 8 points, for test question g 7 points, for test question h 10 points, for test question i 8 points, and for test question j 6 points. Target object B: the prediction score for test question a is 7 points, for test question b 6 points, for test question c 7 points, for test question h 7 points, for test question i 6 points, and for test question j 5 points. Target object C: the prediction score for test question a is 8 points, for test question b 6 points, for test question c 7 points, for test question d 8 points, for test question e 7 points, for test question f 9 points, and for test question g 8 points.
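For illustration, converting the normalized predictions of target object A on its unanswered test questions back into points (with the assumed full score of 10) is a simple scaling step:

    import numpy as np

    full_score = 10
    # normalized predicted scores of target object A for test questions d, e, f, g, h, i, j
    predicted_normalized = np.array([0.8, 0.6, 0.8, 0.7, 1.0, 0.8, 0.6])
    predicted_points = predicted_normalized * full_score
    print(predicted_points)  # [ 8.  6.  8.  7. 10.  8.  6.]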
In one embodiment, after the prediction scores of the unanswered test questions of the at least two target objects are obtained, corresponding test questions can be recommended for the target objects according to the prediction scores of the unanswered test questions of the at least two target objects.
For example, suppose there are 50 test questions and 20 target objects in total answer them. After the prediction scores of the unanswered test questions of the target objects are obtained, assume that target object a among the 20 target objects has answered only the 7th to 16th test questions, so the remaining 40 test questions are unanswered test questions of target object a. Among the prediction scores corresponding to target object a, the prediction scores of the 1st, 2nd, and 3rd test questions are relatively low, so the 1st, 2nd, and 3rd test questions can be recommended to target object a. In addition, according to the already-answered scores or the unanswered prediction scores of the 20 target objects for these test questions, the difficulty of the 1st test question can be determined to be hard, the difficulty of the 2nd test question to be easy, and the difficulty of the 3rd test question to be medium. The test questions may then be recommended to target object a as shown in fig. 5.
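As one hypothetical way to realize this recommendation step, the sketch below picks, for a single target object, the unanswered test questions with the lowest predicted scores; the function name, the parameter top_k, and any selection rule beyond "lowest predicted scores" are assumptions.

    import numpy as np

    def recommend_questions(predicted_scores, answered_mask, top_k=3):
        # predicted_scores: length-M vector of predicted scores for one target object
        # answered_mask: length-M boolean vector, True where the question was answered
        candidates = np.where(answered_mask, np.inf, predicted_scores)
        # recommend the unanswered test questions with the lowest predicted scores
        return np.argsort(candidates)[:top_k]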
In the embodiment, after the prediction scores of the unanswered test questions of each target object are obtained, the corresponding test questions can be recommended to each target object according to the prediction scores, so that the test questions can be recommended to each target object in a targeted manner, and the test question recommending process has stronger basis and interpretability.
Based on the same inventive concept as the score prediction method shown in fig. 2, an embodiment of the present application further provides a score prediction apparatus, which may be disposed in a server or a terminal device. Since the apparatus corresponds to the score prediction method of the present application and the principle by which it solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
Fig. 6 shows a schematic structural diagram of a score prediction apparatus provided in an embodiment of the present application, and as shown in fig. 6, the score prediction apparatus includes an initial score obtaining unit 601, a feature relationship determining unit 602, and a prediction score determining unit 603.
An initial score obtaining unit 601, configured to obtain an initial score set; the initial score set includes: the already-answered test questions, the unanswered test questions, and the already-answered test question scores of the already-answered test questions of at least two target objects;
A feature relationship determining unit 602, configured to determine an object feature relationship between every two target objects, a score feature relationship between every two already-answered test question scores, and an attribute feature relationship between each target object and each already-answered test question score, respectively;
and a prediction score determining unit 603, configured to perform convolution operation on the obtained object feature relationships, the obtained score feature relationships, and the obtained attribute feature relationships, to obtain prediction scores of unanswered test questions of at least two target objects.
In an alternative embodiment, the feature relation determining unit 602 is specifically configured to:
inputting the initial score set into a first convolution network in a trained score prediction model, and performing first convolution operation on the initial score set based on the first convolution network to obtain a corresponding feature score set;
the characteristic score set comprises a plurality of characteristic scores obtained by performing first convolution operation on each answered test question score; in the feature score set, the relationship between the feature scores corresponding to each two adjacent column elements represents the object feature relationship between two corresponding target objects;
in the feature score set, the relationship between every two adjacent feature scores represents the score feature relationship between corresponding two answered test question scores;
And in the feature score set, the relationship between each feature score represents the attribute feature relationship between each target object and each answered test question score.
In an alternative embodiment, the prediction score determining unit 603 is specifically configured to:
inputting the feature score set into a second convolution network in the score prediction model, and performing second convolution operation on the feature score set based on the second convolution network to obtain a corresponding target score set;
wherein the set of target scores comprises: the predicted scores of the unanswered questions of at least two target objects.
In an alternative embodiment, as shown in fig. 7, the score prediction apparatus may further include a model training unit 701, configured to:
acquiring a sample initial score set; the sample initial score set includes: the already-answered sample questions, the unanswered sample questions, the already-answered sample scores of the already-answered sample questions of at least two sample objects, and the unanswered sample scores of the unanswered sample questions of the at least two sample objects; each unanswered sample score is determined based on the already-answered sample scores associated with the corresponding sample object;
training the score prediction model based on the initial score set of the samples until the score prediction model converges, wherein one training process comprises the following steps:
Inputting the initial sample score set into a score prediction model to be trained, and obtaining a corresponding sample prediction score set based on the score prediction model;
determining a corresponding target loss value according to the sample prediction score set and the sample initial score set;
and adjusting parameters of the score prediction model to be trained according to the target loss value.
In an alternative embodiment, the model training unit 701 is further configured to:
performing a first convolution operation on the sample initial score set based on a first convolution network in the score prediction model to obtain a corresponding sample feature score set;
and performing second convolution operation on the sample characteristic score set based on a second convolution network in the score prediction model to obtain a corresponding sample prediction score set.
In an alternative embodiment, the sample prediction score set comprises: the already-answered sample prediction scores of the already-answered sample questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample questions of the at least two sample objects; the model training unit 701 is further configured to:
determining a first loss value according to the norm of the difference between each answered sample score and the corresponding each answered sample prediction score;
Determining a corresponding average score of the answered samples according to the scores of the answered samples, and determining a corresponding average score of the unanswered samples according to the predicted scores of the unanswered samples; determining a second loss value based on the difference between the average score of the samples that have been answered and the average score of the predictions that have not been answered;
based on the first loss value and the second loss value, a corresponding target loss value is determined.
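A minimal sketch of the first and second loss values described above, assuming N×M numpy score matrices and a boolean mask of answered entries; the use of the L2 norm for the first loss is an assumption, since the embodiment only speaks of the norm of the difference.

    import numpy as np

    def first_and_second_loss(L_init, L_predict, answered_mask):
        answered_true = L_init[answered_mask]      # already-answered sample scores
        answered_pred = L_predict[answered_mask]   # corresponding answered sample prediction scores
        # first loss: norm of the difference between the two
        f1 = np.linalg.norm(answered_true - answered_pred)
        # second loss: difference between the answered sample average score and the
        # unanswered prediction average score
        f2 = abs(answered_true.mean() - L_predict[~answered_mask].mean())
        return f1, f2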
In an alternative embodiment, the sample prediction score set comprises: the already-answered sample prediction scores of the already-answered sample questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample questions of the at least two sample objects; the model training unit 701 is further configured to:
determining a first loss value according to the norm of the difference between each answered sample score and the corresponding each answered sample prediction score;
determining a corresponding average score of the answered samples according to the scores of the answered samples, and determining a corresponding average score of the unanswered samples according to the predicted scores of the unanswered samples; determining a second loss value based on the difference between the average score of the samples that have been answered and the average score of the predictions that have not been answered;
respectively determining the sample score degrees corresponding to the already-answered sample scores according to the ratio of each already-answered sample score to the test question difficulty level of the corresponding test question; respectively determining the prediction score degrees corresponding to the unanswered sample prediction scores according to the ratio of each unanswered sample prediction score to the test question difficulty level of the corresponding test question; determining the corresponding sample average score degree according to the sample score degrees, and determining the corresponding prediction average score degree according to the prediction score degrees; determining a third loss value according to the difference between the sample average score degree and the prediction average score degree;
Based on the first loss value, the second loss value, and the third loss value, a respective target loss value is determined.
In an alternative embodiment, as shown in fig. 7, the score prediction apparatus may further include a test question recommending unit 702, configured to:
and recommending corresponding test questions for each target object according to the prediction scores of the unanswered test questions of at least two target objects.
The embodiment of the method and the embodiment of the device are based on the same inventive concept, and the embodiment of the application also provides electronic equipment.
In one embodiment, the electronic device may be a server, such as server 100 shown in FIG. 1. In this embodiment, the electronic device may be configured as shown in fig. 8, and include a memory 801, a communication module 803, and one or more processors 802.
A memory 801 for storing computer programs executed by the processor 802. The memory 801 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, programs required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 801 may be a volatile memory (volatile memory), such as a random-access memory (RAM); the memory 801 may also be a non-volatile memory (non-volatile memory) such as, but not limited to, a read-only memory (rom), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD), or the memory 801 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 801 may be a combination of the above memories.
The processor 802 may include one or more Central Processing Units (CPUs), a digital processing unit, and the like. A processor 802 for implementing the above-described score prediction method when invoking a computer program stored in the memory 801.
The communication module 803 is used for communicating with terminal devices and other electronic devices. If the electronic device is a server, the server may receive the initial score set sent by the terminal device through the communication module 803.
The embodiment of the present application does not limit the specific connection medium among the memory 801, the communication module 803 and the processor 802. In fig. 8, the memory 801 and the processor 802 are connected by a bus 804, the bus 804 is represented by a thick line in fig. 8, and the connection manner between other components is merely illustrative and not limited. The bus 804 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
In another embodiment, the electronic device may be any electronic device such as a mobile phone, a tablet computer, a Point of sale (POS), a vehicle-mounted computer, a smart wearable device, and a PC, and the electronic device may also be the terminal device 300 shown in fig. 1.
Fig. 9 shows a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic apparatus includes: a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the electronic device in detail with reference to fig. 9:
the RF circuit 910 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then processing the received downlink information to the processor 980; in addition, the data for designing uplink is transmitted to the base station.
The memory 920 may be used to store software programs and modules, such as program instructions/modules corresponding to the score prediction method and apparatus in the embodiments of the present application, and the processor 980 may execute various functional applications and data processing of the electronic device, such as the score prediction method provided in the embodiments of the present application, by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program of at least one application, and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 930 may be used to receive numeric or character information input by a user and generate key signal inputs related to user settings and function control of the terminal.
Optionally, the input unit 930 may include a touch panel 931 and other input devices 932.
The touch panel 931, also referred to as a touch screen, may collect touch operations of a user on or near the touch panel 931 (for example, operations of the user on the touch panel 931 or near the touch panel 931 by using any suitable object or accessory such as a finger or a stylus pen), and implement corresponding operations according to a preset program, for example, operations of the user clicking a shortcut identifier of a function module. Alternatively, the touch panel 931 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave.
Alternatively, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by a user or interface information presented to the user and various menus of the electronic device. The display unit 940 is a display system of the terminal device, and is configured to present an interface, such as a display desktop, an operation interface of an application, or an operation interface of a live application.
The display unit 940 may include a display panel 941. Alternatively, the Display panel 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
Further, the touch panel 931 may cover the display panel 941, and when the touch panel 931 detects a touch operation on or near the touch panel 931, the touch panel transmits the touch operation to the processor 980 to determine the type of the touch event, and then the processor 980 provides a corresponding interface output on the display panel 941 according to the type of the touch event.
Although in fig. 9, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the terminal.
The electronic device may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that can adjust the brightness of the display panel 941 according to the brightness of ambient light, and a proximity sensor that can turn off the backlight of the display panel 941 when the electronic device is moved to the ear. As one of the motion sensors, the accelerometer sensor may detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and may be used for applications that recognize the attitude of the electronic device (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration recognition related functions (such as a pedometer and tapping). The electronic device may further be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
The audio circuit 960, the speaker 961, and the microphone 962 may provide an audio interface between the user and the electronic device. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for output; conversely, the microphone 962 converts collected sound signals into electrical signals, which are received by the audio circuit 960 and converted into audio data. After being processed by the processor 980, the audio data may be transmitted through the RF circuit 910 to, for example, another electronic device, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the electronic device can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access. Although fig. 9 shows the WiFi module 970, it is understood that it is not an essential component of the electronic device and may be omitted as needed without changing the essence of the invention.
The processor 980 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the electronic device. Alternatively, processor 980 may include one or more processing units; optionally, the processor 980 may integrate an application processor and a modem processor, wherein the application processor mainly processes software programs such as an operating system, applications, and functional modules inside the applications, such as the score prediction method provided in the embodiments of the present application. The modem processor handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 980.
It will be appreciated that the configuration shown in fig. 9 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 9 or have a different configuration than shown in fig. 9. The components shown in fig. 9 may be implemented in hardware, software, or a combination thereof.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the score prediction method in the above-described embodiments. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A method of score prediction, the method comprising:
acquiring an initial score set; the initial score set comprises: answered test questions, unanswered test questions, and the answered test question scores of the answered test questions of at least two target objects;
inputting the initial score set into a first convolution network in a trained score prediction model, and performing a first convolution operation on the initial score set based on the first convolution network to obtain a corresponding feature score set; the feature score set comprises a plurality of feature scores obtained by performing the first convolution operation on each answered test question score; in the feature score set, the relationship between the feature scores corresponding to each two adjacent column elements represents the object feature relationship between two corresponding target objects; in the feature score set, the relationship between every two adjacent feature scores represents the score feature relationship between corresponding two answered test question scores; in the feature score set, the relationship between the feature scores represents the attribute feature relationship between the target objects and the answered test question scores;
And carrying out convolution operation on the obtained object feature relations, the obtained score feature relations and the obtained attribute feature relations to obtain the prediction scores of the unanswered test questions of the at least two target objects.
2. The method according to claim 1, wherein the performing a convolution operation on each of the obtained object feature relationships, each of the score feature relationships, and each of the attribute feature relationships to obtain the prediction scores of the unanswered test questions of the at least two target objects comprises:
inputting the feature score set into a second convolution network in the score prediction model, and performing second convolution operation on the feature score set based on the second convolution network to obtain a corresponding target score set;
wherein the set of target scores comprises: predicted scores of the unanswered questions of the at least two target objects.
3. The method of claim 1, wherein the training of the score prediction model comprises:
acquiring a sample initial score set; the sample initial score set comprises: answered sample questions, unanswered sample questions, the answered sample scores of the answered sample questions of at least two sample objects, and the unanswered sample scores of the unanswered sample questions of the at least two sample objects; each unanswered sample score is determined based on the answered sample scores associated with the corresponding sample object;
Training a score prediction model based on the sample initial score set until the score prediction model converges, wherein one training process comprises the following steps:
inputting the initial sample score set into a score prediction model to be trained, and obtaining a corresponding sample prediction score set based on the score prediction model;
determining a corresponding target loss value according to the sample prediction score set and the sample initial score set;
and adjusting parameters of the score prediction model to be trained according to the target loss value.
4. The method of claim 3, wherein said deriving a set of sample prediction scores based on said score prediction model comprises:
performing a first convolution operation on the sample initial score set based on a first convolution network in the score prediction model to obtain a corresponding sample feature score set;
and performing second convolution operation on the sample characteristic score set based on a second convolution network in the score prediction model to obtain a corresponding sample prediction score set.
5. The method of claim 3, wherein the set of sample prediction scores comprises: the answered sample prediction scores of the answered sample questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample questions of the at least two sample objects;
Determining a corresponding target loss value according to the sample prediction score set and the sample initial score set, including:
determining a first loss value according to the norm of the difference between each answered sample score and the corresponding each answered sample prediction score;
determining a corresponding answered sample average score according to each answered sample score, and determining a corresponding unanswered predicted average score according to each unanswered sample predicted score; determining a second loss value based on a difference between the sample average score answered and the predicted average score not answered;
based on the first loss value and the second loss value, a respective target loss value is determined.
6. The method of claim 3, wherein the set of sample prediction scores comprises: the answered sample prediction scores of the answered sample questions of the at least two sample objects and the unanswered sample prediction scores of the unanswered sample questions of the at least two sample objects;
determining a corresponding target loss value according to the sample prediction score set and the sample initial score set, including:
Determining a first loss value according to the norm of the difference between each answered sample score and the corresponding answered sample prediction score;
determining a corresponding answered sample average score according to each answered sample score, and determining a corresponding unanswered predicted average score according to each unanswered sample predicted score; determining a second loss value based on a difference between the sample average score answered and the predicted average score not answered;
respectively determining the sample score degrees corresponding to the answered sample scores according to the ratio of each answered sample score to the test question difficulty level of the corresponding test question; respectively determining the prediction score degrees corresponding to the unanswered sample prediction scores according to the ratio of each unanswered sample prediction score to the test question difficulty level of the corresponding test question; determining the corresponding sample average score degree according to each sample score degree, and determining the corresponding prediction average score degree according to each prediction score degree; determining a third loss value according to the difference between the sample average score degree and the prediction average score degree;
Determining a respective target loss value based on the first, second, and third loss values.
7. The method of any one of claims 1 to 6, wherein after obtaining the predicted scores of the unanswered questions of the at least two target subjects, the method further comprises:
and recommending corresponding test questions for each target object according to the prediction scores of the unanswered test questions of the at least two target objects.
8. A score prediction apparatus, comprising:
an initial score acquisition unit for acquiring an initial score set; the initial score set comprises: answered test questions, unanswered test questions, and the answered test question scores of the answered test questions of at least two target objects;
a feature relation determining unit, configured to input the initial score set into a first convolution network in a trained score prediction model, and perform a first convolution operation on the initial score set based on the first convolution network to obtain a corresponding feature score set; the feature score set comprises a plurality of feature scores obtained by performing the first convolution operation on each answered test question score; in the feature score set, the relationship between the feature scores corresponding to each two adjacent column elements represents the object feature relationship between two corresponding target objects; in the feature score set, the relationship between every two adjacent feature scores represents the score feature relationship between corresponding two answered test question scores; in the feature score set, the relationship between the feature scores represents the attribute feature relationship between the target objects and the answered test question scores;
And the prediction score determining unit is used for performing convolution operation on the obtained object feature relations, the obtained score feature relations and the obtained attribute feature relations to obtain the prediction scores of the unanswered test questions of the at least two target objects.
9. A computer-readable storage medium having a computer program stored therein, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, the computer program, when executed by the processor, implementing the method of any of claims 1-7.
CN202111133101.2A 2021-09-27 2021-09-27 Score prediction method and device, storage medium and electronic equipment Active CN113628080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111133101.2A CN113628080B (en) 2021-09-27 2021-09-27 Score prediction method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111133101.2A CN113628080B (en) 2021-09-27 2021-09-27 Score prediction method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113628080A CN113628080A (en) 2021-11-09
CN113628080B true CN113628080B (en) 2022-08-12

Family

ID=78390658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111133101.2A Active CN113628080B (en) 2021-09-27 2021-09-27 Score prediction method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113628080B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785039A (en) * 2020-12-31 2021-05-11 科大讯飞股份有限公司 Test question answering score prediction method and related device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301165A (en) * 2016-04-14 2017-10-27 科大讯飞股份有限公司 A kind of item difficulty analysis method and system
CN106682768B (en) * 2016-12-08 2018-05-08 北京粉笔蓝天科技有限公司 A kind of Forecasting Methodology, system, terminal and the server of answer fraction
CN109272160A (en) * 2018-09-17 2019-01-25 广州讯飞易听说网络科技有限公司 Score on Prediction system and prediction technique
US11960843B2 (en) * 2019-05-02 2024-04-16 Adobe Inc. Multi-module and multi-task machine learning system based on an ensemble of datasets

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785039A (en) * 2020-12-31 2021-05-11 科大讯飞股份有限公司 Test question answering score prediction method and related device

Also Published As

Publication number Publication date
CN113628080A (en) 2021-11-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40055305

Country of ref document: HK

GR01 Patent grant