CN111695680A - Score prediction method, score prediction model training device and electronic equipment - Google Patents

Score prediction method, score prediction model training device and electronic equipment

Info

Publication number
CN111695680A
CN111695680A
Authority
CN
China
Prior art keywords
team
predicted
information
neural network
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010543758.5A
Other languages
Chinese (zh)
Other versions
CN111695680B (en)
Inventor
刘浩
郭庆宇
祝恒书
庄福振
杨胜文
熊辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010543758.5A priority Critical patent/CN111695680B/en
Publication of CN111695680A publication Critical patent/CN111695680A/en
Application granted granted Critical
Publication of CN111695680B publication Critical patent/CN111695680B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a score prediction method, a score prediction model training method and device, and an electronic device, and relates to the field of deep learning. The score prediction method is implemented as follows: determining team information and past score information of a team to be predicted; inputting the team information and past score information of the team to be predicted into a pre-trained score prediction model; and obtaining the predicted score of the team to be predicted output by the score prediction model. With this arrangement, on the one hand, combining the team information and the past score information of a team as references for score prediction improves the prediction accuracy. On the other hand, the score prediction model is a single model trained as a whole, so that the prediction output by the finally trained model is closer to the actual outcome.

Description

Score prediction method, score prediction model training device and electronic equipment
Technical Field
The present application relates to the field of data processing, and more particularly, to the field of deep learning.
Background
Ranking prediction for competitive events mainly consists of predicting the future ranking of participating teams by taking each team's current ranking as a reference. Alternatively, the future ranking of participating teams may be predicted from the experience or status of their members. However, such schemes are highly subjective and their prediction accuracy is low.
Disclosure of Invention
The application provides a score prediction method, a score prediction model training device and electronic equipment.
According to an aspect of the present application, there is provided a performance prediction method including:
determining team information and past result information of a team to be predicted;
inputting team information and past result information of a team to be predicted into a pre-trained result prediction model;
and acquiring the predicted achievement of the team to be predicted, which is output by the achievement prediction model.
According to another aspect of the present application, there is provided a performance prediction model training method, including:
receiving team information samples and past result information samples of a sample team;
the score prediction model to be trained obtains the predicted scores of the sample team according to the team information samples and past score information samples of the sample team;
and training the score prediction model to be trained according to the predicted scores of the sample team and the real scores of the sample team until the error between the predicted scores and the real scores is within an allowable range.
According to a third aspect of the present application, there is provided a performance prediction apparatus including:
the information determining module is used for determining team information and past score information of a team to be predicted;
the information input module is used for inputting team information and past result information of a team to be predicted into a pre-trained result prediction model;
and the predicted result acquisition module is used for acquiring the predicted result of the team to be predicted, which is output by the result prediction model.
According to a fourth aspect of the present application, there is provided an achievement prediction model training device including:
the system comprises an information sample receiving module, a data processing module and a data processing module, wherein the information sample receiving module is used for receiving team information samples and past result information samples of a sample team;
the system comprises a score prediction module of a sample team, a score prediction module of the sample team and a training module, wherein the score prediction module is used for enabling a score prediction model to be trained to obtain a predicted score of the sample team according to team information samples and past score information samples of the sample team;
and the model training module is used for training the score prediction model to be trained according to the predicted scores of the sample team and the real scores of the sample team until the error between the predicted scores and the real scores is within an allowable range.
According to a fifth aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method provided by any one of the embodiments of the present application.
According to a sixth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the method provided by any one of the embodiments of the present application.
With the above arrangement, combining the team information of a team and its past score information as references for score prediction improves the prediction accuracy. Moreover, the score prediction model is a single model trained as a whole, so that the output of the finally trained score prediction model is closer to the actual outcome.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of a performance prediction method according to a first embodiment of the present application;
FIG. 2 is a flow chart for generating a first feature vector according to a first embodiment of the present application;
FIG. 3 is a flow chart for generating a second feature vector according to the first embodiment of the present application;
FIG. 4 is a flow diagram of a performance prediction model training method according to a second embodiment of the present application;
FIG. 5 is a schematic diagram of an achievement prediction device according to a third embodiment of the present application;
FIG. 6 is a schematic diagram of an achievement prediction model training device according to a fourth embodiment of the present application;
FIG. 7 is a block diagram of an electronic device for implementing a performance prediction method and/or a performance prediction model training method of embodiments of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, in one embodiment, a performance prediction method is provided, including:
S101: determining team information and past result information of a team to be predicted;
S102: inputting team information and past result information of the team to be predicted into a pre-trained result prediction model;
s103: and acquiring the predicted achievement of the team to be predicted, which is output by the achievement prediction model.
The score prediction method can predict scores for data science competitions, sports competitions, and other events. Taking a data science competition as an example, a participating team may consist of a single contestant or multiple contestants. The team information of a participating team may include the number of contestants, the personal information of each contestant, and the like. The personal information describes the characteristics of a contestant along different dimensions.
For example, team A includes two contestants. The nationality of the first contestant is China, the educational background is a bachelor's degree, the work unit is XX school, and the work type is student. The nationality of the second contestant is the United States, the educational background is a master's degree, the work unit is XXX corporation, and the work type is software engineer. The personal information set of the first contestant may be denoted as Fu1 = [Fu1-1, Fu1-2, ..., Fu1-i], where Fu1-i represents the i-th dimension of the personal information of the first contestant. Similarly, the personal information set of the second contestant may be denoted as Fu2. Finally, the number of contestants and the personal information of each contestant may be taken as the team information of team A, and the team information set of team A may be represented as A-Fu = [2, Fu1, Fu2].
The past score information may be the scores of the participating team over a past period of time, for example the past T competition days. The score information may include multiple items, such as the daily ranking, the number of answers submitted per day, the score of the answers submitted per day, the daily activity level, and so on.
Still taking team A as an example, the past score information set of team A may be represented as A-Ftq = [Ft1q1, ..., Ftmqn], where Ftmqn represents the n-th item of past score information of team A on the m-th competition day. For example, if there are 5 past competition days, m is an integer between 1 and 5; if the score information comprises 4 items (daily ranking, number of answers submitted per day, score of the answers submitted per day, daily activity), n is an integer between 1 and 4. Thus, Ft1q1 represents the first item (dimension) of score information of team A on the first competition day.
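As a purely illustrative sketch (the field values, dimensions, and array layout below are assumptions, not taken from the application), the team information set and the past score information set described above could be laid out as plain numerical arrays before being fed to the model:

```python
import numpy as np

# Hypothetical personal-information vectors for the two contestants of team A;
# each dimension encodes one attribute (e.g. nationality, degree, work type).
F_u1 = np.array([1.0, 0.0, 2.0, 0.0])
F_u2 = np.array([3.0, 1.0, 5.0, 1.0])

# Team information set A-Fu = [number of contestants, Fu1, Fu2]
team_info = np.concatenate([[2], F_u1, F_u2])

# Past score information A-Ftq: m = 5 competition days x n = 4 items per day
# (daily ranking, answers submitted, score of submitted answers, daily activity).
past_scores = np.array([
    [12, 3, 0.61, 1],
    [10, 5, 0.66, 1],
    [ 9, 2, 0.66, 0],
    [ 7, 6, 0.72, 1],
    [ 6, 4, 0.75, 1],
], dtype=np.float32)          # past_scores[m-1, n-1] corresponds to Ftmqn

print(team_info.shape, past_scores.shape)   # (9,) (5, 4)
```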
The score prediction model can be an end-to-end model, which can include neural networks for extracting features and for outputting the prediction result. Depending on the training data set, the feature-extraction neural networks can extract different features.
For example, the team information of a participating team may be converted into a computable numerical vector using an embedding neural network. Hidden state vectors relating the items of past score information can be obtained from the participating team's past score information using a Gated Recurrent Unit (GRU) neural network.
The neural network for outputting the prediction result can then output the prediction for the team to be predicted from the numerical vector and the hidden state vector. During training of the score prediction model, the input end of the model receives the team information and past score information of a team, and the output end of the model produces a result. Comparing this result with the true result (the score sample) yields an error. This error is propagated back through each layer of the model, and the parameters of each layer are adjusted according to the error until the output of the model converges or the desired effect is achieved.
With the above arrangement, on the one hand, combining the team information of a team and its past score information as references for score prediction improves the prediction accuracy. On the other hand, the score prediction model is a single model trained as a whole, so that the prediction output by the finally trained model is closer to the actual outcome. In addition, compared with prior-art schemes that rely only on player experience, the method and device of the present application can produce more accurate and objective score predictions.
In one embodiment, the score prediction model includes a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information of a team to be predicted, generate a first feature vector corresponding to the team information of the team to be predicted, and input the first feature vector into the third neural network;
the second neural network is configured to receive the past result information of the team to be predicted, generate a second feature vector corresponding to the past result information of the team to be predicted, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first feature vector and the second feature vector and determine the predicted achievement of the team to be predicted according to the first feature vector and the second feature vector.
In this embodiment, the first neural network may be an embedding neural network, configured to convert the received team information into a computable numerical vector and further represent that vector as a low-dimensional vector, that is, to generate the first feature vector corresponding to the team information of the team to be predicted. For example, for team information in text form, each word segment of the text may be represented in encoded form. A word-vector operation is performed on each word segment to obtain a vector reflecting its features; the feature of a word segment may be, for example, its semantics in a text sample or in natural language. A dimensionality-reduction operation is then performed on the vectors reflecting each word segment's features to obtain a low-dimensional vector.
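A minimal sketch of such an embedding branch, assuming the team information has already been tokenized into integer IDs; the vocabulary size, embedding dimension, and pooling choice are assumptions for illustration, not details from the application:

```python
import torch
import torch.nn as nn

class TeamInfoEmbedding(nn.Module):
    """Maps integer-encoded team information to a low-dimensional first feature vector."""
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 16, out_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # token id -> dense vector
        self.reduce = nn.Linear(embed_dim, out_dim)        # dimensionality reduction

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, num_tokens) integer IDs of the team-information fields
        vectors = self.embed(token_ids)     # (batch, num_tokens, embed_dim)
        pooled = vectors.mean(dim=1)        # aggregate the tokens into one vector
        return self.reduce(pooled)          # first feature vector: (batch, out_dim)

first_vec = TeamInfoEmbedding()(torch.tensor([[3, 17, 42]]))   # example usage
```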
The second neural network can be a GRU neural network, used to obtain the second feature vector corresponding to the information representing the past scores. The GRU neural network includes a plurality of nodes, and the inputs to the current node include the score information of the current day and the hidden state vector passed from the previous node. The output of the current node includes the hidden state vector passed to the next node. The hidden state vector output by the last node of the GRU neural network can serve as the second feature vector corresponding to the past score information generated by the second neural network.
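A sketch of the second branch under the same caveat: a standard GRU whose final hidden state serves as the second feature vector (the layer sizes and the use of PyTorch are assumptions):

```python
import torch
import torch.nn as nn

class PastScoreEncoder(nn.Module):
    """Encodes T days of score items into one hidden state, the second feature vector."""
    def __init__(self, num_items: int = 4, hidden_dim: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=num_items, hidden_size=hidden_dim, batch_first=True)

    def forward(self, past_scores: torch.Tensor) -> torch.Tensor:
        # past_scores: (batch, T, num_items) normalized daily score information
        _, h_last = self.gru(past_scores)   # h_last: (1, batch, hidden_dim)
        return h_last.squeeze(0)            # second feature vector: (batch, hidden_dim)

second_vec = PastScoreEncoder()(torch.randn(1, 5, 4))   # 5 competition days, 4 items
```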
The first feature vector corresponding to the team information of the team to be predicted and the second feature vector corresponding to the past score information are then input into the input layer of the third neural network. The input layer of the third neural network may be a fully connected layer, in which each node is connected to all nodes of the previous layer in order to correlate the features extracted by all nodes of the previous layer. That is, the input layer of the third neural network is connected to all the nodes of the output layers of the first and second neural networks.
Within the third neural network, a prediction model may further be provided and connected to the output end of the fully connected layer. The first, second, and third neural networks can be combined into an end-to-end model through the fully connected layer of the third neural network, and the output end of the prediction model serves as the output end of the end-to-end model, producing the predicted score.
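A sketch of how the third neural network could combine the two feature vectors through a fully connected layer and a prediction head; the layer sizes and the single ranking head are assumptions made only to illustrate the composition into an end-to-end model:

```python
import torch
import torch.nn as nn

class ThirdNetwork(nn.Module):
    """Fully connected layer plus prediction head consuming the two feature vectors."""
    def __init__(self, first_dim: int = 32, second_dim: int = 32, fc_dim: int = 64):
        super().__init__()
        self.fc = nn.Linear(first_dim + second_dim, fc_dim)   # fully connected input layer
        self.head = nn.Linear(fc_dim, 1)                      # outputs the predicted score

    def forward(self, first_vec: torch.Tensor, second_vec: torch.Tensor) -> torch.Tensor:
        combined = torch.cat([first_vec, second_vec], dim=-1)  # link both upstream branches
        return self.head(torch.relu(self.fc(combined)))

# Example usage with feature vectors produced by the first and second neural networks.
pred = ThirdNetwork()(torch.randn(1, 32), torch.randn(1, 32))
```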
With this scheme, the ranking of each team is used as the result of score prediction, so the ranking of each participating team can be predicted. Because the model is an end-to-end model, the three neural networks in the whole model are trained jointly, which removes the barriers between separate models. When the trained score prediction model is used for prediction, the prediction accuracy can be further improved.
In one embodiment, the predicted performance of the team to be predicted comprises at least one of a ranking, a score, and an activity.
In this embodiment, the prediction model in the third neural network may include at least one of a ranking prediction model, a score prediction model, and an activity prediction model, whose input layers are each connected to the output of the fully connected layer.
Taking the ranking prediction model as an example, it can output a ranking prediction for the team to be predicted using the first feature vector corresponding to the team information and the second feature vector corresponding to the past score information.
The ranking of a participating team is related to its score and its activity level. For example, the number of answers a participating team submits may increase its activity level, and teams with higher activity levels tend to rise in the ranking. The scores of participating teams may also affect their progress in the ranking. Therefore, with this scheme, the ranking, score, and/or activity level of a participating team are used as the result of score prediction. In addition to ranking predictions, predictions of scores and activity levels can also be made. The score prediction auxiliary task thus brings in more information and improves the accuracy of the ranking prediction.
As shown in fig. 2, in one embodiment, the generation of the first feature vector corresponding to the team information of the team to be predicted includes:
S201: generating numerical vectors corresponding to the personal information of each contestant, wherein the personal information comprises at least one of gender, age, nationality, educational background, work unit, and work type;
s202: and generating a first feature vector by using the number of the participants of the team to be predicted and the numerical vector.
The personal information of a contestant can cover different dimensions such as age, gender, nationality, educational background, work unit, and work type. When the personal information is collected, some items may be missing; for example, a contestant may leave a field such as educational background blank. In that case, the missing information may be filled in with values such as "none" or "other".
For non-numerical information such as gender, nationality, educational background, work unit, and work type, one-hot encoding can be used for conversion to obtain numerical vectors corresponding to the contestant's personal information. In addition, an embedding neural network can be used to represent the numerical vectors as low-dimensional vectors.
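A minimal sketch of this one-hot conversion, including the "none"/"other" fallback for missing fields; the category lists are invented for illustration:

```python
import numpy as np

NATIONALITIES = ["China", "United States", "other"]
DEGREES = ["bachelor", "master", "doctor", "none"]

def one_hot(value, categories) -> np.ndarray:
    """One-hot encode a categorical field, falling back to the last category if missing."""
    vec = np.zeros(len(categories), dtype=np.float32)
    idx = categories.index(value) if value in categories else len(categories) - 1
    vec[idx] = 1.0
    return vec

# A contestant whose educational-background field was left blank.
personal_vec = np.concatenate([
    one_hot("China", NATIONALITIES),
    one_hot("", DEGREES),               # missing value is encoded as "none"
])
print(personal_vec)                     # [1. 0. 0. 0. 0. 0. 1.]
```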
With the above arrangement, the number of contestants of each participating team and the numerical vectors corresponding to each contestant's personal information are used as the team information of the participating team. The composition of each participating team, the background information of its contestants, and so on are thus taken into account, which helps ensure the accuracy of the prediction result.
As shown in fig. 3, in one embodiment, the generating the second feature vector corresponding to the past performance information of the team to be predicted includes the following processes:
s301: and (4) normalizing the past result information of the team to be predicted.
S302: and taking the hidden state vector between the past results after the normalization processing as a second feature vector.
The past score information of the team to be predicted may include, for the past T competition days, the daily rankings, the number of answers submitted per day, the scores of the answers submitted per day, the daily activity levels, and so on.
The following formula is adopted to carry out normalization processing on the past result information:
x' = (x - min(x)) / (max(x) - min(x))
where x' represents the normalized result; x represents a score item, e.g., the number of answers submitted on the first competition day; min(x) represents the minimum value of that score item, e.g., the minimum number of answers submitted; and max(x) represents the maximum value of that score item, e.g., the maximum number of answers submitted.
Through this normalization, a normalized result is obtained for each item of score information. The normalized results of each score item for each competition day are input into the configured GRU neural network to obtain the hidden state vector relating the past scores. This hidden state vector serves as the second feature vector.
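A sketch of one possible reading of this normalization, rescaling each score item over the T competition days to the [0, 1] range before feeding the GRU (the per-column interpretation and the zero-span guard are assumptions):

```python
import numpy as np

def min_max_normalize(past_scores: np.ndarray) -> np.ndarray:
    """Normalize each score item (column) over the T competition days to [0, 1]."""
    col_min = past_scores.min(axis=0)
    col_max = past_scores.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)   # avoid division by zero
    return (past_scores - col_min) / span

past_scores = np.array([[12, 3, 0.61, 1],
                        [10, 5, 0.66, 1],
                        [ 6, 4, 0.75, 0]], dtype=np.float32)
print(min_max_normalize(past_scores))   # each column rescaled to the [0, 1] range
```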
With this scheme, the past score data of the participating team are used as prediction references, and as the score data accumulate, the accuracy of the model's predictions can be further improved.
As shown in fig. 4, in an embodiment, the present application further provides a performance prediction model training method, including the following steps:
s401: a team information sample and past performance information sample of a sample team are received.
S401: and the score prediction model to be trained predicts the predicted scores of the sample team according to the team information samples and the past score information samples of the sample team.
S403: and training the score prediction model to be trained according to the predicted scores of the sample team and the real scores of the sample team until the error between the predicted scores and the real scores is within an allowable range.
During training, the score prediction model to be trained receives the team information samples and past score information samples of a sample team and directly produces the predicted scores of the sample team at its output end. The predicted scores may differ from the actual scores of the sample team; this error is propagated back through each layer of the model, and the parameters of each layer are adjusted according to the error until the output of the model converges or the expected effect is achieved.
When collecting training data, a time window of length T (competition days) can be used, and a group of past score information samples can be obtained by sliding this window. 80% of the data can be used as a training set to train the model, and 20% can be used as a test set to verify the model's generalization ability.
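A sketch of how the sliding time window and the 80/20 split might be implemented, assuming each team's history is available as an array of shape (num_days, num_items); all names and the choice of next-day ranking as the target are illustrative:

```python
import numpy as np

def sliding_windows(daily_records: np.ndarray, T: int):
    """Yield (past T days, next-day target) pairs by sliding a window of length T."""
    samples, targets = [], []
    for start in range(len(daily_records) - T):
        samples.append(daily_records[start:start + T])   # past score information sample
        targets.append(daily_records[start + T, 0])      # e.g. the next day's ranking
    return np.stack(samples), np.array(targets)

daily_records = np.random.rand(30, 4)      # 30 competition days, 4 score items per day
X, y = sliding_windows(daily_records, T=5)
split = int(0.8 * len(X))                  # 80% training set, 20% test set
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
```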
In addition, the score prediction model can be optimized with an adaptive moment estimation (Adam) algorithm to obtain the final score prediction model.
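A hedged sketch of the training procedure described above: mean squared error between the predicted and real scores, backpropagation through the whole model, and the adaptive moment estimation (Adam) optimizer. The learning rate, tolerance, and the stand-in model in the usage line are assumptions:

```python
import torch
import torch.nn as nn

def train(model: nn.Module, features: torch.Tensor, real_scores: torch.Tensor,
          tolerance: float = 1e-3, max_epochs: int = 200) -> nn.Module:
    """Train the whole end-to-end model until the prediction error is within tolerance."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # adaptive moment estimation
    criterion = nn.MSELoss()
    for _ in range(max_epochs):
        optimizer.zero_grad()
        predicted = model(features).squeeze(-1)
        loss = criterion(predicted, real_scores)
        loss.backward()                  # error propagated back through every layer
        optimizer.step()                 # parameters of every layer adjusted jointly
        if loss.item() < tolerance:      # error within the allowable range
            break
    return model

trained = train(nn.Linear(10, 1), torch.randn(16, 10), torch.rand(16))   # example usage
```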
With this scheme, the score prediction model is a single model trained as a whole, so that the prediction output by the finally trained model is closer to the actual outcome.
In one embodiment, the score prediction model to be trained comprises a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information samples of the sample team, generate a first feature vector corresponding to the team information samples of the sample team, and input the first feature vector into the third neural network;
the second neural network is configured to receive the past result information samples of the sample team, generate a second feature vector corresponding to the past result information samples of the sample team, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first feature vector and the second feature vector and determine a predicted performance of the sample team based on the first feature vector and the second feature vector.
Through the scheme, the rank of each team is used as the result of result prediction, and the prediction of the rank of each team participating in the competition can be realized.
In one embodiment, the predicted performance of the sample team comprises at least one of a rank, a score, and an activity.
The third neural network may include a fully connected layer and at least one of a ranking prediction model, a score prediction model, and an activity prediction model connected to the fully connected layer.
The input end of the ranking prediction model is connected to the output end of the fully connected layer. The ranking prediction model may use a mean-squared-error loss function as its objective; in other words, during training, the historical true ranking samples of the sample team are used for training.
When the mean squared error between the model's output and the historical true ranking of the current team is within an allowable range, training is finished.
Since the ranking output by the model may not be an integer, the predicted rankings may also be reordered: the predicted ranking result can be obtained by computing the relative ranking of all teams for each day.
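One straightforward way to realize this reordering, sketched under the assumption that a smaller predicted value means a better rank:

```python
import numpy as np

def relative_ranking(predicted: np.ndarray) -> np.ndarray:
    """Convert raw (possibly non-integer) ranking outputs for all teams on one day
    into integer relative ranks, where 1 is the best (smallest predicted value)."""
    order = np.argsort(predicted)                    # teams from best to worst
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(predicted) + 1)  # assign 1..N in that order
    return ranks

print(relative_ranking(np.array([2.7, 1.1, 3.9, 1.4])))   # -> [3 1 4 2]
```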
The input end of the score prediction model is connected to the output end of the fully connected layer. The score prediction model may use a mean-squared-error loss function as its objective. That is, during training, the historical true score of the sample team is used as the score sample, and when the mean squared error between the model's output and the historical true score of the current team is within an allowable range, training is finished.
The input end of the activity prediction model is connected to the output end of the fully connected layer. The activity prediction model may use a cross-entropy loss function as its objective. During training, the historical true activity of the sample team is used as the sample; the true activity can be represented as 0 or 1, i.e., negative and positive samples. When the error between the model's output and the true activity is within an allowable range, training is finished.
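A sketch of how the three heads and their objectives could be combined on top of the shared fully connected layer; the equal weighting of the three losses and the tensor shapes are assumptions, not details from the application:

```python
import torch
import torch.nn as nn

fc_dim = 64
rank_head = nn.Linear(fc_dim, 1)       # ranking prediction model (MSE objective)
score_head = nn.Linear(fc_dim, 1)      # score prediction model (MSE objective)
activity_head = nn.Linear(fc_dim, 1)   # activity prediction model (cross-entropy objective)

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()           # activity labels are 0/1, i.e. negative/positive samples

def combined_loss(hidden, true_rank, true_score, true_activity):
    """Joint objective over the three heads attached to the fully connected layer."""
    loss_rank = mse(rank_head(hidden).squeeze(-1), true_rank)
    loss_score = mse(score_head(hidden).squeeze(-1), true_score)
    loss_act = bce(activity_head(hidden).squeeze(-1), true_activity)
    return loss_rank + loss_score + loss_act       # assumed equal weighting

hidden = torch.randn(8, fc_dim)        # fully connected layer output for 8 sample teams
loss = combined_loss(hidden, torch.rand(8), torch.rand(8),
                     torch.randint(0, 2, (8,)).float())
```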
The ranking of a team is related to its score and its activity level. For example, the number of answers a team submits may increase its activity level, and teams with higher activity levels tend to rise in the ranking. Team scores may also affect progress in the ranking. Therefore, with this scheme, the ranking, score, and/or activity level of a team are used as the result of score prediction. In addition to ranking predictions, predictions of scores and activity levels can also be made. The score prediction auxiliary task thus brings in more information and improves the accuracy of the ranking prediction.
As shown in fig. 5, in one embodiment, the present application also provides a performance prediction apparatus including:
the information determining module 501 is used for determining team information and past result information of a team to be predicted;
the information input module 502 is used for inputting team information and past result information of a team to be predicted into a pre-trained result prediction model;
and the predicted result acquiring module 503 is used for acquiring the predicted result of the team to be predicted, which is output by the result prediction model.
In one embodiment, the score prediction model includes a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information of a team to be predicted, generate a first feature vector corresponding to the team information of the team to be predicted, and input the first feature vector into the third neural network;
the second neural network is configured to receive the past result information of the team to be predicted, generate a second feature vector corresponding to the past result information of the team to be predicted, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first feature vector and the second feature vector and determine the predicted achievement of the team to be predicted according to the first feature vector and the second feature vector.
In one embodiment, the predicted performance of the team to be predicted comprises at least one of a ranking, a score, and an activity.
In one embodiment, the team information includes the number of participants of the team to be predicted and individual information of each participant;
generating a first feature vector corresponding to team information of a team to be predicted, wherein the first feature vector comprises the following steps:
generating numerical vectors corresponding to the personal information of each contestant, wherein the personal information comprises at least one of gender, age, nationality, educational background, work unit, and work type;
and generating a first feature vector by using the number of the participants of the team to be predicted and the numerical vector.
In one embodiment, generating a second feature vector corresponding to past performance information of a team to be predicted includes:
normalizing the past result information of the team to be predicted;
and taking the hidden state vector between the past results after the normalization processing as a second feature vector.
As shown in fig. 6, in one embodiment, the present application also provides a performance prediction model training device including:
an information sample receiving module 601, configured to receive team information samples and past performance information samples of a sample team;
a score prediction module 602 of the sample team for causing the score prediction model to be trained to predict the predicted score of the sample team according to the team information sample and the past score information sample of the sample team;
and the model training module 603 is used for training a score prediction model to be trained according to the predicted score of the sample team and the actual score of the sample team until the error between the predicted score and the actual score is within an allowable range.
In one embodiment, the score prediction model to be trained comprises a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information samples of the sample team, generate a first feature vector corresponding to the team information samples of the sample team, and input the first feature vector into the third neural network;
the second neural network is configured to receive the past result information samples of the sample team, generate a second feature vector corresponding to the past result information samples of the sample team, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first feature vector and the second feature vector and determine a predicted performance of the sample team based on the first feature vector and the second feature vector.
In one embodiment, the predicted performance of the sample team comprises at least one of a rank, a score, and an activity.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to an achievement prediction method and/or an achievement prediction model training method of the embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 710, a memory 720, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 710 is illustrated in fig. 7.
Memory 720 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the performance prediction method and/or performance prediction model training method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the performance prediction method and/or performance prediction model training method provided herein.
The memory 720, as a non-transitory computer readable storage medium, can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the score prediction method and/or the score prediction model training method in the embodiments of the present application (for example, the information determination module 501, the information input module 502, and the predicted score acquisition module 503 shown in fig. 5, or the information sample receiving module 601, the score prediction module 602 of the sample team, and the model training module 603 shown in fig. 6). The processor 710 executes various functional applications of the server and performs data processing, i.e., implements the score prediction method and/or the score prediction model training method in the above-described method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 720.
The memory 720 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device of the achievement prediction method and/or the achievement prediction model training method, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 720 may optionally include memory located remotely from the processor 710, which may be connected over a network to an electronic device for the performance prediction method and/or the performance prediction model training method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the achievement prediction method and/or the achievement prediction model training method may further include: an input device 730 and an output device 740. The processor 710, the memory 720, the input device 730, and the output device 740 may be connected by a bus or other means, such as the bus connection in fig. 7.
The input device 730 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the achievement prediction method and/or the achievement prediction model training method, such as an input device of a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 740 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service are overcome.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and the present invention is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A performance prediction method, comprising:
determining team information and past result information of a team to be predicted;
inputting team information and past result information of the team to be predicted into a pre-trained result prediction model;
and acquiring the predicted achievement of the team to be predicted, which is output by the achievement prediction model.
2. The method of claim 1, wherein the performance prediction model comprises a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information of the team to be predicted, generate a first feature vector corresponding to the team information of the team to be predicted, and input the first feature vector into the third neural network;
the second neural network is configured to receive past result information of the team to be predicted, generate a second feature vector corresponding to the past result information of the team to be predicted, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first feature vector and the second feature vector and determine the predicted achievement of the team to be predicted according to the first feature vector and the second feature vector.
3. The method of claim 1 or 2, wherein the predicted achievement of the team to be predicted comprises at least one of a ranking, a score, and an activity.
4. The method according to claim 2, wherein the team information includes the number of players of the team to be predicted and individual information of each of the players;
the generating a first feature vector corresponding to team information of the team to be predicted includes:
generating numerical vectors corresponding to personal information of each contestant, wherein the personal information comprises at least one of gender, age, nationality, academic history, work unit and work type;
and generating the first feature vector by using the number of the players of the team to be predicted and the numerical vector.
5. The method of claim 2, wherein the generating a second feature vector corresponding to past performance information of the team to be predicted comprises:
normalizing the past result information of the team to be predicted;
and taking the hidden state vector between the past achievements after the normalization processing as the second feature vector.
6. A performance prediction model training method, comprising:
receiving team information samples and past result information samples of a sample team;
the score prediction model to be trained obtains the predicted score of the sample team according to the team information sample and the past score information sample of the sample team;
and training the score prediction model to be trained according to the predicted scores of the sample team and the real scores of the sample team until the error between the predicted scores and the real scores is within an allowable range.
7. The method of claim 6, wherein the performance prediction model to be trained comprises a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information samples of the sample team, generate a first feature vector corresponding to the team information samples of the sample team, and input the first feature vector into the third neural network;
the second neural network is configured to receive past performance information samples of the sample team, generate a second feature vector corresponding to the past performance information samples of the sample team, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first and second feature vectors and determine a predicted performance of the sample team based on the first and second feature vectors.
8. The method of claim 6 or 7, wherein the predicted performance of the sample team comprises at least one of a rank, a score, and an activity.
9. An achievement prediction device comprising:
the information determining module is used for determining team information and past score information of a team to be predicted;
the information input module is used for inputting team information and past result information of the team to be predicted into a pre-trained result prediction model;
and the predicted result acquisition module is used for acquiring the predicted result of the team to be predicted, which is output by the result prediction model.
10. The apparatus of claim 9, wherein the performance prediction model comprises a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information of the team to be predicted, generate a first feature vector corresponding to the team information of the team to be predicted, and input the first feature vector into the third neural network;
the second neural network is configured to receive past result information of the team to be predicted, generate a second feature vector corresponding to the past result information of the team to be predicted, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first feature vector and the second feature vector and determine the predicted achievement of the team to be predicted according to the first feature vector and the second feature vector.
11. The apparatus of claim 9 or 10, wherein the predicted achievement of the team to be predicted comprises at least one of a rank, a score, and an activity.
12. The apparatus according to claim 10, wherein the team information includes the number of players of the team to be predicted and individual information of each of the players;
the generating a first feature vector corresponding to team information of the team to be predicted includes:
generating numerical vectors corresponding to personal information of each contestant, wherein the personal information comprises at least one of gender, age, nationality, academic history, work unit and work type;
and generating the first feature vector by using the number of the players of the team to be predicted and the numerical vector.
13. The apparatus of claim 10, wherein generating a second feature vector corresponding to past performance information of the team to be predicted comprises:
normalizing the past result information of the team to be predicted;
and taking the hidden state vector between the past achievements after the normalization processing as the second feature vector.
14. An achievement prediction model training device, comprising:
the system comprises an information sample receiving module, a data processing module and a data processing module, wherein the information sample receiving module is used for receiving team information samples and past result information samples of a sample team;
the system comprises a score prediction module of a sample team, a score prediction module of the sample team and a training module, wherein the score prediction module is used for enabling a score prediction model to be trained to obtain a predicted score of the sample team according to a team information sample and a past score information sample of the sample team;
and the model training module is used for training the score prediction model to be trained according to the predicted scores of the sample team and the actual scores of the sample team until the error between the predicted scores and the actual scores is within an allowable range.
15. The apparatus of claim 14, wherein the performance prediction model to be trained comprises a first neural network, a second neural network, and a third neural network, wherein:
the first neural network is configured to receive team information samples of the sample team, generate a first feature vector corresponding to the team information samples of the sample team, and input the first feature vector into the third neural network;
the second neural network is configured to receive past performance information samples of the sample team, generate a second feature vector corresponding to the past performance information samples of the sample team, and input the second feature vector into the third neural network;
the third neural network is configured to receive the first and second feature vectors and determine a predicted performance of the sample team based on the first and second feature vectors.
16. The apparatus of claim 14 or 15, wherein the predicted performance of the sample team comprises at least one of a rank, a score, and an activity.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
18. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 8.
CN202010543758.5A 2020-06-15 2020-06-15 Score prediction method, score prediction model training method and device and electronic equipment Active CN111695680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010543758.5A CN111695680B (en) 2020-06-15 2020-06-15 Score prediction method, score prediction model training method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111695680A true CN111695680A (en) 2020-09-22
CN111695680B CN111695680B (en) 2023-11-10

Family

ID=72481194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010543758.5A Active CN111695680B (en) 2020-06-15 2020-06-15 Score prediction method, score prediction model training method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111695680B (en)

Patent Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005301588A (en) * 2004-04-09 2005-10-27 Game Republic:Kk Racehorse securitization server
CN101739854A (en) * 2008-11-25 2010-06-16 梁昌年 Method and device for performing self-adaptive estimation to user by computer system
CN102201080A (en) * 2010-03-24 2011-09-28 新奥特(北京)视频技术有限公司 Competition management system for supporting multiple sporting events
CN104992482A (en) * 2015-04-27 2015-10-21 林晓勇 Athletic competition data processing system and method thereof
CN105184708A (en) * 2015-08-14 2015-12-23 北京联校传奇信息科技有限公司 Overseas study application matching method and system
CN105046559A (en) * 2015-09-10 2015-11-11 河海大学 Bayesian network and mutual information-based client credit scoring method
CN105631538A (en) * 2015-12-23 2016-06-01 北京奇虎科技有限公司 User activity prediction method and device, and application method and system thereof
CN106021364A (en) * 2016-05-10 2016-10-12 百度在线网络技术(北京)有限公司 Method and device for establishing picture search correlation prediction model, and picture search method and device
CN108270807A (en) * 2016-12-30 2018-07-10 阿里巴巴集团控股有限公司 A kind of data transmission method and device
CN107145596A (en) * 2017-05-31 2017-09-08 南京理工大学 E-sports decision of a game Forecasting Methodology based on deep neural network
CN107224714A (en) * 2017-06-20 2017-10-03 安徽禹缤体育科技有限公司 A kind of badminton game data record management system
CN107977411A (en) * 2017-11-21 2018-05-01 腾讯科技(成都)有限公司 Group recommending method, device, storage medium and server
CN108171358A (en) * 2017-11-27 2018-06-15 科大讯飞股份有限公司 Score prediction method and device, storage medium and electronic device
CN107967572A (en) * 2017-12-15 2018-04-27 华中师范大学 A kind of intelligent server based on education big data
CN108121785A (en) * 2017-12-15 2018-06-05 华中师范大学 A kind of analysis method based on education big data
CN108564272A (en) * 2018-04-08 2018-09-21 大连理工大学 A kind of team's recommendation system building method based on Catfish Effect
US20200020203A1 (en) * 2018-07-08 2020-01-16 Kent Wilcoxson Jordan Method and Apparatus for GPS enabled Live Predictive Sports Game Scoring Outcome Wagering and Social Networking
CN109299866A (en) * 2018-09-11 2019-02-01 张连祥 A kind of performance appraisal system for inviting outside investment
CN109275017A (en) * 2018-10-10 2019-01-25 武汉斗鱼网络科技有限公司 A kind of methods of exhibiting and device that barrage information is set
JP2020077361A (en) * 2018-11-05 2020-05-21 株式会社トランス Learning model building device, after-employment evaluation predicting device, learning model building method, and after-employment evaluation prediction method
CN109558974A (en) * 2018-11-16 2019-04-02 北京中竞鸽体育文化发展有限公司 A kind of method and device of race ranking prediction
CN109636047A (en) * 2018-12-17 2019-04-16 江苏满运软件科技有限公司 User activity prediction model training method, system, equipment and storage medium
CN109711200A (en) * 2018-12-29 2019-05-03 百度在线网络技术(北京)有限公司 Accurate poverty alleviation method, apparatus, equipment and medium based on block chain
CN109847364A (en) * 2019-03-04 2019-06-07 上海珑讯电竞信息科技有限公司 A kind of e-sports match race engine
CN110119547A (en) * 2019-04-28 2019-08-13 腾讯科技(深圳)有限公司 A kind of prediction group defeats negative method, apparatus and control equipment
CN110163534A (en) * 2019-06-04 2019-08-23 中铁四局集团有限公司城市轨道交通工程分公司 A kind of Project Manager of Construction Enterprise performance appraisal method
CN110363151A (en) * 2019-07-16 2019-10-22 中国人民解放军海军航空大学 Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN110555459A (en) * 2019-07-24 2019-12-10 四川大学 Score prediction method based on fuzzy clustering and support vector regression
CN111080360A (en) * 2019-12-13 2020-04-28 中诚信征信有限公司 Behavior prediction method, model training method, device, server and storage medium
CN111240544A (en) * 2020-01-06 2020-06-05 腾讯科技(深圳)有限公司 Data processing method, device and equipment for virtual scene and storage medium
CN111242364A (en) * 2020-01-07 2020-06-05 上海钧正网络科技有限公司 Neural network-based vehicle fault and comfort prediction method, device, terminal and medium
CN111242515A (en) * 2020-03-05 2020-06-05 长沙师范学院 Classroom teaching quality evaluation system and method based on education big data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU Zihan: "Analyzing and Predicting the Olympic Medal Table through Data Modeling Methods", Electronic Technology & Software Engineering, page 167 *
LEI Guangyu: "Research on Multi-class Prediction Models for FIFA World Cup Football Matches", Software Guide, vol. 18, no. 7, pages 45-48 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112632351A (en) * 2020-12-28 2021-04-09 北京百度网讯科技有限公司 Training method, classification method, device and equipment of classification model
CN112632351B (en) * 2020-12-28 2024-01-16 北京百度网讯科技有限公司 Classification model training method, classification method, device and equipment
CN113240190A (en) * 2021-06-02 2021-08-10 郑州大学体育学院 Athlete pre-race state evaluation method based on multi-period evolution entropy technology
CN116822875A (en) * 2023-06-28 2023-09-29 浙江海亮科技有限公司 Competition group allocation processing method and device, electronic equipment and storage medium
CN117114509A (en) * 2023-10-20 2023-11-24 中南大学 Method, system, equipment and storage medium for predicting achievement of high-job and highly-created game

Also Published As

Publication number Publication date
CN111695680B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN111695680A (en) Score prediction method, score prediction model training device and electronic equipment
Nan et al. Improving factual consistency of abstractive summarization via question answering
US10713574B2 (en) Cognitive distributed network
CN112560479B (en) Abstract extraction model training method, abstract extraction device and electronic equipment
CN111144507B (en) Emotion analysis model pre-training method and device and electronic equipment
CN111831813B (en) Dialog generation method, dialog generation device, electronic equipment and medium
CN110023928A (en) Forecasting search engine ranking signal value
JP2021108115A (en) Method and device for training machine reading comprehension model, electronic apparatus, and storage medium
CN111737954A (en) Text similarity determination method, device, equipment and medium
CN112163676A (en) Multitask service prediction model training method, device, equipment and storage medium
CN110543558B (en) Question matching method, device, equipment and medium
CN111274397B (en) Method and device for establishing entity relation detection model
CN111259222A (en) Article recommendation method, system, electronic device and storage medium
US11947578B2 (en) Method for retrieving multi-turn dialogue, storage medium, and electronic device
CN112329453B (en) Method, device, equipment and storage medium for generating sample chapter
CN111326251A (en) Method and device for outputting inquiry questions and electronic equipment
CN111695698A (en) Method, device, electronic equipment and readable storage medium for model distillation
CN112529180A (en) Method and apparatus for model distillation
WO2023235346A1 (en) Prompting machine-learned models using chains of thought
CN111611808A (en) Method and apparatus for generating natural language model
CN111563198A (en) Material recall method, device, equipment and storage medium
CN112507104B (en) Dialog system acquisition method, apparatus, storage medium and computer program product
JP7128311B2 (en) Recommended methods, apparatus, electronic devices, readable storage media and computer program products for document types
CN112650844A (en) Tracking method and device of conversation state, electronic equipment and storage medium
CN111709778A (en) Travel flow prediction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant