CN109543841A - Deep learning method, apparatus, electronic equipment and computer-readable medium - Google Patents
- Publication number
- CN109543841A (application number CN201811333128.4A)
- Authority
- CN
- China
- Prior art keywords
- feature
- training
- student
- deep learning
- preprocessing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
- G06N3/045: Neural network architectures; combinations of networks
- G06Q50/205: ICT specially adapted for education; education administration or guidance
Abstract
This disclosure relates to a deep learning method, apparatus, electronic device, and computer-readable medium. A deep learning method for customized training includes: preprocessing student features to obtain preprocessed student features; preprocessing customized-training features to obtain preprocessed customized-training features; and taking the preprocessed student features and the preprocessed customized-training features as input to predict a training result with a deep learning correlation model. The deep learning correlation model includes: a first CNN that receives the preprocessed student features and generates first feature data; a second CNN that receives the preprocessed customized-training features and generates second feature data; a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and an output layer that outputs the predicted training result based on the third feature data from the fully connected layer. Through the customized-training method, training efficiency is improved.
Description
Technical field
This disclosure relates to the field of computer information processing, and in particular to a deep learning method, apparatus, electronic device, and computer-readable medium for customized training.
Background
Self-driving travel has become a common mode of travel, and obtaining a driver's license through examination has therefore become a firm need for many people. At present, people mainly join a driving school's training classes and, after training, take the driving course examination. A driving school typically presets several class types for students joining training, and a student generally chooses one type according to his or her financial situation and available time. However, neither the design of these class types nor the student's choice among them typically takes training efficiency or examination pass rate into account.
Since passing the driving test is the necessary path to a license, predicting the driving-training exam result becomes particularly important for improving the examination pass rate and the efficiency of training.
Therefore, a new method for customizing training that improves training efficiency is needed.
The above information is provided only to reinforce understanding of the background of the disclosure, and may therefore include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary of the invention
The present application provides a deep learning method, apparatus, electronic device, and computer-readable medium for customized training, which can improve training efficiency.
Other features and advantages of the disclosure will become apparent from the following detailed description, or may be learned in part through practice of the disclosure.
According to one aspect of the disclosure, a deep learning method for customized training is provided, comprising: preprocessing student features to obtain preprocessed student features; preprocessing customized-training features to obtain preprocessed customized-training features; and taking the preprocessed student features and the preprocessed customized-training features as input to predict a training result with a deep learning correlation model, wherein the deep learning correlation model comprises: a first CNN that receives the preprocessed student features and generates first feature data; a second CNN that receives the preprocessed customized-training features and generates second feature data; a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and an output layer that outputs the predicted training result based on the third feature data from the fully connected layer.
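The claimed two-branch architecture can be sketched in plain NumPy as below; the 3x3 kernels, layer widths, 6x6 input side, and random weights are illustrative assumptions, not values specified by the patent:

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' 2D convolution (cross-correlation, as in CNN frameworks)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

class CorrelationModel:
    """Two CNN branches whose outputs are merged by a fully connected
    layer; the output layer emits one predicted training result."""
    def __init__(self, side, rng):
        f = side - 2  # feature-map side after one 3x3 valid convolution
        self.k1 = rng.standard_normal((3, 3)) * 0.1   # branch 1: student features
        self.k2 = rng.standard_normal((3, 3)) * 0.1   # branch 2: training features
        self.W_fc = rng.standard_normal((16, 2 * f * f)) * 0.1
        self.w_out = rng.standard_normal(16) * 0.1

    def predict(self, student_mat, training_mat):
        h1 = relu(conv2d_valid(student_mat, self.k1)).ravel()   # first feature data
        h2 = relu(conv2d_valid(training_mat, self.k2)).ravel()  # second feature data
        h3 = relu(self.W_fc @ np.concatenate([h1, h2]))         # third feature data
        return float(self.w_out @ h3)                           # predicted result

rng = np.random.default_rng(0)
model = CorrelationModel(side=6, rng=rng)
score = model.predict(rng.standard_normal((6, 6)), rng.standard_normal((6, 6)))
```

With trained rather than random weights, `score` would be the predicted training result for one (student, training-scheme) pair.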
According to some embodiments, the deep learning method further comprises: extracting student features and training features in row-vector form from history training samples; converting the row-vector student features and training features into a student feature matrix and a training feature matrix, respectively; and training the deep learning correlation model on the student feature matrices and training feature matrices.
According to some embodiments, the history training samples include positive samples and negative samples, and the ratio of positive samples to negative samples is 1:1.
According to some embodiments, the history training samples include positive samples and negative samples, and the ratio of positive samples to negative samples is between 8:1 and 3:1.
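A sample set with a chosen positive-to-negative ratio might be assembled as in the following sketch; the record layout (a `passed` flag) and the strategy of downsampling negatives are assumptions for illustration:

```python
import random

def balance_samples(samples, ratio=1.0, seed=0):
    """Downsample negatives so that len(pos) / len(neg) == ratio
    (e.g. ratio=1.0 for 1:1, ratio=3.0 for 3:1)."""
    pos = [s for s in samples if s["passed"]]
    neg = [s for s in samples if not s["passed"]]
    n_neg = max(1, int(len(pos) / ratio))
    rng = random.Random(seed)
    return pos + rng.sample(neg, min(n_neg, len(neg)))

history = [{"passed": i % 4 == 0} for i in range(100)]  # 25 positive, 75 negative
balanced = balance_samples(history, ratio=1.0)          # 25 positive + 25 negative
```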
According to some embodiments, the deep learning method further comprises: updating the deep learning correlation model with new samples.
According to some embodiments, the first CNN and the second CNN each include an input layer, a convolutional layer, and an activation-function layer.
According to some embodiments, the customized training is customized driving training.
According to some embodiments, the student features include at least two of the following: age, gender, occupation type, education level, time-availability level, self-description options.
According to some embodiments, the student features include a manipulation-type game test result.
According to some embodiments, the customized-training features include: time category, teaching method, coach quantity, vehicle type.
According to some embodiments, the customized-training features further include coach evaluations.
According to some embodiments, the customized-training features include coach attributes.
According to some embodiments, preprocessing the student features comprises: representing the student features in row-vector form and converting the row-vector student features into a student feature matrix; and preprocessing the customized-training features comprises: representing the customized-training features in row-vector form and converting the row-vector customized-training features into a customized-training feature matrix.
According to another aspect of the present invention, a deep learning device for customized training is provided, comprising: a first preprocessing module that preprocesses student features to obtain preprocessed student features; a second preprocessing module that preprocesses customized-training features to obtain preprocessed customized-training features; and a prediction module that takes the preprocessed student features and the preprocessed customized-training features as input and predicts a training result with a deep learning correlation model, wherein the deep learning correlation model comprises: a first CNN that receives the preprocessed student features and generates first feature data; a second CNN that receives the preprocessed customized-training features and generates second feature data; a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and an output layer that outputs the predicted training result based on the third feature data from the fully connected layer.
According to a further aspect of the present invention, an electronic device is provided, comprising: one or more processors; and a storage device for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement any of the foregoing methods.
According to a further aspect of the present invention, a computer-readable medium is provided, on which a computer program is stored; when the program is executed by a processor, any of the foregoing methods is implemented.
According to some embodiments of the present invention, training efficiency is improved through the customized-training method, and the examination pass rate is improved in turn.
According to other embodiments of the present invention, the exam result is predicted from the student's situation and the training method, so that the training method can be adjusted according to the prediction result.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the disclosure.
Brief description of the drawings
The above and other objects, features, and advantages of the disclosure will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings.
Figure 1A shows a block diagram of a system to which a method or device according to an embodiment of the present invention can be applied, according to an exemplary embodiment;
Figure 1B shows a schematic block diagram of the deep learning method for customized training according to an exemplary embodiment of the present invention;
Fig. 2 shows a flowchart of the deep learning method for customized training according to an exemplary embodiment of the present invention;
Fig. 3 shows the deep learning correlation model for customized training according to an embodiment of the present invention;
Fig. 4 shows a method of preprocessing student features and customized-training features according to an example embodiment of the present invention;
Fig. 5 shows converting features in row-vector form into a feature matrix by folding, according to an example embodiment of the present invention;
Fig. 6 shows converting features in row-vector form into a feature matrix by multiplying the row vector with its transpose, according to another embodiment of the present invention;
Fig. 7 shows the process of training the deep learning correlation model with history training samples according to an example embodiment of the present invention;
Fig. 8 schematically shows a block diagram of the deep learning device for customized training according to an example embodiment of the present invention;
Fig. 9 shows a block diagram of an electronic device for customized training according to an exemplary embodiment.
Detailed description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in a variety of forms and should not be understood as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the figures denote the same or similar parts, and repeated description of them will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the disclosure. However, those skilled in the art will appreciate that the technical solutions of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail so as not to obscure aspects of the disclosure.
The block diagrams shown in the drawings do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor must they be executed in the order described. For example, some operations/steps may be decomposed while others may be merged or partially merged, so the order actually executed may change according to the actual situation.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. The terms are used only to distinguish one component from another. Thus, a first component discussed below could be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that the drawings are schematic diagrams of example embodiments, and that the modules or processes in the drawings are not necessarily required for implementing the disclosure and therefore cannot be used to limit its scope of protection.
In the prior art, on the one hand, a student can only select one of the given training class types to join, or join an expensive so-called VIP class; on the other hand, both the student and the training organization are ignorant of the degree of fit between the student and the selected training class, and have no clear, measurable grasp of the likely learning effect and probability of passing the examination.
The present invention proposes a technical concept and scheme: a deep learning correlation model is established based on a deep neural network, and the training result is predicted based on student features and customized-training features, so that the training can be personalized, the training efficiency improved, and in turn the examination pass rate improved. Moreover, by predicting the expected exam result of a specific training method, the training method can be adjusted according to the prediction result.
It is readily appreciated that although the technical concept and scheme of the invention are described below taking driving training as an example, they can also be applied to other training fields, and the protection scope of the present invention covers all fields to which the technical solution of the present invention can be applied.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Figure 1A shows a block diagram of a system to which a method or device according to an embodiment of the present invention can be applied, according to an exemplary embodiment.
As shown in Figure 1A, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and so on. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as prediction applications, web browsers, search applications, instant messaging tools, mailbox clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices with a display screen that support web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a back-end management server that provides prediction processing for information submitted by users through the terminal devices 101, 102, 103. The back-end management server may use a prediction model to perform calculations and other processing on the received information and the data stored in the system, and feed the processing result back to the terminal devices. The server 105 may also carry out other relevant operations and processing according to actual needs. The server 105 may be a single physical server or may, for example, be composed of multiple servers.
Figure 1B shows a schematic block diagram of the deep learning method for customized training according to an exemplary embodiment of the present invention.
As shown in Figure 1B, S102 includes data acquisition, cleaning, and processing. Specifically, history training data is acquired, abnormal training data is cleaned, and dirty data is converted into data that meets data-quality requirements using mathematical statistics, data mining, or predefined cleaning rules.
This process generally requires consistency checks and the handling of invalid values and missing values (including estimation, whole-case deletion, variable deletion, pairwise deletion, etc.). The main data types to be cleaned include incomplete data, erroneous data, and duplicated data. Methods for cleaning data generally include methods for detecting and resolving incomplete data (i.e., missing values), methods for detecting and resolving erroneous values, methods for detecting and removing duplicated records, and methods for detecting and resolving inconsistency (within a data source and between data sources).
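Such a cleaning pass might look like the following sketch, assuming records are plain dictionaries and using hypothetical field names (`student_id`, `score`, `hours`):

```python
def clean(records, required=("student_id", "score")):
    """Drop duplicated records and records with missing required fields;
    impute a missing 'hours' field with the mean of the observed values."""
    seen, kept = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:                                  # duplicated record
            continue
        seen.add(key)
        if any(r.get(f) is None for f in required):      # incomplete record
            continue
        kept.append(dict(r))
    hours = [r["hours"] for r in kept if r.get("hours") is not None]
    mean_hours = sum(hours) / len(hours) if hours else 0.0
    for r in kept:                                       # missing-value estimation
        if r.get("hours") is None:
            r["hours"] = mean_hours
    return kept

raw = [
    {"student_id": 1, "score": 90, "hours": 20},
    {"student_id": 1, "score": 90, "hours": 20},    # duplicate, dropped
    {"student_id": 2, "score": None, "hours": 10},  # missing required field, dropped
    {"student_id": 3, "score": 75, "hours": None},  # kept, 'hours' imputed
]
cleaned = clean(raw)
```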
Then, the data can be statistically summarized and analyzed, and a way of visualizing the statistical results can be provided.
S104 includes feature selection and feature processing. For example, feature processing is performed on the historical data: numeric data undergoes amplitude adjustment/normalization; categorical data (such as text information) is encoded or converted into word vectors; continuous values (duration, etc.) and discrete values (time-point analysis) are extracted from time data; and natural language processing is applied to text data.
Derived features may generally include mean-offset features, quantile features, rank features, proportion features, and so on. A mean-offset feature is, for example, how far this student's exam result is above the average of all students' results (weighing the person's examination ability), or how far the student's consecutive exam days exceed the average (showing the student's stickiness to driving training). A quantile feature is, for example, which quantile the student's training days fall into among all students. A rank feature is which position the student ranks.
S106 includes feature-engineering construction. Historical-data features may include, for example, student features, training features, and other features. Student features may include age, gender, occupation type, education level, time-availability level, self-description options, and so on. Training features may include time category, teaching method, coach attributes, coach quantity, vehicle type, and so on. Other features may include coach evaluations, etc.
Feature selection and feature-engineering construction can be carried out on these features. For example, feature extraction can be performed using PCA (principal component analysis), LDA (linear discriminant analysis), or ICA (independent component analysis). Feature selection rejects irrelevant or redundant features, reduces the number of effective features, shortens model-training time, and improves model accuracy. Statistical methods can be used to measure the relationship between a single feature and the response variable (label).
Of course, with the use of deep neural networks, a large amount of feature engineering may be unnecessary or even omitted entirely: the data can be fed in directly, letting the network train and correct itself.
S108 includes building a deep neural network model, such as a DNN regression model or classification model, and training the model with historical data such as driving-training data. The neural network layers inside a DNN can be divided into three classes: input layer, hidden layers, and output layer. The first layer is the input layer, the last layer is the output layer, and everything in between is hidden layers. Using the DNN forward-propagation algorithm and a configured loss function, the neural network is trained and used for prediction.
Besides a deep neural network model, other methods can also be used to construct, for example, a logistic regression model, linear regression model, support vector regression model, polynomial regression model, ridge regression model, Lasso regression model, elastic net regression model, and so on.
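The forward propagation and loss setup described for S108 can be sketched as below; the layer sizes and the squared-error loss are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [8, 16, 16, 1]   # input layer, two hidden layers, output layer
Ws = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    """DNN forward propagation: affine + ReLU for hidden layers, linear output."""
    a = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        a = np.maximum(W @ a + b, 0.0)
    return Ws[-1] @ a + bs[-1]

def mse_loss(pred, target):
    """Squared-error loss used to drive training."""
    return float(np.mean((pred - target) ** 2))

x = rng.standard_normal(8)
loss = mse_loss(forward(x), np.array([1.0]))
```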
S110 includes prediction using the model. The student features are represented in row-vector form, and the row-vector student features are converted into a student feature matrix; the customized-training features are represented in row-vector form, and the row-vector customized-training features are converted into a customized-training feature matrix. Taking the feature matrices as input, the model predicts the training result.
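The two row-vector-to-matrix conversions later pictured in Fig. 5 (folding) and Fig. 6 (multiplication with the transpose) might be sketched as follows; zero-padding the folded tail is an assumption:

```python
import numpy as np

def fold_to_matrix(v, cols):
    """Fig. 5 style: fold a row vector into a matrix with `cols` columns,
    zero-padding the tail if the length is not a multiple of `cols`."""
    v = np.asarray(v, dtype=float)
    rows = -(-v.size // cols)      # ceiling division
    padded = np.zeros(rows * cols)
    padded[:v.size] = v
    return padded.reshape(rows, cols)

def outer_to_matrix(v):
    """Fig. 6 style: multiply the row vector with its own transpose,
    yielding an n x n matrix of pairwise feature products."""
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return v @ v.T

v = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
folded = fold_to_matrix(v, cols=3)  # shape (2, 3)
outer = outer_to_matrix(v)          # shape (6, 6)
```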
Having described the general principles above, the technical solution according to the technical concept of the present invention is illustrated below with reference to the accompanying drawings.
Fig. 2 shows a flowchart of the deep learning method for customized training according to an exemplary embodiment of the present invention.
As shown in Fig. 2, in S202 the student features are preprocessed to obtain preprocessed student features.
In accordance with the technical idea of the present invention, a concrete training scheme is customized according to the student features. Taking customized driving training as an example, according to some embodiments the student features include at least two of the following: age, gender, occupation type, education level, time-availability level, self-description options. The occupation type can be, for example, research and development, administration and sales, or equipment operation. The time-availability level can be divided into fully guaranteed, mostly guaranteed, partially guaranteed, and hard to guarantee, or a corresponding scoring system can be used. The self-description may include, for example, degree of carefulness, degree of patience, coordination, and degree of fondness for driving.
According to other embodiments, the student features include a manipulation-type game test result. Driving is a skill related to manipulation ability. Therefore, by testing students in advance with a manipulation-type game, such as a driving game, the student's aptitude can be better understood so as to provide better-matched training.
In accordance with the technical idea of the present invention, the training result is predicted with a deep neural network. For this reason, the student features need to be preprocessed and converted into input data that the deep neural network can consume. This will be described in detail by example further below.
In S204, the customized-training features are preprocessed to obtain preprocessed customized-training features.
The customized-training features mentioned here may be the features of each existing training class type, or the features of various customized trainings combined from different training methods.
Taking driving training as an example, some driving schools offer weekday classes, all-day classes, ordinary VIP all-day classes, VIP classes, and so on. The differences among these class types essentially lie in the vehicle-booking time and whether the coach is fixed. In addition, there are differences between one-on-one teaching and group teaching, and between automatic-transmission and manual-transmission vehicles. In the prior art, a student's choice is relatively limited: only fixed class types can be selected. So-called customization amounts only to better resources with a corresponding increase in fees, and quantitative analysis and prediction based on the student's concrete situation is rare.
According to some embodiments of the present invention, the method of the invention is applied to customized driving training. The customized-training features include: time category, teaching method, coach quantity, vehicle type. The time category can be, for example, weekday class or all-day class. The teaching method can be one-on-one or group teaching. The coach quantity can be 1 or 1/N (with N coaches, i.e., the reciprocal 1/N). The vehicle type can be automatic or manual transmission.
According to some embodiments, on top of time category, teaching method, coach quantity, and vehicle type, the customized-training features may also include coach evaluations given by students, such as an overall score or categorized scores, e.g., patience, teaching skill, attitude.
According to some embodiments, the customized training feature includes coach attributes. Statistically, the coach determines the training result to a large extent. In addition, different students adapt better to different types of coaches, and vice versa. Coach attributes may include the coach's age, personality category (e.g., introverted, extroverted), teaching experience, driving experience, historical pass rate, and so on. When computing the historical pass rate, if a student has trained with N coaches, the student is counted as 1/N of a student for each of those coaches.
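The 1/N counting rule described above can be illustrated with a short sketch (the training records below are hypothetical examples, not data from the disclosure):

```python
from collections import defaultdict

# Each record: (student_id, coach_id, passed). A student who trained with
# N coaches contributes 1/N of a student to each of those coaches.
records = [
    ("s1", "c1", True),   # s1 trained with two coaches -> weight 1/2 each
    ("s1", "c2", True),
    ("s2", "c1", False),  # s2 and s3 each had a single coach -> weight 1
    ("s3", "c1", True),
]

coaches_per_student = defaultdict(set)
for sid, cid, _ in records:
    coaches_per_student[sid].add(cid)

totals = defaultdict(float)   # fractional student count per coach
passes = defaultdict(float)   # fractional passing count per coach
for sid, cid, passed in records:
    w = 1.0 / len(coaches_per_student[sid])
    totals[cid] += w
    if passed:
        passes[cid] += w

pass_rate = {cid: passes[cid] / totals[cid] for cid in totals}
print(pass_rate)  # c1: 1.5 / 2.5 = 0.6, c2: 0.5 / 0.5 = 1.0
```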
In accordance with the technical concept of the present invention, a deep neural network is used to predict the training result. To this end, the customized training feature needs to be preprocessed and converted into input data that the deep neural network can process. This will be described in detail by way of example below.
In S206, the preprocessed student feature and the preprocessed customized training feature are taken as inputs, and the training result is predicted using a deep learning correlation model.
In accordance with the technical concept of the present invention, the training result is related both to the student and to the training method. However, traditional approaches do not select, adjust, or customize training based on the combination of the two. On the other hand, although the training result is related to the student and the training method, the prior art has difficulty finding the relationship or association between the student, the training method, and the training result.
According to the technical solution of the present invention, a deep learning model is used to find this relationship. However, the present invention does not simply apply an existing neural network model. Considering that the student feature and the customized training feature are not of the same kind, yet both are associated with the training result, the present invention uses a correlation model: two networks process the different input features separately, and the results are then processed jointly, as described later. In this way, the model can be trained quickly and relatively accurate prediction results can be obtained.
Fig. 3 shows a deep learning correlation model for customized training according to an embodiment of the present invention.
As shown in Fig. 3, the deep learning correlation model according to an embodiment of the present invention includes: a first CNN network 301, which receives the preprocessed student feature and generates first feature data; a second CNN network 303, which receives the preprocessed customized training feature and generates second feature data; a fully connected layer 305, which receives the first feature data and the second feature data and generates third feature data; and an output layer 307, which outputs a predicted training result based on the third feature data from the fully connected layer.
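As a rough illustration only, a forward pass through this two-branch structure might be sketched in plain NumPy as follows. The kernel size, matrix dimensions, single convolution filter, and sigmoid output are assumptions made for brevity; the disclosure does not fix these hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BranchCNN:
    """One branch: input layer -> convolutional layer -> activation layer."""
    def __init__(self, kernel_size=3):
        # small, distinct random weights, per the initialization note below
        self.kernel = rng.normal(0.0, 0.01, (kernel_size, kernel_size))
    def forward(self, feature_matrix):
        return relu(conv2d_valid(feature_matrix, self.kernel)).ravel()

class CorrelationModel:
    def __init__(self, student_dim, training_dim, kernel_size=3):
        self.branch_student = BranchCNN(kernel_size)   # first CNN network 301
        self.branch_training = BranchCNN(kernel_size)  # second CNN network 303
        f = kernel_size - 1
        fc_in = (student_dim - f) ** 2 + (training_dim - f) ** 2
        self.W = rng.normal(0.0, 0.01, fc_in)          # fully connected layer 305
        self.b = 0.0
    def predict(self, student_matrix, training_matrix):
        first = self.branch_student.forward(student_matrix)     # first feature data
        second = self.branch_training.forward(training_matrix)  # second feature data
        third = np.concatenate([first, second])                 # third feature data
        return sigmoid(self.W @ third + self.b)                 # output layer 307

model = CorrelationModel(student_dim=8, training_dim=6)
p = model.predict(rng.random((8, 8)), rng.random((6, 6)))  # a pass probability
```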
Deep learning is a special kind of machine learning; it is essentially a deep neural network (DNN). A deep neural network with multiple hidden layers has excellent feature-learning ability, and the learned features characterize the data more essentially. The power of a deep neural network lies in its multi-layer structure, which can learn features automatically and at many levels: shallower convolutional layers have smaller receptive fields and learn features of local regions, while deeper convolutional layers have larger receptive fields and can learn more abstract features. A variety of deep neural network models have been proposed, and further deep neural network models continue to be explored and proposed.
A convolutional neural network (Convolutional Neural Network, CNN), as one kind of deep neural network, can learn the mapping between a large number of inputs and outputs without requiring any precise mathematical expression relating inputs to outputs. A CNN can convert data in two-dimensional form into one-dimensional output vectors for a preset number of categories while retaining the characteristic information of the data, and has strong data-fitting and generalization abilities. As long as the convolutional network is trained on known patterns, it acquires the mapping ability between input-output pairs. A convolutional network performs supervised training, so its sample set consists of vector pairs of the form (input vector, ideal output vector). All of these vector pairs should be derived from the actual running results of the system that the network is to simulate; they may be collected from the actual running system. Before training starts, all weights should be initialized with small, distinct random numbers. "Small" ensures that the network does not enter a saturated state due to excessive weights and thus fail to train; "distinct" ensures that the network can learn normally.
In the present embodiment, the preprocessed student feature is received by the first CNN network 301 to generate first feature data; at the same time, the preprocessed customized training feature is received by the second CNN network 303 to generate second feature data. In this way, the excellent feature-learning and data-generalization abilities of CNNs assist in establishing the mapping between input-output pairs.
According to some embodiments, the first CNN network 301 and the second CNN network 303 may each include an input layer, a convolutional layer, and an activation function layer. Input layers, convolutional layers, and activation function layers are well known to those skilled in the art and are not described further here.
Then, the first feature data and the second feature data are received by the fully connected layer 305 to generate third feature data.
The fully connected layer 305 acts as a "classifier". If the first and second CNN networks 301 and 303 map the raw data into a hidden-layer feature space, the fully connected layer maps the learned "distributed feature representation" into the sample label space. The fully connected layer 305 is a straightforward way to learn nonlinear combinations of these features. Most of the features obtained from the CNN networks may be useful for the classification task, but combinations of these features may be better still. In accordance with the technical concept of the present invention, the student feature input and the customized training feature input each pass through their own CNN network for feature learning and extraction, and the fully connected layer then performs joint learning. In this way, features can be learned separately by the two CNN networks and the learned features can also be combined, thereby improving prediction accuracy.
Finally, the output layer 307 outputs the predicted training result based on the third feature data from the fully connected layer, for example as a classification result or a regression result. The output layer 307 may have no activation function and directly output a score, treating the task as a regression problem. According to other embodiments, the output layer 307 may use a sigmoid activation function and output a final pass-or-fail prediction. These are merely examples, and the technical solution of the present invention is not limited thereto.
According to some embodiments, the prediction result is a quantized probability value. When the output probability exceeds a preset first probability threshold, the prediction result is determined to be "pass"; otherwise it is "fail".
According to other embodiments, the prediction result may be output directly as a score, for example a value between 0 and 1 that is then scaled up proportionally to a normal exam score, from which it can be determined whether the exam result is a pass or a fail.
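The two output variants can be sketched as follows; the 0.5 threshold and the 100-point exam scale are illustrative assumptions, since the disclosure leaves the preset first probability threshold and the score scaling unspecified:

```python
def classify(prob, threshold=0.5):
    """Variant 1: compare the predicted probability with a preset
    first probability threshold to decide pass / fail."""
    return "pass" if prob > threshold else "fail"

def to_exam_score(raw, full_marks=100):
    """Variant 2: the regression output in [0, 1] is scaled up
    proportionally to a normal exam score."""
    return raw * full_marks

print(classify(0.73))        # "pass"
print(to_exam_score(0.75))   # 75.0
```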
Fig. 4 shows a method of preprocessing the student feature and the customized training feature according to an example embodiment of the present invention.
As shown in Fig. 4, in S401, the student feature is represented in row-vector form.
In S403, the customized training feature is represented in row-vector form.
To process natural language with a computer, the natural language must first be turned into symbols that the machine can recognize and, for machine learning, quantified.
The original form of the student feature and the customized training feature is generally a character string; after these original characters are spliced head to tail, a feature in row-vector form is obtained.
According to some embodiments of the present invention, part of the feature values can be represented directly in binary form, while another part of the feature values is first transformed into derived features whose values are then represented in binary form. Concatenating these binary sequences yields a feature in row-vector form. For example, for the vehicle type, 0 or 00 may represent a manual transmission and 1 or 01 an automatic transmission. For age, age brackets can be defined: for example, 18-28 is represented as a first age-bracket feature; if the student's age falls in this range, the value of the first age-bracket feature is 1 and the values of the other age-bracket features are correspondingly 0. Alternatively, the age brackets can each be encoded as, for example, 000, 001, 010, 011, 100, and so on. Other features can be processed similarly.
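A minimal sketch of this encoding, using the manual/automatic example from the text and a hypothetical set of age brackets:

```python
def encode_vehicle(vehicle):
    # manual -> 0, automatic -> 1, as in the text
    return [1] if vehicle == "automatic" else [0]

AGE_BRACKETS = [(18, 28), (29, 38), (39, 48), (49, 60)]  # illustrative brackets

def encode_age(age):
    # one-hot: the bracket containing the age is 1, all others 0
    return [1 if lo <= age <= hi else 0 for lo, hi in AGE_BRACKETS]

def encode_student(student):
    # head-to-tail splicing of the per-feature binary sequences
    return encode_age(student["age"]) + encode_vehicle(student["vehicle"])

row = encode_student({"age": 25, "vehicle": "automatic"})
print(row)  # [1, 0, 0, 0, 1]
```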
In S405, the student feature in row-vector form is converted into a student feature matrix.
In S407, the customized training feature in row-vector form is converted into a customized training feature matrix.
The purpose of converting features from row-vector form into matrix form is to meet the input-data format requirements of the CNN networks.
Indeed, there are many ways to convert the natural-language student feature and customized training feature into feature matrices. For example, this can be achieved with one-hot matrices or word2vec.
According to some embodiments, after the feature row vector is obtained, the feature in row-vector form can be converted into a feature matrix by folding, as shown in Fig. 5.
According to other embodiments, the feature in row-vector form is converted into a feature matrix by multiplying the row vector by its transpose, as shown in Fig. 6.
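The two conversion variants (folding as in Fig. 5, and multiplying the row vector by its transpose as in Fig. 6) can be sketched as follows; the 9-element vector is an arbitrary example, and folding is shown only for a length that is already a perfect square (otherwise padding would be needed):

```python
import numpy as np

row = np.array([1, 0, 0, 0, 1, 1, 0, 1, 0])  # a 9-element feature row vector

# Variant 1 (Fig. 5): fold the row vector into a square matrix.
folded = row.reshape(3, 3)

# Variant 2 (Fig. 6): multiply the row vector by its transpose
# (an outer product), giving an n x n matrix.
outer = np.outer(row, row)

print(folded.shape, outer.shape)  # (3, 3) (9, 9)
```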
Fig. 7 shows a process of training the deep learning correlation model with historical training samples according to an example embodiment of the present invention.
As shown in Fig. 7, in S701, the student feature and the training feature in row-vector form are extracted from the historical training samples.
As before, the original form of the student feature and the customized training feature is generally a character string; after these original characters are spliced head to tail, a feature in row-vector form is obtained.
According to some embodiments of the present invention, as described above, part of the feature values can be represented directly in binary form, while another part of the feature values is first transformed into derived features represented in binary form, and the binary sequences are concatenated into a row vector. For example, 0 or 00 may represent a manual transmission and 1 or 01 an automatic transmission, and age can be encoded with one-hot age-bracket features or with brackets encoded as 000, 001, 010, 011, 100, and so on. Other features can be processed similarly.
In S703, the student feature and the training feature in row-vector form are converted into a student feature matrix and a training feature matrix, respectively.
As before, the purpose of converting features from row-vector form into matrix form is to meet the input-data format requirements of the CNN networks.
Indeed, there are many ways to perform this conversion, for example with one-hot matrices or word2vec. As described above, the row vector can be folded into a feature matrix as shown in Fig. 5, or multiplied by its transpose as shown in Fig. 6.
In S705, the deep learning correlation model is trained with the student feature matrices and the training feature matrices.
According to an example embodiment, the exam results in the historical training samples are used as sample labels, and the labeled samples are then used to train the deep learning correlation model. The exam score can be used directly as the label, or pass/fail can be used as the sample label.
According to some embodiments, the historical training samples include positive samples and negative samples: passing the exam is a positive sample and failing is a negative sample. The ratio of positive to negative samples is 1:1. According to other embodiments, the ratio of positive to negative samples is 8:1 to 3:1.
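The disclosure does not say how the stated positive-to-negative ratios are achieved; one common approach is to subsample the over-represented class, sketched below with hypothetical data:

```python
import random

def balance(samples, ratio=1.0, seed=0):
    """Subsample positives so that len(pos) / len(neg) is at most `ratio`.
    Each sample is (features, passed)."""
    pos = [s for s in samples if s[1]]
    neg = [s for s in samples if not s[1]]
    rng = random.Random(seed)
    max_pos = int(ratio * len(neg))
    if len(pos) > max_pos:
        pos = rng.sample(pos, max_pos)
    return pos + neg

# hypothetical history: every 4th student failed -> 75 pass, 25 fail
history = [((i,), i % 4 != 0) for i in range(100)]
balanced = balance(history, ratio=1.0)
pos_n = sum(1 for _, passed in balanced if passed)
neg_n = len(balanced) - pos_n
print(pos_n, neg_n)  # 25 25
```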
According to some embodiments, as training samples accumulate and grow, or as exam conditions change, the deep learning correlation model is updated with new samples.
It will be appreciated by those skilled in the art that all or some of the steps of the above embodiments may be implemented as a computer program executed by a CPU. When the computer program is executed by the CPU, the functions defined by the methods provided by the present disclosure are performed. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
Further, it should be noted that the above drawings are merely schematic illustrations of the processing included in the methods according to exemplary embodiments of the present disclosure and are not intended to be limiting. It is readily understood that the processing shown in the drawings does not indicate or limit the temporal order of that processing. It is also readily understood that the processing may be performed, for example, synchronously or asynchronously in multiple modules.
The following are apparatus embodiments of the present disclosure, which can be used to perform the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present disclosure.
Fig. 8 schematically shows a block diagram of a deep learning apparatus for customized training according to an example embodiment of the present invention.
As shown in Fig. 8, the deep learning apparatus 800 for customized training according to an example embodiment includes a first preprocessing module 810, a second preprocessing module 820, and a prediction module 830.
The first preprocessing module 810 preprocesses the student feature to obtain a preprocessed student feature. The second preprocessing module 820 preprocesses the customized training feature to obtain a preprocessed customized training feature. The prediction module 830 takes the preprocessed student feature and the preprocessed customized training feature as inputs and predicts the training result using the deep learning correlation model. The deep learning correlation model is as described above and is not described again here.
The apparatus shown in Fig. 8 corresponds to the method described above; a detailed description is omitted here.
It will be appreciated by those skilled in the art that the above modules may be distributed in an apparatus according to the description of the embodiments, or may be varied accordingly to reside in one or more apparatuses different from this embodiment. The modules of the above embodiments may be combined into one module or further split into multiple sub-modules.
Fig. 9 shows a block diagram of an electronic device for customized training according to an exemplary embodiment.
The electronic device 900 according to this embodiment of the present disclosure is described below with reference to Fig. 9. The electronic device 900 shown in Fig. 9 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in Fig. 9, the computer system 900 includes a central processing unit (CPU) 901, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage section 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for system operation. The CPU 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The I/O interface 905 is connected to the following components: an input section 906 including a touch screen, a keyboard, and the like; an output section 907 including a liquid crystal display (LCD), a speaker, and the like; a storage section 908 including flash memory and the like; and a communication section 909 including a wireless network card, a high-speed network card, and the like. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a semiconductor memory or a magnetic disk, is mounted on the drive 910 as needed, so that a computer program read therefrom can be installed into the storage section 908 as needed.
From the description of the above embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Accordingly, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network, and which includes instructions that cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, or the like) to perform the method according to the embodiments of the present disclosure.
The method and apparatus for customized training, the electronic device, and the medium according to embodiments of the present invention have been described above. From the above detailed description, those skilled in the art will readily appreciate that the method and apparatus according to embodiments of the present invention have one or more of the following advantages.
According to some embodiments of the present invention, the method of customized training improves training efficiency and thereby improves the exam pass rate.
According to other embodiments of the present invention, the exam score of a training program is predicted from the student's situation and the training method, so that the training method can be adjusted according to the prediction result.
The present invention includes the following embodiments:
Embodiment 1. A deep learning method for customized training, comprising:
preprocessing a student feature to obtain a preprocessed student feature;
preprocessing a customized training feature to obtain a preprocessed customized training feature; and
taking the preprocessed student feature and the preprocessed customized training feature as inputs and predicting a training result using a deep learning correlation model,
wherein the deep learning correlation model comprises:
a first CNN network that receives the preprocessed student feature and generates first feature data;
a second CNN network that receives the preprocessed customized training feature and generates second feature data;
a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and
an output layer that outputs the predicted training result based on the third feature data from the fully connected layer.
Embodiment 2. The deep learning method of Embodiment 1, further comprising:
extracting a student feature and a training feature in row-vector form from historical training samples;
converting the student feature and the training feature in row-vector form into a student feature matrix and a training feature matrix, respectively; and
training the deep learning correlation model with the student feature matrix and the training feature matrix.
Embodiment 3. The deep learning method of Embodiment 2, wherein the historical training samples include positive samples and negative samples, and the ratio of positive to negative samples is 1:1.
Embodiment 4. The deep learning method of Embodiment 2, wherein the historical training samples include positive samples and negative samples, and the ratio of positive to negative samples is 8:1 to 3:1.
Embodiment 5. The deep learning method of Embodiment 2, further comprising: updating the deep learning correlation model with new samples.
Embodiment 6. The deep learning method of Embodiment 1, wherein the first CNN network and the second CNN network each include an input layer, a convolutional layer, and an activation function layer.
Embodiment 7. The deep learning method of Embodiment 1, wherein the customized training is customized driving training.
Embodiment 8. The deep learning method of Embodiment 7, wherein the student feature includes at least two of the following: age, gender, occupation type, education level, time availability, and self-description options.
Embodiment 9. The deep learning method of Embodiment 7, wherein the student feature includes a control-type game test result.
Embodiment 10. The deep learning method of Embodiment 7, wherein the customized training feature includes: time category, teaching method, coach quantity, and vehicle type.
Embodiment 11. The deep learning method of Embodiment 10, wherein the customized training feature further includes a coach evaluation.
Embodiment 12. The deep learning method of Embodiment 7, wherein the customized training feature includes coach attributes.
Embodiment 13. The deep learning method of Embodiment 1, wherein
preprocessing the student feature comprises: representing the student feature in row-vector form; and converting the student feature in row-vector form into a student feature matrix; and
preprocessing the customized training feature comprises: representing the customized training feature in row-vector form; and converting the customized training feature in row-vector form into a customized training feature matrix.
Embodiment 14. A deep learning apparatus for customized training, comprising:
a first preprocessing module that preprocesses a student feature to obtain a preprocessed student feature;
a second preprocessing module that preprocesses a customized training feature to obtain a preprocessed customized training feature; and
a prediction module that takes the preprocessed student feature and the preprocessed customized training feature as inputs and predicts a training result using a deep learning correlation model,
wherein the deep learning correlation model comprises:
a first CNN network that receives the preprocessed student feature and generates first feature data;
a second CNN network that receives the preprocessed customized training feature and generates second feature data;
a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and
an output layer that outputs the predicted training result based on the third feature data from the fully connected layer.
Embodiment 15. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the deep learning method of any one of Embodiments 1-13.
Embodiment 16. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the deep learning method of any one of Embodiments 1-13.
Other embodiments of the invention will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the invention and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be considered illustrative only, with the true scope and spirit of the invention indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.
Claims (10)
1. A deep learning method for customized training, comprising:
preprocessing a student feature to obtain a preprocessed student feature;
preprocessing a customized training feature to obtain a preprocessed customized training feature; and
taking the preprocessed student feature and the preprocessed customized training feature as inputs and predicting a training result using a deep learning correlation model,
wherein the deep learning correlation model comprises:
a first CNN network that receives the preprocessed student feature and generates first feature data;
a second CNN network that receives the preprocessed customized training feature and generates second feature data;
a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and
an output layer that outputs the predicted training result based on the third feature data from the fully connected layer.
2. The deep learning method of claim 1, further comprising:
extracting a student feature and a training feature in row-vector form from historical training samples;
converting the student feature and the training feature in row-vector form into a student feature matrix and a training feature matrix, respectively; and
training the deep learning correlation model with the student feature matrix and the training feature matrix.
3. The deep learning method of claim 2, wherein the historical training samples include positive samples and negative samples, and the ratio of positive to negative samples is 8:1 to 3:1, for example 1:1.
4. The deep learning method of claim 2, further comprising:
updating the deep learning correlation model with new samples.
5. The deep learning method of claim 1, wherein the first CNN network and the second CNN network each include an input layer, a convolutional layer, and an activation function layer.
6. The deep learning method of claim 1, wherein the customized training is customized driving training.
7. The deep learning method of claim 1, wherein
preprocessing the student feature comprises: representing the student feature in row-vector form; and converting the student feature in row-vector form into a student feature matrix; and
preprocessing the customized training feature comprises: representing the customized training feature in row-vector form; and converting the customized training feature in row-vector form into a customized training feature matrix.
8. A deep learning apparatus for customized training, comprising:
a first preprocessing module that preprocesses a student feature to obtain a preprocessed student feature;
a second preprocessing module that preprocesses a customized training feature to obtain a preprocessed customized training feature; and
a prediction module that takes the preprocessed student feature and the preprocessed customized training feature as inputs and predicts a training result using a deep learning correlation model,
wherein the deep learning correlation model comprises:
a first CNN network that receives the preprocessed student feature and generates first feature data;
a second CNN network that receives the preprocessed customized training feature and generates second feature data;
a fully connected layer that receives the first feature data and the second feature data and generates third feature data; and
an output layer that outputs the predicted training result based on the third feature data from the fully connected layer.
9. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the deep learning method of any one of claims 1-7.
10. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the deep learning method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811333128.4A CN109543841A (en) | 2018-11-09 | 2018-11-09 | Deep learning method, apparatus, electronic equipment and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109543841A true CN109543841A (en) | 2019-03-29 |
Family
ID=65846692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811333128.4A Pending CN109543841A (en) | 2018-11-09 | 2018-11-09 | Deep learning method, apparatus, electronic equipment and computer-readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543841A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011074714A1 (en) * | 2009-12-15 | 2011-06-23 | 주식회사 아이싸이랩 | Method for intelligent personalized learning service |
CN105868317A (en) * | 2016-03-25 | 2016-08-17 | 华中师范大学 | Digital education resource recommendation method and system |
CN106408475A (en) * | 2016-09-30 | 2017-02-15 | 中国地质大学(北京) | Online course applicability evaluation method |
CN108109089A (en) * | 2017-12-15 | 2018-06-01 | 华中师范大学 | A computable method for education |
CN108171358A (en) * | 2017-11-27 | 2018-06-15 | 科大讯飞股份有限公司 | Result prediction method and device, storage medium, electronic equipment |
CN108596804A (en) * | 2018-04-28 | 2018-09-28 | 重庆玮宜电子科技有限公司 | Multithreading online education evaluation method |
Non-Patent Citations (1)
Title |
---|
ZHU, Liuqing: "Research on Course Recommendation and Learning Prediction Models Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110060538A (en) * | 2019-04-08 | 2019-07-26 | 上海云之驾科技股份有限公司 | Personalized artificial-intelligence driver-training system and method based on historical data modeling |
CN112001656A (en) * | 2020-09-01 | 2020-11-27 | 北京弘远博学科技有限公司 | Method for targeted training course recommendation based on employees' historical training information |
CN115409104A (en) * | 2022-08-25 | 2022-11-29 | 贝壳找房(北京)科技有限公司 | Method, apparatus, device, medium and program product for identifying object type |
CN116339899A (en) * | 2023-05-29 | 2023-06-27 | 内江师范学院 | Desktop icon management method and device based on artificial intelligence |
CN116339899B (en) * | 2023-05-29 | 2023-08-01 | 内江师范学院 | Desktop icon management method and device based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Su et al. | Exercise-enhanced sequential modeling for student performance prediction | |
Kruschke | Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan | |
Dicheva et al. | Gamification in education: Where are we in 2015? | |
CN109543841A (en) | Deep learning method, apparatus, electronic equipment and computer-readable medium | |
Thompson | Computer adaptive testing, big data and algorithmic approaches to education | |
US20170372628A1 (en) | Adaptive Reading Level Assessment for Personalized Search | |
US8832117B2 (en) | Apparatus, systems and methods for interactive dissemination of knowledge | |
CN109582956A (en) | text representation method and device applied to sentence embedding | |
CN111159419B (en) | Knowledge tracking data processing method, system and storage medium based on graph convolution | |
KR102506132B1 (en) | Method and device for personalized learning amount recommendation based on self-attention mechanism | |
CN108629497A (en) | Course content Grasping level evaluation method and device | |
Winn | Model-based machine learning | |
Dale et al. | The observer's observer's paradox | |
Yang et al. | Using an ANN-based computational model to simulate and evaluate Chinese students’ individualized cognitive abilities important in their English acquisition | |
Cheng | Data-mining research in education | |
KR20230101668A (en) | Method and apparatus for recommending learning amount using clustering and artificial intelligence using gaussian mixed model at the same time | |
Pistilli et al. | Guiding early and often: Using curricular and learning analytics to shape teaching, learning, and student success in gateway courses | |
Unhapipat et al. | Bayesian predictive inference for zero-inflated Poisson (ZIP) distribution with applications | |
Pillai K et al. | Technological leverage in higher education: an evolving pedagogy | |
Killeen | The structure of scientific evolution | |
CN109948930A (en) | Deep learning method and its application for driving training planning | |
Mutono et al. | An investigation of Mobile learning readiness for Post-School Education and Training in South Africa using the Technology Acceptance model | |
Chamberlain et al. | Library reorganization, chaos, and using the core competencies as a guide | |
CN110991172B (en) | Domain name recommendation method, domain name recommendation model training method and electronic equipment | |
CN116954418A (en) | Exhibition hall digital person realization method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2019-03-29 ||