CN115830405B - Method and system for analyzing tagged user capability portrayal


Info

Publication number
CN115830405B
Authority
CN
China
Prior art keywords
learning
training
user
label
target user
Prior art date
Legal status
Active
Application number
CN202310102294.8A
Other languages
Chinese (zh)
Other versions
CN115830405A (en)
Inventor
郑楠
曹鹏宇
杨连增
Current Assignee
Guoxin Blue Bridge Education Technology Co., Ltd.
Original Assignee
Guoxin Blue Bridge Education Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guoxin Blue Bridge Education Technology Co., Ltd.
Priority to CN202310102294.8A
Publication of CN115830405A
Application granted
Publication of CN115830405B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a labeled user capability portrait analysis method and system. Historical learning features of a target user are extracted from historical course data, and historical learning labels of the target user are predicted from them. Next-period learning features of the target user are then predicted from the historical learning features, the historical and next-period learning features are fused into comprehensive learning features, next-period learning labels are predicted from the comprehensive learning features, and the next-period learning labels are corrected against the historical learning labels to obtain the target user's learning capability labels. These labels accurately describe and reflect the target user's learning ability and receptiveness to the relevant knowledge. On this basis, the learning capability labels are further optimized and adjusted, and the resulting user capability portrait more accurately reflects the target user's real situation.

Description

Method and system for analyzing tagged user capability portrayal
Technical Field
The invention relates to the field of computer technology, and in particular to a labeled user capability portrait analysis method and system.
Background
A user portrait, also known as a user persona, is widely used across many fields as an effective tool for profiling target users and connecting user demands with design direction. A user portrait is a labeled user model abstracted from information such as a user's social attributes, living habits, and behavior; it comprises multiple user labels, each characterizing some trait of the user. In practice, the most salient and vivid terms are used to link a user's attributes, behavior, and expected data conversion, serving as a virtual stand-in for the real user.
In an age of rapidly developing information technology, users do many things on the internet, such as studying and shopping online. Recommending suitable products and courses to users according to their characteristics (their user portraits) therefore has great social significance and economic value.
At present, a user's portrait is mainly built from the basic attribute information labels the user fills in. In practice, however, the information a user fills in is not necessarily accurate; it may be false or entered at random. Moreover, a user's basic information does not necessarily characterize the user accurately. For example, a student may have graduated from a relatively low-ranked school yet have strong ability and a high degree of knowledge mastery; if a relatively basic course is recommended to that student based on the school attribute alone, the recommendation will not suit the student.
Disclosure of Invention
The invention aims to provide a labeled user capability portrait analysis method and system that solve the above problems in the prior art.
In a first aspect, an embodiment of the present invention provides a labeled user capability portrait analysis method, including:
obtaining basic attribute information and historical course data of a target user; the basic attribute information includes school information, major information, and competition participation information; the historical course data includes: the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts, wherein the learning time set includes a plurality of target learning durations;
inputting the historical course data into a pre-trained user capability portrait estimation model, which estimates a learning capability label of the target user;
generating a user capability portrait of the target user according to the basic attribute information and the learning capability label; the user capability portrait includes a plurality of user labels, and different user labels characterize the target user's traits in different dimensions;
wherein the user capability portrait estimation model includes a CNN network, a first RNN network, a second RNN network, an LSTM network, a hybrid pyramid structure network, and a CTC loss layer; the CNN network extracts historical learning features of the target user from the historical course data, and the first RNN network predicts historical learning labels of the target user from the historical learning features; the LSTM network predicts the target user's next-period learning features from the historical learning features; the hybrid pyramid structure network fuses the historical learning features with the next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label from the comprehensive learning features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the learning capability label of the target user.
Optionally, the method further comprises:
recommending, based on the user capability portrait, courses conforming to the user capability portrait to the target user.
Optionally, the training method of the user capability portrait estimation model includes:
obtaining a training set, wherein the training set includes a plurality of training subsets corresponding to a plurality of training users, one subset per training user; each training subset includes training data for a plurality of subjects, and each piece of training data includes the training user's course-selection subject, course-selection time, learning time set, and repeated-learning count; the training user's learning time set includes a plurality of training learning durations;
inputting the training subsets into the CNN network, which extracts a first learning feature map for each training user from the training data of the plurality of subjects in that user's training subset, wherein the first learning feature map includes a plurality of first learning feature sequences, each representing the learning characteristics of one subject;
learning the plurality of first learning feature sequences through the first RNN network to predict a first training label of the training user;
predicting from the plurality of first learning feature sequences through the LSTM network to correspondingly obtain a plurality of second learning feature sequences;
forming the plurality of second learning feature sequences into a second learning feature map;
fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain a third learning feature map, wherein the third learning feature map includes a plurality of third learning feature sequences;
learning the plurality of third learning feature sequences through the second RNN network to predict a second training label of the training user;
performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain a predicted label;
and when the difference index between the predicted label and the first training label converges, determining that training of the user capability portrait estimation model is complete.
Optionally, the hybrid pyramid structure network includes a first pyramid structure and a second pyramid structure, and fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain the third learning feature map includes:
performing a dimension-reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension-reduced feature map;
performing a dimension-reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension-reduced feature map;
and using the second dimension-reduced feature map as a convolution kernel, performing a convolution operation on the first dimension-reduced feature map to obtain the third learning feature map.
Optionally, performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain the predicted label includes:
splicing the first training label and the second training label into a fused label of length M+N, where M is the length of the first training label and N is the length of the second training label;
and transcribing based on the fused label through the CTC loss function to obtain the predicted label.
In a second aspect, an embodiment of the present invention further provides a labeled user capability portrait analysis system, including:
an obtaining module for obtaining basic attribute information and historical course data of a target user; the basic attribute information includes school information, major information, and competition participation information; the historical course data includes: the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts, wherein the learning time set includes a plurality of target learning durations;
a prediction module for inputting the historical course data into a pre-trained user capability portrait estimation model, which estimates a learning capability label of the target user;
an adjustment module for generating a user capability portrait of the target user according to the basic attribute information and the learning capability label; the user capability portrait includes a plurality of user labels, and different user labels characterize the target user's traits in different dimensions;
wherein the user capability portrait estimation model includes a CNN network, a first RNN network, a second RNN network, an LSTM network, a hybrid pyramid structure network, and a CTC loss layer; the CNN network extracts historical learning features of the target user from the historical course data, and the first RNN network predicts historical learning labels of the target user from the historical learning features; the LSTM network predicts the target user's next-period learning features from the historical learning features; the hybrid pyramid structure network fuses the historical learning features with the next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label from the comprehensive learning features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the learning capability label of the target user.
Optionally, the system further comprises:
a recommending module for recommending, based on the user capability portrait, courses conforming to the user capability portrait to the target user.
The training method of the user capability portrait estimation model includes:
obtaining a training set, wherein the training set includes a plurality of training subsets corresponding to a plurality of training users, one subset per training user; each training subset includes training data for a plurality of subjects, and each piece of training data includes the training user's course-selection subject, course-selection time, learning time set, and repeated-learning count; the training user's learning time set includes a plurality of training learning durations;
inputting the training subsets into the CNN network, which extracts a first learning feature map for each training user from the training data of the plurality of subjects in that user's training subset, wherein the first learning feature map includes a plurality of first learning feature sequences, each representing the learning characteristics of one subject;
learning the plurality of first learning feature sequences through the first RNN network to predict a first training label of the training user;
predicting from the plurality of first learning feature sequences through the LSTM network to correspondingly obtain a plurality of second learning feature sequences;
forming the plurality of second learning feature sequences into a second learning feature map;
fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain a third learning feature map, wherein the third learning feature map includes a plurality of third learning feature sequences;
learning the plurality of third learning feature sequences through the second RNN network to predict a second training label of the training user;
performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain a predicted label;
and when the difference index between the predicted label and the first training label converges, determining that training of the user capability portrait estimation model is complete.
Optionally, the hybrid pyramid structure network includes a first pyramid structure and a second pyramid structure, and fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain the third learning feature map includes:
performing a dimension-reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension-reduced feature map;
performing a dimension-reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension-reduced feature map;
and using the second dimension-reduced feature map as a convolution kernel, performing a convolution operation on the first dimension-reduced feature map to obtain the third learning feature map.
Optionally, performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain the predicted label includes:
splicing the first training label and the second training label into a fused label of length M+N, where M is the length of the first training label and N is the length of the second training label;
and transcribing based on the fused label through the CTC loss function to obtain the predicted label.
Compared with the prior art, the embodiments of the present invention achieve the following beneficial effects:
An embodiment of the invention provides a labeled user capability portrait analysis method and system, the method including: obtaining basic attribute information and historical course data of a target user, the basic attribute information including school information, major information, and competition participation information, where the school information includes a school name, the major information includes a major name, and the competition participation information includes competition times and award results; the historical course data including the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts, the learning time set including a plurality of target learning durations; inputting the historical course data into a pre-trained user capability portrait estimation model, which estimates a learning capability label of the target user; and generating a user capability portrait of the target user according to the basic attribute information and the learning capability label, the portrait including a plurality of user labels that characterize the target user's traits in different dimensions.
The first RNN network predicts historical learning labels of the target user from the historical learning features; the LSTM network predicts the target user's next-period learning features from the historical learning features; the hybrid pyramid structure network fuses the historical learning features with the next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label from the comprehensive learning features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the target user's learning capability label. The resulting learning capability label accurately describes and reflects the target user's learning ability and receptiveness to the relevant knowledge. On this basis, a user capability portrait of the target user is generated from the basic attribute information and the learning capability label, further optimizing and adjusting the label, so that the portrait more accurately reflects the target user's real situation. The user capability portrait includes multiple learning capability labels and can characterize the user's abilities across multiple aspects and dimensions.
Drawings
FIG. 1 is a flowchart of a labeled user capability portrait analysis method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a user capability portrait estimation model according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: bus 500; receiver 501; processor 502; transmitter 503; memory 504; bus interface 505.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that terms indicating orientation or positional relationship are used merely to facilitate and simplify the description; they do not indicate or imply that the devices or elements referred to must have, be configured in, or operate in a particular orientation, and they are not to be construed as limiting the invention.
In the description of the present invention, it should be noted that "connected" is to be understood broadly: it may be a fixed connection, a detachable connection, or an integral connection; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium; or communication between two elements. The specific meaning of these terms in the present invention will be understood by those of ordinary skill in the art on a case-by-case basis.
The invention will now be described in further detail by way of specific embodiments with reference to the accompanying drawings.
Example 1
As shown in FIG. 1, an embodiment of the present invention provides a labeled user capability portrait analysis method, including:
s101: basic attribute information and historical course data of the target user are obtained.
The basic attribute information includes school information, major information, and competition participation information. The school information includes a school name, from which information such as the school's scale and ranking can be further obtained. The major information includes a major name, from which the ranking of that major at the school can be further obtained. The competition participation information includes competition times and award results. The award results include: no award, grand prize, first prize, second prize, third prize, and excellence award.
The historical course data includes the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts, wherein the learning time set includes a plurality of target learning durations, each representing the length of one continuous learning session of the target user.
S102: inputting the historical course data into the pre-trained user capability portrait estimation model, which estimates the learning capability label of the target user. The learning capability label characterizes the target user's learning situation and learning characteristics; for example, a label such as "beginner learner" or "advanced learner" characterizes the target user's mastery of and receptiveness to a course or class of courses.
S103: generating a user capability portrait of the target user according to the basic attribute information and the learning capability label.
The user capability portrait includes a plurality of user labels, and different user labels characterize the target user's traits in different dimensions. For example, a user capability portrait may consist of labels such as a course-mastery label (e.g., "beginner learner"), a school-tier label, and a competition label, which respectively characterize the target user's course mastery, school background, competition record, and so on.
In an embodiment of the present invention, the user capability portrait estimation model includes a CNN network, a first RNN network, a second RNN network, an LSTM network, a hybrid pyramid structure network, and a CTC loss layer. The CNN network extracts historical learning features of the target user from the historical course data, and the first RNN network predicts historical learning labels of the target user from those features. The LSTM network predicts the target user's next-period learning features from the historical learning features. The hybrid pyramid structure network fuses the historical learning features with the next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label from the comprehensive learning features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the target user's learning capability label. FIG. 2 shows a schematic diagram of the user capability portrait estimation model according to an embodiment of the present invention, with the data flow marked for both the use phase and the training phase.
Here, CNN stands for Convolutional Neural Network, RNN for Recurrent Neural Network, LSTM for Long Short-Term Memory network, and CTC loss for the Connectionist Temporal Classification loss function. With this technical scheme, the CNN network extracts historical learning features of the target user from the historical course data, and the first RNN network predicts historical learning labels from them; the LSTM network predicts the next-period learning features from the historical learning features; the hybrid pyramid structure network fuses the two sets of features into comprehensive learning features; the second RNN network predicts the next-period learning label from the comprehensive features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the learning capability label. The resulting learning capability label accurately describes and reflects the target user's learning ability and receptiveness to the relevant knowledge. On this basis, a user capability portrait is generated from the basic attribute information and the learning capability label, further optimizing the label, so that the portrait more accurately reflects the target user's real situation. The user capability portrait includes multiple learning capability labels and can characterize the user's abilities across multiple aspects and dimensions.
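To make the data flow above concrete, the following is a minimal PyTorch sketch of the estimation model's forward pass. The layer types, hidden sizes, and the replacement of the hybrid-pyramid fusion by a simple addition are illustrative assumptions for readability, not the patent's exact architecture; the CTC-based correction step is likewise omitted here and sketched separately below.

```python
import torch
import torch.nn as nn

class PortraitEstimator(nn.Module):
    """Sketch of the CNN / first RNN / LSTM / second RNN data flow."""
    def __init__(self, in_dim=4, feat=32, labels=16):
        super().__init__()
        self.cnn = nn.Conv1d(in_dim, feat, kernel_size=1)   # historical-feature extractor
        self.rnn1 = nn.GRU(feat, feat, batch_first=True)    # historical-label branch
        self.lstm = nn.LSTM(feat, feat, batch_first=True)   # next-period feature predictor
        self.rnn2 = nn.GRU(feat, feat, batch_first=True)    # next-period-label branch
        self.head1 = nn.Linear(feat, labels)
        self.head2 = nn.Linear(feat, labels)

    def forward(self, x):  # x: (batch, in_dim, subjects), one column per subject
        f1 = self.cnn(x).transpose(1, 2)               # first (historical) feature map
        hist_logits = self.head1(self.rnn1(f1)[0])     # historical learning labels
        f2, _ = self.lstm(f1)                          # next-period feature map
        fused = f1 + f2                                # stand-in for hybrid-pyramid fusion
        next_logits = self.head2(self.rnn2(fused)[0])  # next-period learning labels
        return hist_logits, next_logits                # corrected downstream via CTC step

model = PortraitEstimator()
hist, nxt = model(torch.randn(2, 4, 5))                # 2 users, 4 data fields, 5 subjects
```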
Optionally, after S103, the labeled user capability portrait analysis method further includes:
S104: recommending, based on the user capability portrait, courses conforming to the user capability portrait to the target user.
Specifically, courses corresponding to the learning capability labels in the user capability portrait may be selected from a database as courses conforming to the portrait. Alternatively, if a course simultaneously corresponds to L or more learning capability labels, it is determined to be a course conforming to the user capability portrait, where L is a set threshold and a positive integer no greater than K, and K is the number of learning capability labels in the user capability portrait, K being a positive integer greater than 0, e.g., K = 1, 2, ..., 10.
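As a minimal sketch of the "at least L matching labels" rule just described (the course-catalog format and label names are assumptions for illustration):

```python
def recommend(courses, portrait_labels, L=2):
    """Return the courses whose label set matches at least L of the
    K learning capability labels in the user capability portrait."""
    wanted = set(portrait_labels)
    return [c["name"] for c in courses if len(c["labels"] & wanted) >= L]

catalog = [
    {"name": "Algorithms II", "labels": {"advanced learner", "long sessions"}},
    {"name": "Intro to C", "labels": {"beginner learner"}},
]
print(recommend(catalog, ["advanced learner", "long sessions", "competition winner"]))
# -> ['Algorithms II']
```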
In an embodiment of the present invention, the historical course data may further include challenge pass data, competition data, exercise data, test evaluation data, and the like. The historical course data is stored in a queue, with its elements stored in order. The data format of both the historical course data and the basic attribute information may be a one-dimensional array whose elements are the items of the historical course data, for example [course-selection subject, course-selection time, learning time set, repeated-learning count], or [course-selection subject, course-selection time, learning time set, repeated-learning count, challenge pass data, competition data, exercise data, test evaluation data]. The target user's historical course data uses the same data format as each piece of training data in the training set. A missing element in the training data or historical course data may take the value 0 or null.
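A small sketch of this one-dimensional-array format, with assumed field names; missing elements are filled with 0 as the paragraph above allows:

```python
FIELDS = ["subject", "selection_time", "learning_times", "repeat_count",
          "challenge_data", "competition_data", "exercise_data", "evaluation_data"]

def to_record(raw: dict) -> list:
    """Flatten one user's course data into the fixed-order one-dimensional array."""
    return [raw.get(field, 0) for field in FIELDS]  # missing elements -> 0

queue = []  # historical course data is kept in an ordered queue
queue.append(to_record({"subject": "python", "selection_time": "2023-01-05",
                        "learning_times": [40, 25, 60], "repeat_count": 2}))
```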
In this way, the user capability portrait estimation model can extract useful implicit information about the target user from the historical course data and generate the user labels and user capability portrait from it, so that they accurately characterize the strength of the user's abilities.
On this basis, recommending courses matched to the target user's abilities according to the user capability portrait increases the probability that the target user selects and studies the recommended courses, improving both the reliability of the recommendations and the operating efficiency of the system.
Optionally, the training method of the user capability portrait estimation model includes:
the training method comprises the steps of obtaining a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each user corresponds to one training subset, each training subset comprises training data of a plurality of subjects, and each training data comprises a training user course selection subject, a training user course selection time, a training user learning time set and training user repeated learning times; the training user learning time set includes a plurality of training learning durations. The training learning duration represents the length of time that the training user is continuously learning each time.
inputting the training subsets into the CNN network, which extracts a first learning feature map for each training user from the training data of the plurality of subjects in that user's training subset; the first learning feature map includes a plurality of first learning feature sequences, each representing the learning characteristics of one subject, i.e., each sequence contains one or more learning features describing the training user's learning characteristics for that subject.
learning the plurality of first learning feature sequences through the first RNN network to predict a first training label of the training user; and predicting from the plurality of first learning feature sequences through the LSTM network to correspondingly obtain a plurality of second learning feature sequences.
forming the plurality of second learning feature sequences into a second learning feature map. In an embodiment of the present invention, the second learning feature sequences are ordered by the sequence number of their corresponding subject: in the second learning feature map, the first row is the second learning feature sequence of the course with subject number 1, the second row is that of the course with subject number 2, and so on.
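A one-function sketch of this row ordering, assuming the sequences arrive keyed by subject number:

```python
import torch

def stack_by_subject(seqs: dict) -> torch.Tensor:
    """seqs maps subject number -> second learning feature sequence;
    row i of the resulting feature map holds the sequence for subject i + 1."""
    return torch.stack([seqs[k] for k in sorted(seqs)], dim=0)

fmap = stack_by_subject({2: torch.randn(4), 1: torch.randn(4)})  # row 0 = subject 1
```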
In an embodiment of the present invention, each first learning feature sequence includes one or more first learning features characterizing the training user's learning characteristics for the subject; each second learning feature sequence includes one or more second learning features with the same role; and each third learning feature sequence includes one or more third learning features with the same role.
fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain a third learning feature map, wherein the third learning feature map includes a plurality of third learning feature sequences. Specifically, the hybrid pyramid structure network includes a first pyramid structure, a second pyramid structure, and optionally a convolution structure; fusing the two maps through the hybrid pyramid structure network to obtain the third learning feature map includes the following steps:
performing a dimension-reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension-reduced feature map;
performing a dimension-reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension-reduced feature map. In an embodiment of the present invention, the dimension-reduction operation may be a convolution operation;
and using the second dimension-reduced feature map as a convolution kernel, performing a convolution operation on the first dimension-reduced feature map to obtain the third learning feature map. Optionally, this step is implemented by the convolution structure: the convolution structure takes the second dimension-reduced feature map as the kernel and convolves the first dimension-reduced feature map to produce the third learning feature map.
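The following sketch illustrates these three steps with PyTorch, assuming 2-D feature maps, a strided averaging convolution as the dimension-reduction operation, and equal-sized inputs; none of these specifics come from the patent itself:

```python
import torch
import torch.nn.functional as F

def hybrid_pyramid_fuse(map1, map2):
    """map1, map2: (1, 1, H, W) learning feature maps."""
    k = torch.ones(1, 1, 3, 3) / 9.0         # assumed averaging kernel
    r1 = F.conv2d(map1, k, stride=2)         # first dimension-reduced feature map
    r2 = F.conv2d(map2, k, stride=2)         # second dimension-reduced feature map
    # Convolve r1 using r2 itself as the kernel (r2 must not exceed r1 in size).
    return F.conv2d(r1, r2, padding="same")  # third learning feature map

third = hybrid_pyramid_fuse(torch.randn(1, 1, 8, 8), torch.randn(1, 1, 8, 8))
```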
learning the plurality of third learning feature sequences through the second RNN network to predict a second training label of the training user.
In an embodiment of the present invention, if the dimensions of the third learning feature map do not match the input dimensions of the second RNN network, a dimension-reduction or dimension-raising operation is applied to the map as appropriate, by convolution or pooling. The second RNN network then learns the plurality of third learning feature sequences to predict the second training label.
performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain a predicted label. Specifically: splicing the first training label and the second training label into a fused label of length M+N that contains both labels, where M is the length of the first training label and N is the length of the second training label; then transcribing based on the fused label through the CTC loss function to obtain the predicted label. Concretely, the CTC loss function converts the series of label distributions produced by the recurrent layers (the probabilities of the second training label predicted by the second RNN network and of the first training label predicted by the first RNN network) into a final label sequence by finding the label combination with the highest probability. This may be done as follows: among the fused labels, the H target labels with the largest probability values are taken as the predicted label, where each target label is a first or second training label and H is a positive integer greater than 0, e.g., H = 1, 2, ..., 10.
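A minimal sketch of the splice-then-pick-top-H step just described; the greedy selection by probability stands in for a full CTC decode, and the label/probability pairs are assumed inputs:

```python
def fuse_and_transcribe(first_labels, second_labels, H=2):
    """first_labels, second_labels: lists of (label, probability) pairs of
    lengths M and N from the two RNN branches.  Splice them into a fused
    label of length M + N, then keep the H most probable entries."""
    fused = list(first_labels) + list(second_labels)  # length M + N
    fused.sort(key=lambda pair: pair[1], reverse=True)
    return [label for label, _ in fused[:H]]

print(fuse_and_transcribe([("beginner", 0.2), ("fast progress", 0.7)],
                          [("advanced", 0.9)]))
# -> ['advanced', 'fast progress']
```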
When the difference index between the predicted label and the first training label converges, training of the user capability portrait estimation model is determined to be complete, yielding the trained model.
In an embodiment of the present invention, the difference index is the Euclidean distance between the predicted label and the first training label, or the cosine of the angle between the feature vector formed by the predicted label and the feature vector formed by the first training label.
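Both variants of the difference index, plus a simple convergence test on its training history (the tolerance is an assumed hyperparameter):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def converged(index_history, tol=1e-4):
    """Training stops once the difference index stops changing materially."""
    return len(index_history) >= 2 and abs(index_history[-1] - index_history[-2]) < tol
```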
In an embodiment of the present invention, generating the user capability portrait of the target user from the basic attribute information and the learning capability label proceeds as follows: the basic attribute information is input into a pre-trained CNN network, which extracts basic-condition features of the target user; a trained RNN network predicts a basic-condition label from those features; the basic-condition label and the learning capability label form a label group; and, through the CTC loss, a final set of learning capability labels that accurately reflects the target user's learning characteristics (such as learning progress, learning ability, and degree of mastery) is obtained from the label group. The label vector formed by these learning capability labels is the user capability portrait. The procedure is analogous to obtaining the predicted label in the embodiment above, replacing only the inputs and adjusting the dimensions of the model and data accordingly.
With this scheme, the target user's characteristic information is obtained from multiple aspects and dimensions, and the deep information influencing the user's characteristics is mined. The user capability portrait extracted from this mutually fused, coordinated, and interacting information can accurately characterize the user's learning characteristics. For example, even if a user graduated from a relatively low-ranked school, if the user's learning durations are long, the repeated-learning counts are high, and the selected courses are advanced, then the user's learning ability is strong, and higher-level courses should be recommended to match the user's real needs.
Example 2
Based on the above labeled user capability portrait analysis method, an embodiment of the present invention further provides a labeled user capability portrait analysis system for executing the method, the system including:
an obtaining module for obtaining basic attribute information and historical course data of the target user. The basic attribute information includes school information, major information, and competition participation information. The historical course data includes: the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts.
a prediction module for inputting the historical course data into the pre-trained user capability portrait estimation model, which estimates the learning capability label of the target user.
an adjustment module for generating a user capability portrait of the target user according to the basic attribute information and the learning capability label. The user capability portrait includes a plurality of user labels, and different user labels characterize the target user's traits in different dimensions.
Optionally, the system further comprises:
a recommending module for recommending, based on the user capability portrait, courses conforming to the user capability portrait to the target user.
The specific manner in which the modules of the system in the above embodiment perform their operations has already been described in detail in the method embodiments and will not be repeated here.
An embodiment of the present invention further provides an electronic device, as shown in FIG. 3, including a memory 504, a processor 502, and a computer program stored in the memory 504 and runnable on the processor 502, where the processor 502, when executing the program, implements the steps of any of the labeled user capability portrait analysis methods described above.
In FIG. 3, a bus architecture (represented by bus 500) is shown. Bus 500 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors (represented by processor 502) and memory (represented by memory 504). Bus 500 may also link various other circuits, such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. Bus interface 505 provides an interface between bus 500 and the receiver 501 and transmitter 503. The receiver 501 and transmitter 503 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing bus 500 and general processing, while the memory 504 may store data used by the processor 502 in performing operations.
Embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any of the labeled user capability portrait analysis methods described above.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required for such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language; it will be appreciated that the teachings described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim.
The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.

Claims (8)

1. A labeled user image analysis method, comprising:
obtaining basic attribute information and historical course data of a target user; the basic attribute information includes school information, major information, and competition participation information; the historical course data includes: the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts, wherein the learning time set includes a plurality of target learning durations;
inputting the historical course data into a pre-trained user portrait estimation model, which estimates a learning label of the target user;
generating a user portrait of the target user according to the basic attribute information and the learning label; the user portrait includes a plurality of user labels, and different user labels characterize the target user's traits in different dimensions;
recommending, based on the user portrait, courses conforming to the user portrait to the target user;
wherein the user portrait estimation model includes a CNN network, a first RNN network, a second RNN network, an LSTM network, a hybrid pyramid structure network, and a CTC loss layer; the CNN network extracts historical learning features of the target user from the historical course data, and the first RNN network predicts historical learning labels of the target user from the historical learning features; the LSTM network predicts the target user's next-period learning features from the historical learning features; the hybrid pyramid structure network fuses the historical learning features with the next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label from the comprehensive learning features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the learning label of the target user.
2. The method of claim 1, wherein the training method of the user portrait estimation model includes:
obtaining a training set, wherein the training set includes a plurality of training subsets corresponding to a plurality of training users, one subset per training user; each training subset includes training data for a plurality of subjects, and each piece of training data includes the training user's course-selection subject, course-selection time, learning time set, and repeated-learning count; the training user's learning time set includes a plurality of training learning durations;
inputting the training subsets into the CNN network, which extracts a first learning feature map for each training user from the training data of the plurality of subjects in that user's training subset, wherein the first learning feature map includes a plurality of first learning feature sequences, each representing the learning characteristics of one subject;
learning the plurality of first learning feature sequences through the first RNN network to predict a first training label of the training user;
predicting from the plurality of first learning feature sequences through the LSTM network to correspondingly obtain a plurality of second learning feature sequences;
forming the plurality of second learning feature sequences into a second learning feature map;
fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain a third learning feature map, wherein the third learning feature map includes a plurality of third learning feature sequences;
learning the plurality of third learning feature sequences through the second RNN network to predict a second training label of the training user;
performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain a predicted label;
and when the difference index between the predicted label and the first training label converges, determining that training of the user portrait estimation model is complete.
3. The method of claim 2, wherein the hybrid pyramid structure network comprises a first pyramid structure and a second pyramid structure, and fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain the third learning feature map includes:
performing a dimension-reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension-reduced feature map;
performing a dimension-reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension-reduced feature map;
and using the second dimension-reduced feature map as a convolution kernel, performing a convolution operation on the first dimension-reduced feature map to obtain the third learning feature map.
4. The method of claim 2, wherein performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain the predicted label includes:
splicing the first training label and the second training label into a fused label of length M+N, where M is the length of the first training label and N is the length of the second training label;
and transcribing based on the fused label through the CTC loss function to obtain the predicted label.
5. A labeled user image analysis system, comprising:
an obtaining module for obtaining basic attribute information and historical course data of a target user; the basic attribute information includes school information, major information, and competition participation information; the historical course data includes: the target user's course-selection subjects, course-selection times, learning time set, and repeated-learning counts, wherein the learning time set includes a plurality of target learning durations;
a prediction module for inputting the historical course data into a pre-trained user portrait estimation model, which estimates a learning label of the target user;
an adjustment module for generating a user portrait of the target user according to the basic attribute information and the learning label; the user portrait includes a plurality of user labels, and different user labels characterize the target user's traits in different dimensions;
a recommending module for recommending, based on the user portrait, courses conforming to the user portrait to the target user;
wherein the user portrait estimation model includes a CNN network, a first RNN network, a second RNN network, an LSTM network, a hybrid pyramid structure network, and a CTC loss layer; the CNN network extracts historical learning features of the target user from the historical course data, and the first RNN network predicts historical learning labels of the target user from the historical learning features; the LSTM network predicts the target user's next-period learning features from the historical learning features; the hybrid pyramid structure network fuses the historical learning features with the next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label from the comprehensive learning features; and the CTC loss layer corrects the next-period learning label against the historical learning labels to obtain the learning label of the target user.
6. The system of claim 5, wherein the training method of the user portrait estimation model comprises:
acquiring a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each training user corresponding to one training subset; each training subset comprises training data of a plurality of subjects, and each piece of training data comprises a subject selected by the training user, the training user's course selection time, the training user's learning time set, and the training user's repeated learning count; the training user's learning time set comprises a plurality of training learning time periods;
inputting the plurality of training subsets into the CNN network, and extracting a first learning feature map of each training user based on the training data of the plurality of subjects in that training user's training subset, wherein the first learning feature map comprises a plurality of first learning feature sequences, and each first learning feature sequence represents the learning characteristics of one subject;
learning the plurality of first learning feature sequences through the first RNN network to predict a first training label of the training user;
predicting based on the plurality of first learning feature sequences through the LSTM network to correspondingly obtain a plurality of second learning feature sequences;
forming the plurality of second learning feature sequences into a second learning feature map;
fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain a third learning feature map, wherein the third learning feature map comprises a plurality of third learning feature sequences;
learning the plurality of third learning feature sequences through the second RNN network to predict a second training label of the training user;
performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain a predicted label; and
determining that training of the user portrait estimation model is finished when the difference index between the predicted label and the first training label converges.
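For illustration only: a hypothetical training loop for claim 6's stopping rule, reusing the UserPortraitEstimator skeleton sketched after claim 5; the patent does not define the "difference index", so a cross-entropy against the first training label stands in for it, and the data, optimizer, and tolerance are all assumptions.

```python
import torch
import torch.nn as nn

model = UserPortraitEstimator()                 # skeleton sketched after claim 5
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

course_data = torch.randn(16, 40)               # dummy batch: 16 training users, 40 stats each
first_labels = torch.randint(0, 32, (16,))      # dummy first training labels
prev_loss = float("inf")

for epoch in range(100):
    hist_pred, next_pred = model(course_data)
    # "Difference index" between predicted label and first training label,
    # approximated here by cross-entropy on the next-period head.
    loss = nn.functional.cross_entropy(next_pred, first_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if abs(prev_loss - loss.item()) < 1e-4:     # convergence: training finished
        break
    prev_loss = loss.item()
```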
7. The system of claim 6, wherein the hybrid pyramid structure network comprises a first pyramid structure and a second pyramid structure, and wherein fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain the third learning feature map comprises:
performing a dimension-reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension-reduction feature map;
performing a dimension-reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension-reduction feature map; and
performing a convolution operation on the first dimension-reduction feature map, with the second dimension-reduction feature map as the convolution kernel, to obtain the third learning feature map.
8. The system of claim 6, wherein performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain the predicted label comprises:
splicing the first training label and the second training label into a fused label of length M+N, where M is the length of the first training label and N is the length of the second training label; and
performing transcription based on the fused label through the CTC loss function to obtain the predicted label.
CN202310102294.8A 2023-02-13 2023-02-13 Method and system for analyzing tagged user capability portrayal Active CN115830405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310102294.8A CN115830405B (en) 2023-02-13 2023-02-13 Method and system for analyzing tagged user capability portrayal

Publications (2)

Publication Number Publication Date
CN115830405A CN115830405A (en) 2023-03-21
CN115830405B (en) 2023-09-22

Family

ID=85521069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310102294.8A Active CN115830405B (en) 2023-02-13 2023-02-13 Method and system for analyzing tagged user capability portrayal

Country Status (1)

Country Link
CN (1) CN115830405B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423442A (en) * 2017-08-07 2017-12-01 火烈鸟网络(广州)股份有限公司 Method and system, storage medium and computer equipment are recommended in application based on user's portrait behavioural analysis
KR102265573B1 (en) * 2020-09-29 2021-06-16 주식회사 팀기원매스 Method and system for reconstructing mathematics learning curriculum based on artificial intelligence
CN114722281A (en) * 2022-04-07 2022-07-08 平安科技(深圳)有限公司 Training course configuration method and device based on user portrait and user course selection behavior

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20220415195A1 (en) * 2022-02-18 2022-12-29 Beijing Baidu Netcom Science Technology Co., Ltd. Method for training course recommendation model, method for course recommendation, and apparatus

Non-Patent Citations (2)

Title
"DenseNet-CTC: An end-to-end RNN-free architecture for context-free string recognition";Hongjian Zhan等;《Computer Vision and Image Understanding》;第204卷;全文 *
基于大数据和机器学习的大学生选课推荐模型研究;张海华;;信息系统工程(第04期);全文 *

Similar Documents

Publication Publication Date Title
CN110147456B (en) Image classification method and device, readable storage medium and terminal equipment
US20210256354A1 (en) Artificial intelligence learning-based user knowledge tracing system and operating method thereof
CN112116092B (en) Interpretable knowledge level tracking method, system and storage medium
US20190354887A1 (en) Knowledge graph based learning content generation
CN111209474A (en) Online course recommendation method and device, computer equipment and storage medium
CN111369535B (en) Cell detection method
CN111460101A (en) Knowledge point type identification method and device and processor
CN109189922B (en) Comment evaluation model training method and device
CN111428448A (en) Text generation method and device, computer equipment and readable storage medium
CN114201684A (en) Knowledge graph-based adaptive learning resource recommendation method and system
Lu et al. CMKT: Concept map driven knowledge tracing
CN110222838A (en) Deep neural network and its training method, device, electronic equipment and storage medium
CN117035074B (en) Multi-modal knowledge generation method and device based on feedback reinforcement
CN115830405B (en) Method and system for analyzing tagged user capability portrayal
CN116228361A (en) Course recommendation method, device, equipment and storage medium based on feature matching
CN113609402B (en) Intelligent recommendation method for industry friend-making exchange information based on big data analysis
CN116108195A (en) Dynamic knowledge graph prediction method and device based on time sequence element learning
CN113705092B (en) Disease prediction method and device based on machine learning
CN113742591B (en) Learning partner recommendation method and device, electronic equipment and storage medium
CN113255701B (en) Small sample learning method and system based on absolute-relative learning framework
CN115631008B (en) Commodity recommendation method, device, equipment and medium
CN112231373B (en) Knowledge point data processing method, apparatus, device and computer readable medium
CN115147353A (en) Defect detection model training method, device, equipment, medium and program product
CN112597294A (en) Exercise intelligent pushing method
CN110334353A (en) Analysis method, device, equipment and the storage medium of word order recognition performance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant