CN115830405A - Tagged user ability portrait analysis method and system

Tagged user ability portrait analysis method and system

Info

Publication number: CN115830405A (granted as CN115830405B)
Application number: CN202310102294.8A
Authority: CN (China)
Inventors: 郑楠, 曹鹏宇, 杨连增
Assignee: Guoxin Blue Bridge Education Technology Co ltd
Filing date / priority date: 2023-02-13
Legal status: Active (granted)

Classifications

  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a tagged user ability portrait analysis method and system. The learning ability label estimated for the target user accurately depicts the target user's learning ability and degree of mastery of the relevant knowledge; on this basis, the label is further optimized and adjusted, so that the resulting user ability portrait more accurately reflects the target user's real situation.

Description

Tagged user ability portrait analysis method and system
Technical Field
The invention relates to the technical field of computers, in particular to a labeled user ability portrait analysis method and system.
Background
A user portrait, also called a user persona, is an effective tool for profiling target users and for connecting user needs with design direction, and it is widely applied in many fields. A user portrait is a tagged user model abstracted from information such as the user's social attributes, living habits and behaviors; it comprises a plurality of user tags, each representing certain characteristics of the user. In practice, the user's attributes and behaviors are linked with the expected data in the plainest, most everyday terms, forming a virtual representation of the actual user.
In an era of rapidly developing information technology, users do many things on the internet, such as learning and shopping. Recommending suitable products and courses to a user according to the user's characteristics (the user portrait) therefore has great social significance and economic value.
At present, a user's portrait is built mainly from basic attribute information labels filled in by the user. In practice, however, the information a user fills in is not necessarily accurate: it may be false or entered carelessly. Moreover, basic information does not necessarily characterize the user accurately. For example, a student may have graduated from a lower-ranked school yet have strong ability and a high level of knowledge; recommending a basic course to that student on the strength of the school attribute alone would produce a recommendation that does not suit the student.
Disclosure of Invention
The present invention provides a labeled user ability portrait analysis method and system, which are used to solve the above problems in the prior art.
In a first aspect, an embodiment of the present invention provides a tagged user capability representation analysis method, including:
acquiring basic attribute information and historical course data of a target user; the basic attribute information comprises school information, professional information and competition information; the historical course data includes: the target user course selection subject, the target user course selection time, a target user learning time set and the target user repeated learning times, wherein the target user learning time set comprises a plurality of target learning durations;
inputting the historical course data into a pre-trained user ability portrait estimation model, and estimating a learning ability label of the target user by the user ability portrait estimation model;
generating a user ability portrait of a target user according to the basic attribute information and the learning ability label; the user capability portrait comprises a plurality of user tags, and different user tags are used for representing the characteristics of the target user in different dimensions;
the user capability portrait estimation model comprises a CNN network, a first RNN network, a second RNN network, an LSTM network, a mixed pyramid structure network and a CTC loss layer; the CNN network is used for extracting the historical learning characteristics of the target user according to the historical course data, and the first RNN network is used for predicting the historical learning labels of the target user according to the historical learning characteristics; the LSTM network is used for predicting the learning characteristics of the target user in the next time period based on the historical learning characteristics of the target user; the mixed pyramid structure is used for fusing the historical learning features and the learning features of the next time period to obtain comprehensive learning features; the second RNN is used for predicting a next time period learning label of the target user based on the comprehensive learning characteristics; and the CTC loss layer is used for correcting the learning label of the next time period based on the historical learning label to obtain the learning ability label of the target user.
Optionally, the method further includes:
and recommending courses which are in accordance with the user capability representation to the target user based on the user capability representation.
Optionally, the training method of the user capability portrait estimation model includes:
obtaining a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each user corresponds to one training subset, each training subset comprises training data of a plurality of subjects, and each training data comprises a course selection subject of the training user, course selection time of the training user, a training user learning time set and repeated learning times of the training user; the training user learning time set comprises a plurality of training learning durations;
inputting a plurality of training subsets into a CNN network, extracting a first learning feature map of each training user by the CNN network based on training data of a plurality of subjects in the training subsets of each training user, wherein the first learning feature map comprises a plurality of first learning feature sequences, and each first learning feature sequence represents a learning characteristic of one subject;
learning the first learning feature sequences through a first RNN network to predict a first training label of a training user;
predicting through an LSTM network based on a plurality of first learning feature sequences to correspondingly obtain a plurality of second learning feature sequences;
forming a second learning feature map by using the plurality of second learning feature sequences;
fusing the first learning feature map and the second learning feature map through the mixed pyramid structure to obtain a third learning feature map, wherein the third learning feature map comprises a plurality of third learning feature sequences;
learning the third learning feature sequences through a second RNN network to predict a second training label of the training user;
performing fusion transcription on the basis of the first training label and the second training label through a CTC loss layer to obtain a prediction label;
and when the difference index of the predicted label and the first training label converges, determining that training of the user ability portrait estimation model is complete.
Optionally, the hybrid pyramid structure network includes a first pyramid structure and a second pyramid structure; fusing the first learning feature map and the second learning feature map by the hybrid pyramid structure to obtain a third learning feature map, including:
performing dimension reduction operation on the first learning feature map through a first pyramid structure to obtain a first dimension reduction feature map;
performing dimension reduction operation on the second learning feature map through a second pyramid structure to obtain a second dimension reduction feature map;
and performing convolution operation on the first dimension reduction feature map by taking the second dimension reduction feature map as a kernel to obtain a third learning feature map.
Optionally, the obtaining a predicted tag by performing fusion transcription on the CTC loss layer based on the first training tag and the second training tag includes:
splicing the first training label and the second training label to form a fusion label with the length of M + N; m is the length of the first training label, and N is the length of the second training label;
and transcribing based on the fusion label through a CTC loss function to obtain a prediction label.
In a second aspect, an embodiment of the present invention further provides a tagged user ability representation analysis system, including:
the acquisition module is used for acquiring basic attribute information and historical course data of a target user; the basic attribute information comprises school information, professional information and competition information; the historical course data includes: the target user course selection subject, the target user course selection time, a target user learning time set and the target user repeated learning times, wherein the target user learning time set comprises a plurality of target learning durations;
the prediction module is used for inputting the historical curriculum data into a pre-trained user ability portrait estimation model, and the user ability portrait estimation model estimates a learning ability label of the target user;
the adjusting module is used for generating a user ability portrait of a target user according to the basic attribute information and the learning ability label; the user capability portrait comprises a plurality of user tags, and different user tags are used for representing the characteristics of the target user in different dimensions;
the user capability portrait estimation model comprises a CNN network, a first RNN network, a second RNN network, an LSTM network, a mixed pyramid structure network and a CTC loss layer; the CNN network is used for extracting the historical learning characteristics of the target user according to the historical course data, and the first RNN network is used for predicting the historical learning labels of the target user according to the historical learning characteristics; the LSTM network is used for predicting the learning characteristics of the target user in the next time period based on the historical learning characteristics of the target user; the mixed pyramid structure is used for fusing the historical learning features and the learning features of the next time period to obtain comprehensive learning features; the second RNN is used for predicting a next time period learning label of the target user based on the comprehensive learning characteristics; and the CTC loss layer is used for correcting the learning label of the next time period based on the historical learning label to obtain the learning ability label of the target user.
Optionally, the system further includes:
and the recommending module is used for recommending courses which accord with the user capability portrait to the target user based on the user capability portrait.
The training method of the user ability portrait estimation model comprises the following steps:
obtaining a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each user corresponds to one training subset, each training subset comprises training data of a plurality of subjects, and each training data comprises a course selection subject of the training user, course selection time of the training user, a training user learning time set and repeated learning times of the training user; the training user learning time set comprises a plurality of training learning durations;
inputting a plurality of training subsets into a CNN network, extracting a first learning feature map of each training user by the CNN network based on training data of a plurality of subjects in the training subsets of each training user, wherein the first learning feature map comprises a plurality of first learning feature sequences, and each first learning feature sequence represents a learning characteristic of one subject;
learning the first learning feature sequences through a first RNN network to predict a first training label of a training user;
predicting through an LSTM network based on the first learning feature sequences to correspondingly obtain second learning feature sequences;
forming a second learning feature map by using the plurality of second learning feature sequences;
fusing the first learning feature map and the second learning feature map through the mixed pyramid structure to obtain a third learning feature map, wherein the third learning feature map comprises a plurality of third learning feature sequences;
learning the third learning feature sequences through a second RNN network to predict a second training label of the training user;
performing fusion transcription on the basis of the first training label and the second training label through a CTC loss layer to obtain a prediction label;
and when the difference index of the prediction label and the first training label converges, determining that training of the user ability portrait estimation model is complete.
Optionally, the hybrid pyramid structure network includes a first pyramid structure and a second pyramid structure; fusing the first learning feature map and the second learning feature map by the hybrid pyramid structure to obtain a third learning feature map, including:
performing dimension reduction operation on the first learning feature map through a first pyramid structure to obtain a first dimension reduction feature map;
performing dimension reduction operation on the second learning feature map through a second pyramid structure to obtain a second dimension reduction feature map;
and performing convolution operation on the first dimension reduction feature map by taking the second dimension reduction feature map as a kernel to obtain a third learning feature map.
Optionally, the obtaining a prediction label by performing fusion transcription on the CTC loss layer based on the first training label and the second training label includes:
splicing the first training label and the second training label to form a fusion label with the length of M + N; m is the length of the first training label, and N is the length of the second training label;
and (4) transcribing based on the fusion tag through a CTC loss function to obtain a prediction tag.
Compared with the prior art, the embodiment of the invention achieves the following beneficial effects:
the embodiment of the invention also provides a labeled user ability portrait analysis method and a labeled user ability portrait analysis system, wherein the method comprises the following steps: obtaining basic attribute information and historical course data of a target user; the basic attribute information comprises school information, professional information and competition information; the school information comprises a school name, the professional information comprises a professional name, and the competition information comprises competition time and prize winning conditions; the historical course data includes: the method comprises the steps that a target user course selection subject, target user course selection time, a target user learning time set and target user repeated learning times are obtained, wherein the target user learning time set comprises a plurality of target learning durations; inputting the historical course data into a pre-trained user ability portrait estimation model, and estimating a learning ability label of the target user by the user ability portrait estimation model; generating a user ability portrait of a target user according to the basic attribute information and the learning ability label; the user capability representation includes a plurality of user tags, with different user tags being used to characterize the target user in different dimensions.
In this scheme, the CNN network extracts the target user's historical learning features from the historical course data, and the first RNN network predicts the target user's historical learning labels from those features; the LSTM network predicts the target user's next-period learning features based on the historical learning features; the mixed pyramid structure fuses the historical and next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label based on the comprehensive learning features; and the CTC loss layer corrects the next-period learning label based on the historical learning labels to obtain the target user's learning ability label. The obtained learning ability label can accurately depict the target user's learning ability and degree of mastery of the relevant knowledge. On this basis, the user ability portrait of the target user is generated from the basic attribute information together with the learning ability label, further optimizing and adjusting the label, so that the resulting user ability portrait more accurately reflects the target user's real situation. The user ability portrait comprises a plurality of learning ability labels and can describe the user's ability characteristics from multiple aspects and dimensions.
Drawings
FIG. 1 is a flow chart of a tagged user capability representation analysis method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a user ability portrait estimation model according to an embodiment of the present invention;
fig. 3 is a schematic block structure diagram of an electronic device according to an embodiment of the present invention.
The labels in the figure are: a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that certain terms of orientation or positional relationship are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, it should be noted that "connected" is to be understood broadly: for example, it may be a fixed, detachable or integral connection; it can be a mechanical or electrical connection; and it may be a direct connection, an indirect connection through an intervening medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
The invention is described in further detail below by means of specific embodiments and with reference to the attached drawings.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a labeled user ability portrait analysis method, including:
s101: and obtaining basic attribute information and historical course data of the target user.
The basic attribute information comprises school information, professional information and competition information. The school information includes a school name, from which further information such as school scale and school ranking can be obtained. The professional information comprises a professional name, from which the ranking of that major at the school can be obtained. The competition information comprises competition time and award status, where the award status is one of: no award, special prize, first prize, second prize, third prize, or excellence award.
The historical course data comprises the target user's course selection subjects, course selection times, a learning time set and repeated learning counts, wherein the learning time set comprises a plurality of target learning durations, each representing the length of one continuous learning session of the target user.

S102: inputting the historical course data into a pre-trained user ability portrait estimation model, which estimates the target user's learning ability label.

The learning ability labels characterize the target user's learning situation and learning characteristics; for example, labels such as "beginner learner" and "advanced learner" characterize the target user's degree of mastery and receptiveness for a certain course or class of courses.
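As a concrete illustration of S102, the following is a minimal sketch of packing one user's historical course data and querying a pretrained model. The checkpoint name, field layout and tensor shapes are all assumptions for illustration; the patent does not define a concrete API.

```python
# Hypothetical S102 inference call; field layout and file name are assumed.
import torch

# One row per course-selection subject:
# [subject id, course-selection time, learning durations..., repeat count]
record = torch.tensor([[3.0, 20230110.0, 1.5, 0.5, 2.0],
                       [7.0, 20230201.0, 0.8, 1.2, 1.0]])

model = torch.load("ability_portrait_model.pt")  # assumed pretrained checkpoint
model.eval()
with torch.no_grad():
    ability_labels = model(record.unsqueeze(0))  # estimated learning ability labels
```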
S103: and generating a user capability portrait of the target user according to the basic attribute information and the learning capability label.
The user capability portrait comprises a plurality of user tags, and different user tags characterize the target user in different dimensions. For example, a user ability portrait might be: advanced learner, first-tier institution, first prize in a major competition. Each of these is a user tag, characterizing the target user in terms of course mastery, school background, competition performance and the like.
In the embodiment of the invention, the user capability portrait estimation model comprises a CNN network, a first RNN network, a second RNN network, an LSTM network, a mixed pyramid structure network and a CTC loss layer. The CNN network extracts the historical learning features of the target user from the historical course data, and the first RNN network predicts the target user's historical learning labels from those features. The LSTM network predicts the target user's next-period learning features based on the historical learning features. The mixed pyramid structure fuses the historical and next-period learning features to obtain comprehensive learning features; the second RNN network predicts the target user's next-period learning label based on the comprehensive learning features; and the CTC loss layer corrects the next-period learning label based on the historical learning labels to obtain the target user's learning ability label. As shown in FIG. 2, FIG. 2 is a schematic diagram of the structure of the user capability portrait estimation model provided by an embodiment of the present invention; it plots the data flow of the model during both the use (inference) phase and the training phase.
Here CNN stands for Convolutional Neural Network, RNN for Recurrent Neural Network, LSTM for Long Short-Term Memory, and CTC loss for Connectionist Temporal Classification loss. By adopting this technical scheme, the CNN network extracts the target user's historical learning features from the historical course data, and the first RNN network predicts the historical learning labels from those features; the LSTM network predicts the target user's next-period learning features based on the historical learning features; the mixed pyramid structure fuses the historical and next-period learning features to obtain comprehensive learning features; the second RNN network predicts the next-period learning label based on the comprehensive learning features; and the CTC loss layer corrects the next-period learning label based on the historical learning labels to obtain the target user's learning ability label. The obtained learning ability label can accurately depict the target user's learning ability and degree of mastery of the relevant knowledge. On this basis, the user ability portrait generated from the basic attribute information and the learning ability label further optimizes the label, so that the resulting portrait more accurately reflects the target user's real situation. The user ability portrait comprises a plurality of learning ability labels, describing the user's ability characteristics from multiple aspects and dimensions.
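To make the data flow concrete, the following is a minimal PyTorch sketch of the described pipeline. It is illustrative only: layer sizes, the choice of GRU for the two RNNs, and especially the simple additive stand-in for the mixed pyramid fusion are assumptions, since the patent does not fix these details; the CTC loss layer is applied outside this module to the two returned label distributions.

```python
# Minimal PyTorch sketch of the described architecture (illustrative only).
import torch
import torch.nn as nn

class AbilityPortraitModel(nn.Module):
    def __init__(self, feat_dim=64, n_labels=32):
        super().__init__()
        # CNN: extracts one feature row per course-selection subject
        self.cnn = nn.Sequential(
            nn.Conv1d(1, feat_dim, kernel_size=3, padding=1), nn.ReLU())
        # First RNN: predicts historical learning labels
        self.rnn1 = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head1 = nn.Linear(feat_dim, n_labels)
        # LSTM: predicts next-period learning features
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        # Second RNN: predicts next-period labels from the fused features
        self.rnn2 = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head2 = nn.Linear(feat_dim, n_labels)

    def forward(self, course_data):
        # course_data: (batch, n_subjects, record_len) historical course records
        b, s, r = course_data.shape
        f1 = self.cnn(course_data.reshape(b * s, 1, r)).mean(-1)
        f1 = f1.reshape(b, s, -1)                    # first learning feature map
        hist_labels = self.head1(self.rnn1(f1)[0])   # historical learning labels
        f2, _ = self.lstm(f1)                        # next-period feature map
        fused = f1 + f2      # placeholder for the mixed pyramid fusion step
        next_labels = self.head2(self.rnn2(fused)[0])
        return hist_labels, next_labels
```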
Optionally, after S103, the labeled user ability portrait analysis method further includes:
s104: and recommending courses which are in accordance with the user capability representation to the target user based on the user capability representation.
Specifically, courses corresponding to the learning ability labels in the user ability portrait may be selected from the database as courses matching the portrait. Further: if a course corresponds to more than L learning ability labels simultaneously, it is determined to be a course that matches the user ability portrait, where L is a set threshold and a positive integer less than or equal to K, and K is the number of learning ability labels included in the user ability portrait (K a positive integer greater than 0, for example K = 1, 2, ..., 10).
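A minimal sketch of this matching rule, assuming a simple label-set data model (all names here are invented for illustration):

```python
# A course qualifies when it shares more than L learning-ability labels
# with the user ability portrait (threshold L as described above).
def matching_courses(portrait_labels, course_catalog, L=1):
    portrait = set(portrait_labels)
    return [name for name, labels in course_catalog.items()
            if len(portrait & set(labels)) > L]

catalog = {"Advanced Algorithms": {"advanced", "algorithms", "contest"},
           "Intro to Python": {"beginner", "python"}}
print(matching_courses(["advanced", "contest", "algorithms"], catalog, L=1))
# -> ['Advanced Algorithms']
```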
In an embodiment of the present invention, the historical course data may further include challenge pass data, competition data, question-making data, test evaluation data, and the like. The historical course data may be stored in a queue, with its elements enqueued in order. The data format of the historical course data and of the basic attribute information may be a one-dimensional array whose elements are the individual fields; for example, the historical course data may be [course selection subject, course selection time, learning time set, repeated learning count], or [course selection subject, course selection time, learning time set, repeated learning count, challenge pass data, competition data, question-making data, test evaluation data]. The target user's historical course data uses the same data format as each training record in the training set. If an element is absent from the training data or the historical course data, its value may be 0 or null.
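A sketch of this one-dimensional record layout, with the 0-for-missing convention from the text (all values invented):

```python
# Historical course data as a flat record; absent fields take the value 0.
record = [
    12,           # course-selection subject (id)
    20230213,     # course-selection time
    [1.5, 0.75],  # learning time set: duration of each continuous session
    3,            # repeated learning count
    0,            # challenge pass data (absent)
    0,            # competition data (absent)
    145,          # question-making data
    88,           # test evaluation data
]
```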
In this way, useful implicit information of a target user can be extracted from historical course data through the user ability portrait estimation model, and a user label and a user ability portrait are generated based on the implicit information, so that the user label and the user ability portrait can accurately represent the ability strength of the user.
On this basis, recommending courses that match the target user's ability according to the user ability portrait increases the probability that the target user selects and learns the recommended courses, improves the reliability of the recommendation, and improves the operating efficiency of the system.

Optionally, the training method of the user capability portrait estimation model includes:
obtaining a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each user corresponds to one training subset, each training subset comprises training data of a plurality of subjects, and each training data comprises the subject selected by the training user, the training user's course selection time, a training user learning time set and the training user's repeated learning count; the training user learning time set comprises a plurality of training learning durations, each representing the length of one continuous learning session of the training user.
A plurality of training subsets are input into the CNN network, which extracts a first learning feature map for each training user based on the training data of the plurality of subjects in that user's training subset. The first learning feature map comprises a plurality of first learning feature sequences; each first learning feature sequence corresponds to one subject and comprises one or more learning features describing the training user's learning characteristics for that subject.
The plurality of first learning feature sequences are learned through the first RNN network to predict a first training label of the training user. Prediction is then performed through the LSTM network based on the first learning feature sequences, correspondingly obtaining a plurality of second learning feature sequences.
The plurality of second learning feature sequences form a second learning feature map. In the embodiment of the present invention, the second learning feature sequences are arranged into the second learning feature map by the serial number of their corresponding subject: the second learning feature sequence of the course with subject serial number 1 is used as the first row, the sequence of the course with subject serial number 2 as the second row, and so on.
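A short sketch of this row ordering, assuming each LSTM output is a fixed-length feature vector keyed by its subject serial number:

```python
# Rows of the second learning feature map follow the subject serial numbers.
import torch

seq_by_subject = {2: torch.randn(64), 1: torch.randn(64), 3: torch.randn(64)}
second_feature_map = torch.stack(
    [seq_by_subject[k] for k in sorted(seq_by_subject)])  # row 0 = subject 1, ...
```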
In an embodiment of the present invention, each first learning feature sequence comprises one or more first learning features, which characterize the training user's learning characteristics for the subject; likewise, each second learning feature sequence comprises one or more second learning features, and each third learning feature sequence comprises one or more third learning features, with the same meaning.
The first learning feature map and the second learning feature map are fused through the mixed pyramid structure to obtain a third learning feature map, which comprises a plurality of third learning feature sequences. Specifically, the hybrid pyramid structure network includes a first pyramid structure, a second pyramid structure and, optionally, a convolution structure. Fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure to obtain the third learning feature map includes:
and performing dimension reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension reduction feature map.
And performing dimension reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension reduction feature map.
In an embodiment of the present invention, the dimension reduction operation may be a convolution operation.
A convolution operation is then performed on the first dimension reduction feature map using the second dimension reduction feature map as the kernel, obtaining the third learning feature map. Optionally, this step is implemented by the convolution structure; that is, the convolution structure performs the convolution operation on the first dimension reduction feature map with the second dimension reduction feature map as its kernel.
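This step can be sketched with torch.nn.functional.conv2d by passing the second dimension reduction feature map as the convolution weight; the shapes below are assumptions:

```python
# Convolve the first dimension-reduced map using the second one as the kernel.
import torch
import torch.nn.functional as F

first_reduced = torch.randn(1, 1, 16, 32)  # (batch, channels, H, W)
second_reduced = torch.randn(1, 1, 5, 5)   # reused as (out, in, kH, kW) weight
third_map = F.conv2d(first_reduced, second_reduced, padding=2)  # (1, 1, 16, 32)
```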
The plurality of third learning feature sequences are learned through the second RNN network to predict a second training label of the training user.
In the embodiment of the invention, if the dimensions of the third learning feature map do not match the input dimensions of the second RNN network, a dimension reduction or dimension increase operation is first performed on the third learning feature map by convolution or pooling as appropriate, so that the second RNN network can learn the third learning feature sequences and predict the second training label of the training user.
Fusion transcription is then performed through the CTC loss layer based on the first training label and the second training label to obtain a prediction label. This specifically comprises: splicing the first training label and the second training label to form a fusion label of length M + N that contains both labels, where M is the length of the first training label and N is the length of the second training label; and transcribing based on the fusion label through the CTC loss function to obtain the prediction label. Concretely, the CTC loss function converts the label distributions obtained from the recurrent layers (the probabilities of the second training label predicted by the second RNN and of the first training label predicted by the first RNN) into a final label sequence by finding the label sequence with the highest combined probability. This may be done by taking, from the fusion label, the H target labels with the largest probability values as the prediction label, where each target label is a first or second training label and H is a positive integer greater than 0 (for example, H = 1, 2, ..., 10).
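The splice-and-select step can be sketched as follows (toy probabilities; the value of H and the label encoding are assumptions):

```python
# Splice the two label distributions (lengths M and N) and keep the H most
# probable entries as the prediction label.
import torch

probs_first = torch.tensor([0.90, 0.20, 0.70])  # M = 3, from the first RNN
probs_second = torch.tensor([0.60, 0.95])       # N = 2, from the second RNN
fused = torch.cat([probs_first, probs_second])  # fusion label, length M + N
H = 3
pred_idx = fused.topk(H).indices                # indices of the prediction label
```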
When the difference index between the prediction label and the first training label converges, training of the user ability portrait estimation model is determined to be complete, and the trained model is obtained.
In the embodiment of the present invention, the difference index is the Euclidean distance between the prediction label and the first training label, or the cosine of the angle between the feature vector formed by the prediction label and the feature vector formed by the first training label.
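Both difference indices can be computed directly; a toy sketch (the convergence tolerance is an assumption):

```python
# Euclidean distance and angle cosine between prediction and target labels.
import torch
import torch.nn.functional as F

pred = torch.tensor([0.9, 0.1, 0.8])
target = torch.tensor([1.0, 0.0, 1.0])
euclidean = torch.dist(pred, target)               # L2 distance
cosine = F.cosine_similarity(pred, target, dim=0)  # cosine of the angle
converged = euclidean.item() < 1e-3                # assumed stopping criterion
```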
In the embodiment of the invention, when the user ability portrait of the target user is generated from the basic attribute information and the learning ability labels, the basic attribute information may be input into a pre-trained CNN network, which extracts the target user's basic situation features; a trained RNN then predicts basic situation labels from these features; the basic situation labels and the learning ability labels form a label group; and the final learning ability labels, which accurately reflect the target user's learning characteristics (such as learning progress, learning ability and degree of mastery), are obtained from this label group through the CTC loss. The label vector formed by these learning ability labels is the user ability portrait. The procedure is similar to the prediction-label embodiment described above; only the input dimensions need to be replaced and the model and data adjusted accordingly.
By adopting the scheme, the characteristic information of the target user can be obtained from multiple aspects and multiple dimensions, deep information influencing the characteristics of the user is mined, the user capability portrait can be extracted under the condition that the information from the multiple aspects is mutually fused, coordinated and influenced, and the learning characteristics of the user can be accurately represented.
Example 2
Based on the above labeled user ability portrait analysis method, an embodiment of the present invention further provides a labeled user ability portrait analysis system, configured to execute the above labeled user ability portrait analysis method, where the system includes:
and the obtaining module is used for obtaining the basic attribute information and the historical course data of the target user. The basic attribute information comprises school information, professional information and competition information. The historical course data includes: the system comprises a target user course selection subject, target user course selection time, a target user learning time set and target user repeated learning times, wherein the target user learning time set comprises a plurality of target learning durations.
And the prediction module is used for inputting the historical course data into a pre-trained user ability portrait estimation model, and the user ability portrait estimation model estimates the learning ability label of the target user.
And the adjusting module is used for generating the user ability portrait of the target user according to the basic attribute information and the learning ability label. The user capability representation includes a plurality of user tags, with different user tags being used to characterize the target user in different dimensions.
Optionally, the system further includes:
and the recommending module is used for recommending courses which accord with the user capability portrait to the target user based on the user capability portrait.
The specific manner in which the respective modules perform operations has been described in detail in the embodiments related to the method, and will not be elaborated upon here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 3, including a memory 504, a processor 502, and a computer program stored on the memory 504 and executable on the processor 502, where the processor 502 implements the steps of any one of the labeled user capability representation analysis methods described above when executing the program.
Where in fig. 3 a bus architecture (represented by bus 500) is shown, bus 500 may include any number of interconnected buses and bridges, and bus 500 links together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of any one of the tagged user ability portrait analysis methods described above.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim.
The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A tagged user ability portrait analysis method, characterized by comprising:
acquiring basic attribute information and historical course data of a target user; the basic attribute information comprises school information, professional information and competition information; the historical course data includes: the target user course selection subject, the target user course selection time, a target user learning time set and the target user repeated learning times, wherein the target user learning time set comprises a plurality of target learning durations;
inputting the historical course data into a pre-trained user ability portrait estimation model, and estimating a learning ability label of the target user by the user ability portrait estimation model;
generating a user ability portrait of a target user according to the basic attribute information and the learning ability label; the user capability portrait comprises a plurality of user tags, and different user tags are used for representing the characteristics of the target user in different dimensions;
the user capability portrait estimation model comprises a CNN network, a first RNN network, a second RNN network, an LSTM network, a mixed pyramid structure network and a CTC loss layer; the CNN network is used for extracting historical learning characteristics of a target user according to the historical course data, and the first RNN network is used for predicting a historical learning label of the target user according to the historical learning characteristics; the LSTM network is used for predicting the learning characteristics of the target user in the next time period based on the historical learning characteristics of the target user; the mixed pyramid structure is used for fusing the historical learning features and the learning features of the next time period to obtain comprehensive learning features; the second RNN is used for predicting a next time period learning label of the target user based on the comprehensive learning characteristics; and the CTC loss layer is used for correcting the learning label of the next time period based on the historical learning label to obtain the learning ability label of the target user.
2. The method of claim 1, further comprising:
and recommending courses which are in accordance with the user capability representation to the target user based on the user capability representation.
3. The method of claim 1, wherein the method for training the user capability representation estimation model comprises:
obtaining a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each user corresponds to one training subset, each training subset comprises training data of a plurality of subjects, and each training data comprises a course selection subject of the training user, course selection time of the training user, a training user learning time set and repeated learning times of the training user; the training user learning time set comprises a plurality of training learning durations;
inputting a plurality of training subsets into a CNN network, extracting a first learning feature map of each training user by the CNN network based on training data of a plurality of subjects in the training subsets of each training user, wherein the first learning feature map comprises a plurality of first learning feature sequences, and each first learning feature sequence represents a learning characteristic of one subject;
learning the first learning feature sequences through a first RNN network to predict a first training label of a training user;
predicting through an LSTM network based on a plurality of first learning feature sequences to correspondingly obtain a plurality of second learning feature sequences;
forming a second learning feature map by using the plurality of second learning feature sequences;
fusing the first learning feature map and the second learning feature map through the mixed pyramid structure to obtain a third learning feature map, wherein the third learning feature map comprises a plurality of third learning feature sequences;
learning the plurality of third learning feature sequences through a second RNN network to predict a second training label of the training user;
performing fusion transcription on the basis of the first training label and the second training label through a CTC loss layer to obtain a prediction label;
and when the difference index of the prediction label and the first training label converges, determining that training of the user ability portrait estimation model is complete.
4. The method of claim 3, wherein the hybrid pyramid structure network comprises a first pyramid structure and a second pyramid structure; fusing the first learning feature map and the second learning feature map by the hybrid pyramid structure to obtain a third learning feature map, including:
performing dimension reduction operation on the first learning feature map through a first pyramid structure to obtain a first dimension reduction feature map;
performing dimension reduction operation on the second learning feature map through a second pyramid structure to obtain a second dimension reduction feature map;
and performing convolution operation on the first dimension reduction feature map by taking the second dimension reduction feature map as a kernel to obtain a third learning feature map.
5. The method of claim 3, wherein said performing a fusion transcription based on a first training signature and a second training signature through a CTC loss layer to obtain a predictive signature comprises:
splicing the first training label and the second training label to form a fusion label with the length of M + N; m is the length of the first training label, and N is the length of the second training label;
and transcribing based on the fusion label through a CTC loss function to obtain a prediction label.
6. A tagged user ability portrait analysis system, characterized by comprising:
the acquisition module is used for acquiring basic attribute information and historical course data of a target user; the basic attribute information comprises school information, professional information and competition information; the historical course data includes: the target user course selection subject, the target user course selection time, a target user learning time set and the target user repeated learning times, wherein the target user learning time set comprises a plurality of target learning durations;
the prediction module is used for inputting the historical course data into a pre-trained user ability portrait estimation model, and the user ability portrait estimation model estimates a learning ability label of the target user;
the adjusting module is used for generating a user ability portrait of a target user according to the basic attribute information and the learning ability label; the user capability portrait comprises a plurality of user tags, and different user tags are used for representing the characteristics of the target user in different dimensions;
the user capability portrait estimation model comprises a CNN network, a first RNN network, a second RNN network, an LSTM network, a mixed pyramid structure network and a CTC loss layer; the CNN network is used for extracting the historical learning characteristics of the target user according to the historical course data, and the first RNN network is used for predicting the historical learning labels of the target user according to the historical learning characteristics; the LSTM network is used for predicting the learning characteristics of the target user in the next time period based on the historical learning characteristics of the target user; the mixed pyramid structure is used for fusing the historical learning features and the learning features of the next time period to obtain comprehensive learning features; the second RNN is used for predicting a next time period learning label of the target user based on the comprehensive learning characteristics; and the CTC loss layer is used for correcting the learning label of the next time period based on the historical learning label to obtain the learning ability label of the target user.
7. The system of claim 6, further comprising:
and the recommending module is used for recommending courses which accord with the user capability portrait to the target user based on the user capability portrait.
8. The system of claim 6, wherein the method for training the user capability representation estimation model comprises:
obtaining a training set, wherein the training set comprises a plurality of training subsets corresponding to a plurality of training users, each user corresponds to one training subset, each training subset comprises training data of a plurality of subjects, and each training data comprises a course selection subject of the training user, course selection time of the training user, a training user learning time set and repeated learning times of the training user; the training user learning time set comprises a plurality of training learning time lengths;
inputting the plurality of training subsets into the CNN network, the CNN network extracting a first learning feature map of each training user based on the training data of the plurality of subjects in that training user's training subset, wherein the first learning feature map comprises a plurality of first learning feature sequences, and each first learning feature sequence represents the learning features of one subject;
learning the plurality of first learning feature sequences through the first RNN (recurrent neural network) to predict a first training label of the training user;
predicting through the LSTM network, based on the plurality of first learning feature sequences, a corresponding plurality of second learning feature sequences;
forming a second learning feature map from the plurality of second learning feature sequences;
fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain a third learning feature map, wherein the third learning feature map comprises a plurality of third learning feature sequences;
learning the plurality of third learning feature sequences through the second RNN (recurrent neural network) to predict a second training label of the training user;
performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain a prediction label;
and determining that training of the user ability portrait estimation model is complete when the difference metric between the prediction label and the first training label converges.
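For illustration only: a schematic training loop consistent with the convergence criterion of claim 8, reusing the AbilityPortraitModel sketch from claim 6. The optimizer, the MSE stand-in for the CTC-based difference metric, and the stopping tolerance are all assumptions.

```python
# Schematic training loop; Adam, the MSE stand-in for the CTC-based
# difference metric, and the tolerance are assumptions of this sketch.
import torch

def train(model, loader, epochs=50, tol=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    prev = float("inf")
    for _ in range(epochs):
        total = 0.0
        for course_data in loader:          # batches of training subsets
            hist_labels, next_labels = model(course_data)
            # Difference between the predicted next-period labels and the
            # historical (first training) labels; the patent uses a
            # CTC-based transcription, MSE is a simplification here.
            loss = torch.nn.functional.mse_loss(next_labels, hist_labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if abs(prev - total) < tol:         # difference metric converged
            break
        prev = total
    return model
```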
9. The system of claim 8, wherein the hybrid pyramid structure network comprises a first pyramid structure and a second pyramid structure, and fusing the first learning feature map and the second learning feature map through the hybrid pyramid structure network to obtain the third learning feature map comprises:
performing a dimension reduction operation on the first learning feature map through the first pyramid structure to obtain a first dimension-reduced feature map;
performing a dimension reduction operation on the second learning feature map through the second pyramid structure to obtain a second dimension-reduced feature map;
and performing a convolution operation on the first dimension-reduced feature map, using the second dimension-reduced feature map as the convolution kernel, to obtain the third learning feature map.
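For illustration only: one plausible reading of claim 9, with repeated average pooling standing in for the unspecified pyramid structures and the second dimension-reduced map applied as a depthwise convolution kernel over the first. Input shapes, pooling levels, and the depthwise interpretation are assumptions.

```python
# Illustrative reading of the hybrid-pyramid fusion: pooling pyramids
# reduce both maps, then the second reduced map serves as the kernel of
# a depthwise convolution over the first. Shapes/levels are assumptions.
import torch
import torch.nn.functional as F

def pyramid_reduce(x: torch.Tensor, levels: int = 2) -> torch.Tensor:
    # Repeated 2x average pooling as a stand-in for a pyramid structure.
    for _ in range(levels):
        x = F.avg_pool2d(x, kernel_size=2)
    return x

def pyramid_fuse(first_map: torch.Tensor,
                 second_map: torch.Tensor) -> torch.Tensor:
    # first_map, second_map: (1, C, H, W) learning feature maps.
    a = pyramid_reduce(first_map)    # first dimension-reduced feature map
    b = pyramid_reduce(second_map)   # second dimension-reduced feature map
    channels = a.shape[1]
    kernel = b.transpose(0, 1)       # (C, 1, kh, kw): one filter per channel
    pad = (kernel.shape[-2] // 2, kernel.shape[-1] // 2)
    # Convolve the first reduced map with the second as its kernel.
    return F.conv2d(a, kernel, padding=pad, groups=channels)
```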
10. The system of claim 8, wherein performing fusion transcription based on the first training label and the second training label through the CTC loss layer to obtain the prediction label comprises:
splicing the first training label and the second training label to form a fused label of length M + N, wherein M is the length of the first training label and N is the length of the second training label;
and transcribing the fused label through the CTC loss function to obtain the prediction label.
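For illustration only: a sketch of the splice-and-transcribe step recited in claim 10 (and, identically, at the end of claim 5 above). The two label sequences are concatenated into a target of length M + N, scored with torch.nn.CTCLoss, and greedily decoded into a prediction label; the label vocabulary, the blank index, and the source of the log-probabilities are assumptions.

```python
# Sketch of the splice-and-transcribe step: concatenate the two label
# sequences into a length M + N target, score it with CTC loss, and
# greedily decode a prediction label. Vocabulary/blank are assumptions.
import torch

def fuse_and_transcribe(log_probs, first_label, second_label, blank=0):
    # log_probs: (T, 1, num_labels) log-softmax output of the network;
    # first_label (length M) and second_label (length N): 1-D long tensors.
    fused = torch.cat([first_label, second_label])       # length M + N
    ctc = torch.nn.CTCLoss(blank=blank)
    loss = ctc(log_probs, fused.unsqueeze(0),
               torch.tensor([log_probs.shape[0]]),       # input length T
               torch.tensor([fused.numel()]))            # target length M + N
    # Greedy CTC decode: argmax per step, collapse repeats, drop blanks.
    path = log_probs.argmax(-1).squeeze(1)
    pred, last = [], blank
    for p in path.tolist():
        if p != blank and p != last:
            pred.append(p)
        last = p
    return loss, torch.tensor(pred)                      # prediction label
```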
CN202310102294.8A 2023-02-13 2023-02-13 Method and system for analyzing tagged user capability portrayal Active CN115830405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310102294.8A CN115830405B (en) 2023-02-13 2023-02-13 Method and system for analyzing tagged user capability portrayal

Publications (2)

Publication Number Publication Date
CN115830405A 2023-03-21
CN115830405B CN115830405B (en) 2023-09-22

Family

ID=85521069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310102294.8A Active CN115830405B (en) 2023-02-13 2023-02-13 Method and system for analyzing tagged user capability portrayal

Country Status (1)

Country Link
CN (1) CN115830405B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423442A (en) * 2017-08-07 2017-12-01 火烈鸟网络(广州)股份有限公司 Application recommendation method and system based on user-portrait behavior analysis, storage medium and computer equipment
KR102265573B1 (en) * 2020-09-29 2021-06-16 주식회사 팀기원매스 Method and system for reconstructing mathematics learning curriculum based on artificial intelligence
US20220415195A1 (en) * 2022-02-18 2022-12-29 Beijing Baidu Netcom Science Technology Co., Ltd. Method for training course recommendation model, method for course recommendation, and apparatus
CN114722281A (en) * 2022-04-07 2022-07-08 平安科技(深圳)有限公司 Training course configuration method and device based on user portrait and user course selection behavior

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONGJIAN ZHAN et al.: "DenseNet-CTC: An end-to-end RNN-free architecture for context-free string recognition", Computer Vision and Image Understanding, vol. 204 *
ZHANG HAIHUA: "Research on a course recommendation model for college students based on big data and machine learning" (基于大数据和机器学习的大学生选课推荐模型研究), Information Systems Engineering (信息系统工程), no. 04 *

Also Published As

Publication number Publication date
CN115830405B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN110147456B (en) Image classification method and device, readable storage medium and terminal equipment
CN110147551B (en) Multi-category entity recognition model training, entity recognition method, server and terminal
CN112508334B (en) Personalized paper grouping method and system integrating cognition characteristics and test question text information
CN110765882B (en) Video tag determination method, device, server and storage medium
CN111259647A (en) Question and answer text matching method, device, medium and electronic equipment based on artificial intelligence
CN111369535B (en) Cell detection method
CN111666416A (en) Method and apparatus for generating semantic matching model
CN111460101A (en) Knowledge point type identification method and device and processor
CN108228684A (en) Training method, device, electronic equipment and the computer storage media of Clustering Model
CN112905750A (en) Generation method and device of optimization model
CN111523604A (en) User classification method and related device
CN111161238A (en) Image quality evaluation method and device, electronic device, and storage medium
CN115830405B (en) Method and system for analyzing tagged user capability portrayal
CN115631008B (en) Commodity recommendation method, device, equipment and medium
CN114170484B (en) Picture attribute prediction method and device, electronic equipment and storage medium
CN113705092B (en) Disease prediction method and device based on machine learning
CN116228361A (en) Course recommendation method, device, equipment and storage medium based on feature matching
CN112231373B (en) Knowledge point data processing method, apparatus, device and computer readable medium
CN109918486B (en) Corpus construction method and device for intelligent customer service, computer equipment and storage medium
CN113255701A (en) Small sample learning method and system based on absolute-relative learning framework
CN110334353A (en) Analysis method, device, equipment and the storage medium of word order recognition performance
CN112529009B (en) Image feature mining method and device, storage medium and electronic equipment
CN117593613B (en) Multitasking learning method and device, storage medium and electronic equipment
CN113837910B (en) Test question recommending method and device, electronic equipment and storage medium
CN117058498B (en) Training method of segmentation map evaluation model, and segmentation map evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant