CN112465543A - User portrait generation method, equipment and computer storage medium - Google Patents


Info

Publication number
CN112465543A
CN112465543A (application CN202011338249.5A)
Authority
CN
China
Prior art keywords
information, target object, dimensional, interactive, behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011338249.5A
Other languages
Chinese (zh)
Inventor
李佳乐 (Li Jiale)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Jieti Education Technology Co., Ltd.
Original Assignee
Ningbo Jieti Education Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Jieti Education Technology Co., Ltd.
Priority application: CN202011338249.5A
Publication: CN112465543A
Legal status: Pending

Classifications

    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
      (G Physics › G06 Computing; Calculating or Counting › G06Q Information and communication technology [ICT] specially adapted for administrative, commercial, financial, managerial or supervisory purposes › G06Q30/00 Commerce › G06Q30/02 Marketing; Price estimation or determination; Fundraising)
    • G06Q10/067 Enterprise or organisation modelling
      (G › G06 › G06Q › G06Q10/00 Administration; Management › G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning)
    • G06Q50/205 Education administration or guidance
      (G › G06 › G06Q › G06Q50/00 ICT specially adapted for implementation of business processes of specific business sectors › G06Q50/10 Services › G06Q50/20 Education)

Abstract

Embodiments of the invention disclose a user portrait generation method applied to a server. The method comprises: collecting, through an information collector, first multi-dimensional information corresponding to a first target object; analyzing the first multi-dimensional information with a first multi-dimensional analysis model to obtain a first feature tag corresponding to that information; obtaining interaction behavior information between a second target object and the first target object, and determining, based on the first target object, a first interaction tag corresponding to that information; and generating a first user portrait for the first target object from the first feature tag and the first interaction tag. The method thereby generates user portraits for teachers or students from the behavior data produced by the activities of teachers and students on campus.

Description

User portrait generation method, equipment and computer storage medium
Technical Field
The invention relates to the field of smart campuses, and in particular to a user portrait generation method, a user portrait generation device, and a computer storage medium.
Background
With the development of technology and the arrival of the big-data era, big data has become one of the era's most valuable resources, often compared to coal or other energy reserves, and applying it well is key to staying competitive in many industries; schools are no exception. The activities of teachers and students on campus generate ever more behavior data, gradually forming a multi-dimensional big-data system spanning identity information, attendance, campus-card consumption, internet use, and more. How to apply these data reasonably in campus management, generate an information profile for each teacher or student, and thereby lay a foundation for precise management and service has become a significant management challenge for schools.
Disclosure of Invention
Embodiments of the invention provide a user portrait generation method, a user portrait generation device, and a computer storage medium that generate user portraits for teachers or students from the behavior data produced by the activities of teachers and students on campus, laying a foundation for their precise management and service.
An aspect of an embodiment of the present invention provides a user portrait generation method applied to a server. The method includes: collecting, through an information collector, first multi-dimensional information corresponding to a first target object; analyzing the first multi-dimensional information with a first multi-dimensional analysis model to obtain a first feature tag corresponding to that information; obtaining interaction behavior information between a second target object and the first target object, and determining, based on the first target object, a first interaction tag corresponding to that information; and generating a first user portrait for the first target object from the first feature tag and the first interaction tag.
In one implementation, obtaining the interaction behavior information between the second target object and the first target object includes: collecting, through the information collector, second multi-dimensional information corresponding to the second target object; and matching the second multi-dimensional information against the first multi-dimensional information to obtain the interaction behavior information between the two objects.
In one embodiment, generating the first user portrait corresponding to the first target object based on the first feature tag and the first interaction tag includes: analyzing the second multi-dimensional information with a second multi-dimensional analysis model to obtain a second feature tag corresponding to that information; and generating the first user portrait from the first feature tag, the second feature tag, and the first interaction tag.
In one embodiment, the method further comprises: determining, based on the second target object, a second interaction tag corresponding to the interaction behavior information; and generating a second user portrait corresponding to the second target object from the first feature tag, the second feature tag, and the second interaction tag.
In one implementation, analyzing the first multi-dimensional information with the first multi-dimensional analysis model to obtain the first feature tag includes: classifying the first multi-dimensional information according to a first dimension frame corresponding to the first target object, and determining first classification information; and analyzing the first classification information through the first dimension frame to obtain the first feature tag.
In one embodiment, the first classification information includes at least two of the following: first-category information characterizing the commuting behavior of the first target object; second-category information characterizing the network behavior of the first target object; third-category information characterizing the daily work and rest of the first target object; fourth-category information characterizing the consumption list of the first target object; and fifth-category information characterizing the teaching evaluation of the first target object.
In one implementation, analyzing the second multi-dimensional information with the second multi-dimensional analysis model to obtain the second feature tag includes: classifying the second multi-dimensional information according to a second dimension frame corresponding to the second target object, and determining second classification information; and analyzing the second classification information through the second dimension frame to obtain the second feature tag.
In one embodiment, the second classification information includes at least two of the following: sixth-category information characterizing the commuting behavior of the second target object; seventh-category information characterizing the network behavior of the second target object; eighth-category information characterizing the daily work and rest of the second target object; ninth-category information characterizing the consumption list of the second target object; and tenth-category information characterizing the grade evaluation of the second target object.
Another aspect of embodiments of the present invention provides a user portrait generation device, the device including:
a collection module, configured to collect, through an information collector, first multi-dimensional information corresponding to a first target object; an obtaining module, configured to analyze the first multi-dimensional information with a first multi-dimensional analysis model to obtain a first feature tag corresponding to that information; a first determining module, configured to obtain interaction behavior information between a second target object and the first target object and determine, based on the first target object, a first interaction tag corresponding to that information; and a generating module, configured to generate a first user portrait for the first target object from the first feature tag and the first interaction tag.
In one embodiment, the first determining module includes: a collection submodule, configured to collect, through the information collector, second multi-dimensional information corresponding to the second target object; and an obtaining submodule, configured to match the second multi-dimensional information against the first multi-dimensional information to obtain the interaction behavior information between the two objects.
In one embodiment, the generating module includes: a first obtaining submodule, configured to analyze the second multi-dimensional information with a second multi-dimensional analysis model to obtain a second feature tag; and a generating submodule, configured to generate the first user portrait from the first feature tag, the second feature tag, and the first interaction tag.
In one embodiment, the device further includes a second determining module, configured to determine, based on the second target object, a second interaction tag corresponding to the interaction behavior information; the generating module is further configured to generate a second user portrait for the second target object from the first feature tag, the second feature tag, and the second interaction tag.
In one embodiment, the obtaining module includes: a determining submodule, configured to classify the first multi-dimensional information according to a first dimension frame corresponding to the first target object and determine first classification information; and a second obtaining submodule, configured to analyze the first classification information through the first dimension frame to obtain the first feature tag.
In one embodiment, the first obtaining submodule includes: a determining unit, configured to classify the second multi-dimensional information according to a second dimension frame corresponding to the second target object and determine second classification information; and an obtaining unit, configured to analyze the second classification information through the second dimension frame to obtain the second feature tag.
Another aspect of embodiments of the present invention provides a computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform any of the user portrait generation methods described above.
With the user portrait generation method, device, and computer storage medium of the embodiments, the server first collects, through the information collector, the first multi-dimensional information corresponding to the first target object. It then analyzes that information with the first multi-dimensional analysis model to obtain the first feature tag, obtains the interaction behavior information between the second target object and the first target object, and determines the first interaction tag from that information. Finally, it generates the first user portrait for the first target object from the first feature tag and the first interaction tag. A user portrait is thus generated for a teacher or student from the behavior data produced by on-campus activities, laying a foundation for precise management and service of teachers and students.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a schematic flow chart illustrating an implementation of a user representation generation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a first feature tag determination process of a user representation generation method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of determining interaction behavior information in a user portrait generation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a first user representation generation flow of a user representation generation method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a second feature tag determination process of a user representation generation method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a second user representation generation flow of a user representation generation method according to an embodiment of the present invention;
FIG. 7 is a block diagram of a user representation generation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flow chart illustrating an implementation of a user portrait generation method according to an embodiment of the present invention.
Referring to fig. 1, an aspect of the present invention provides a user portrait generation method applied to a server, the method including: operation 101, collecting, through an information collector, first multi-dimensional information corresponding to a first target object; operation 102, analyzing the first multi-dimensional information with a first multi-dimensional analysis model to obtain a first feature tag corresponding to that information; operation 103, obtaining interaction behavior information between the first target object and a second target object, and determining, based on the first target object, a first interaction tag corresponding to that information; and operation 104, generating a first user portrait for the first target object based on the first feature tag and the first interaction tag.
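The four operations above can be sketched as a simple pipeline. This is an illustrative sketch only; the function names, dimension names, rule lambdas, and tag words below are assumptions for exposition, not part of the patent:

```python
def analyze_features(multi_dim_info, model):
    """Operation 102: map each collected dimension's raw value to a feature tag."""
    return {dim: model[dim](value) for dim, value in multi_dim_info.items()}

def generate_portrait(feature_tags, interaction_tags):
    """Operation 104: combine feature tags and interaction tags into one portrait."""
    return {"features": feature_tags, "interactions": interaction_tags}

# Toy per-dimension "analysis model": rules mapping raw values to tag words.
model = {
    "commute": lambda minutes_late: "punctual" if minutes_late == 0 else "often late",
    "consumption": lambda total: "high consumption" if total > 1000 else "low consumption",
}

info = {"commute": 0, "consumption": 1500}                # operation 101 (collected)
tags = analyze_features(info, model)                      # operation 102
portrait = generate_portrait(tags, ["active in class"])   # operations 103-104
```

The interaction tag is passed in directly here; operation 103's derivation from interaction records is sketched separately below in the text's own discussion of that step.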
Embodiments of the invention aim to provide a user portrait generation method, applied mainly in the smart-campus field, that uses the behavior data generated by the activities of teachers and students on campus (consumption, daily life, attendance, internet use, classroom activity, and the like) to generate a user portrait for each teacher or student. The portrait can support the management of teachers and students and lays a foundation for their precise management and service.
Explaining the method operation by operation: in operation 101, the server collects, through an information collector, first multi-dimensional information corresponding to a first target object. The first multi-dimensional information may be one or more kinds of dimensional information reflecting the personal behavior of a teacher or student. In one implementable case, when the first target object is a teacher or a student, the first multi-dimensional information may include commuting behavior information, network behavior information, daily work-and-rest information, consumption list information, classroom behavior information, and the like. In addition, when the first target object is a teacher, the first multi-dimensional information may further include teaching evaluation information for that teacher; when the first target object is a student, it may further include grade evaluation information for that student. To collect this information, the information collector may be any device capable of capturing dimensional information about the personal behavior of a teacher and/or student. There may be one or more collectors, and their types may vary with the first multi-dimensional information to be collected. In one implementable case, the information collector may be one or more of a commute card reader, a camera, a campus-card ("one-card") reader, and similar devices, each possibly deployed in multiple units. Specifically, when the information collector includes a commute card reader, the first multi-dimensional information may include commuting behavior information, for example the commute time.
When the information collector includes a campus-card reader, the first multi-dimensional information may include consumption list information, such as catering and learning-related purchases. When the information collector includes a camera, the first multi-dimensional information may further include learning or classroom behavior information of students and/or teachers, such as students' listening, reading, and silence behavior in class, or a teacher's multimedia use, observation of students, movement routes, and other teaching behavior.
After the first multi-dimensional information is obtained, in operation 102 the server analyzes it with the first multi-dimensional analysis model to obtain the first feature tag corresponding to that information. The first feature tag is the analysis result the model produces from the dimensional information on the first target object's personal behavior. In one implementation, the result may be a set of information that abstractly summarizes the first multi-dimensional information of the first target object.
Further, when the first target object is a student, the first feature tag may include results obtained by analyzing the student's learning, daily life, consumption, commuting, and network behavior. The embodiment does not limit the type or number of first feature tags. In one implementable case, the first feature tag may be scoring information obtained by the first multi-dimensional analysis model scoring each of those behaviors in turn; the scores can then serve as the basis for a five- or six-dimensional chart of the user. In another practical case, the first feature tag may be tag words summarizing those behaviors, such as words indicating whether the student is active in class, whether the student's daily routine is regular, or the student's consumption level. For example, a tag word indicating that the student studies diligently may be derived from the student's classroom study time and degree of concentration. Correspondingly, the first multi-dimensional analysis model may be a set of models, each analyzing a different kind of dimensional information (daily routine, consumption, commuting, learning, grades, network behavior), or a single model capable of analyzing all of those dimensions together.
In one implementable case, when the first target object is a teacher, the first multi-dimensional analysis model may be a set of models for the teacher's daily routine, commuting, consumption, network, and teaching activities, each obtained by classification training and grading training on a large amount of information related to the corresponding activity. For example, a consumption model may be obtained by consumption-classification and consumption-grading training on a large amount of consumption information. When applied, the consumption model extracts the consumption-related dimensional information from the first multi-dimensional information, determines a consumption level from it, and finally produces the corresponding tag word, such as "high consumption level" or "low consumption level". In one implementable case, each consumption-related dimension can be given a corresponding weight and the consumption level determined after a weighted combination. In another implementable case, the first multi-dimensional analysis model may instead be a single model obtained by training it to analyze daily-routine, commuting, consumption, network, and teaching information as a whole.
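The weighted combination mentioned for the consumption model can be sketched as follows. The dimension names, weights, and threshold are illustrative assumptions, not values from the patent:

```python
def consumption_level(record, weights, threshold=100.0):
    """Weight each consumption dimension, sum the results, and map the score
    to a consumption-level tag word (threshold is an assumed cutoff)."""
    score = sum(weights[dim] * amount for dim, amount in record.items())
    return "high consumption level" if score >= threshold else "low consumption level"

weights = {"catering": 0.6, "learning": 0.4}  # assumed per-dimension weights
level = consumption_level({"catering": 120.0, "learning": 80.0}, weights)
```

A trained model would learn the weights and cutoff from the classification and grading data; the fixed numbers here only show the shape of the computation.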
In one implementation, when the first multi-dimensional analysis model is a set of models and the first multi-dimensional information includes commuting behavior information and network behavior information, the set includes one model that derives a commuting tag from the commuting behavior and another that derives a network tag from the network behavior. The first feature tags obtained by analyzing the first multi-dimensional information may then include commuting and network tag words, such as tags indicating that the object attends class or work on time and/or enjoys browsing the internet.
In operation 103, the server obtains the interaction behavior information between the second target object and the first target object, and determines, based on the first target object, the first interaction tag corresponding to that information. The second target object is a person who interacts with the first target object on campus, and the interaction behavior information is the information those interactions generate, such as teaching interaction between a teacher and students in class, or communication between teacher and students after class. The first interaction tag is information the server determines from the interaction behavior information with the first target object as its subject. When the interaction behavior information includes after-class communication between the teacher and students, the first interaction tag may include a tag word indicating the teacher's after-class answering index, such as "answers questions after class". In another implementable case, when the first target object is a student, the first interaction tag may include a tag word representing the student's learning-interaction index, such as an active learning atmosphere in class. The embodiment likewise does not limit the types or numbers of interaction behavior information and first interaction tags, provided the generation of the user portrait is not affected.
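Determining an interaction tag from interaction behavior records can be sketched as follows. The event fields ("participants", "type"), the counting rule, and the third tag word are assumptions introduced for illustration:

```python
def first_interaction_tag(events, subject):
    """Derive an interaction tag word for `subject` from interaction events.

    Each event is a dict with assumed fields: "participants" (a set of
    identifiers) and "type" (e.g. "after_class_qa" or "in_class").
    """
    mine = [ev for ev in events if subject in ev["participants"]]
    if not mine:
        return "little interaction"
    if any(ev["type"] == "after_class_qa" for ev in mine):
        return "answers questions after class"
    return "active learning interaction in class"

events = [
    {"participants": {"teacher_A", "student_B"}, "type": "after_class_qa"},
    {"participants": {"teacher_A", "student_C"}, "type": "in_class"},
]
tag = first_interaction_tag(events, "teacher_A")
```

Note that the tag depends on which participant is taken as the subject: the same event list yields a teacher-centred tag for "teacher_A" and a student-centred tag for "student_C", mirroring the first/second interaction tag distinction in the claims.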
In operation 104, a first user portrait corresponding to the first target object may be generated from the first feature tag and the first interaction tag, for reference by a system administrator, thereby laying a foundation for precise management and service of teachers or students. Specifically, when the first feature tag and the first interaction tag are both scoring information, the first user portrait may be a chart drawn from them, such as a five- or six-dimensional chart; when both are tag words, the first user portrait may be a tag graph generated from those words.
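When the tags are scores, assembling the data behind such a five-dimensional chart is straightforward; a minimal sketch, with dimension names and score values as assumed examples:

```python
def radar_data(scores):
    """Arrange per-dimension scores into ordered (axis, value) pairs,
    e.g. for plotting a five-dimensional radar chart."""
    return [(axis, scores[axis]) for axis in sorted(scores)]

scores = {"learning": 85, "life": 70, "consumption": 60, "commute": 90, "network": 75}
chart = radar_data(scores)  # five axes, one score each, in a stable order
```

A plotting library (e.g. a polar-axes radar plot) would consume these pairs; only the data shape is shown here.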
FIG. 2 is a schematic diagram illustrating a first feature tag determination process of a user representation generation method according to an embodiment of the present invention.
Referring to fig. 2, in one implementation, analyzing the first multi-dimensional information with the first multi-dimensional analysis model in operation 102 to obtain the first feature tag includes: operation 201, classifying the first multi-dimensional information according to a first dimension frame corresponding to the first target object, and determining first classification information; and operation 202, analyzing the first classification information through the first dimension frame to obtain the first feature tag.
In operation 201, so that the first multi-dimensional analysis model can analyze the first multi-dimensional information accurately, the information is first classified according to the first dimension frame corresponding to the first target object, and first classification information is determined. The first dimension frame may be an information table preset in the model for classifying the first multi-dimensional information and determining tags. In one implementable case, the frame may specify how different pieces of dimensional information are divided; for example, it may direct network consumption information and catering consumption information into a consumption-list category, and personal habits into a daily work-and-rest category. After the first multi-dimensional information has been classified by the frame, the first classification information is determined from the resulting classes.
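Classification by such a dimension frame can be sketched as a lookup table that routes raw fields into categories. The field names and the "other" fallback are illustrative assumptions; the two category names echo the examples given above:

```python
# Assumed preset "dimension frame": raw information field -> category.
FRAME = {
    "network_spend": "consumption list",
    "catering_spend": "consumption list",
    "sleep_time": "daily work and rest",
    "gate_swipe_time": "commuting behavior",
}

def classify(multi_dim_info, frame=FRAME):
    """Group collected multi-dimensional information by frame category."""
    grouped = {}
    for field, value in multi_dim_info.items():
        grouped.setdefault(frame.get(field, "other"), {})[field] = value
    return grouped

info = {"network_spend": 30.0, "catering_spend": 120.0, "sleep_time": "23:10"}
classes = classify(info)
```

The grouped output corresponds to the "first classification information" that operation 202 then analyzes per category.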
In operation 202, the first classification information may be analyzed through the first dimensional framework to obtain the first feature tag. In an implementable case, the first classification information may include information representing a consumption list of the first target object; the first dimensional framework may then determine a consumption-demand level of the first target object from the consumption-list information, and determine the corresponding first feature tag from that level. Specifically, when the total amount of the consumption list of the first target object satisfies a preset threshold, the first dimensional framework may generate a tag word indicating the consumption level of the first target object, such as: high consumption level. When the first target object is a student, the first dimensional framework may also obtain, through an information collector such as a camera, image information captured in the classroom, perform feature recognition on the image information, and determine the number of times the corresponding student raises a hand in the images. When the hand-raising count satisfies a preset threshold, the first dimensional framework may likewise generate tag words representing the student's classroom engagement or concentration, such as: serious learning attitude, active classroom speech, high classroom concentration, and the like.
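The threshold rules in this paragraph could be sketched as follows; the threshold values and the exact tag words are assumptions for illustration only:

```python
def consumption_tag(total_amount, threshold=1000):
    """Tag word from the total of the consumption list (operation 202).
    Returns None when the threshold is not met."""
    return "high consumption level" if total_amount >= threshold else None

def classroom_tags(hand_raise_count, threshold=5):
    """Tag words from a hand-raise count recognized in classroom
    images; below the threshold, no tags are generated."""
    if hand_raise_count < threshold:
        return []
    return ["active classroom speech", "high classroom concentration"]
```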
In one embodiment, the first classification information includes at least two of the following information: first classification information for characterizing commuting behavior of the first target object; second classification information for characterizing network behavior of the first target object; third classification information for characterizing the first target object's daily work and rest; fourth classification information for characterizing the consumption list of the first target object; and fifth classification information used for representing the teaching evaluation of the first target object.
In an implementation case, when the first target object is a teacher, the first classification information may be the teacher's attendance and card-punching records, and the second classification information may be the teacher's internet access records. The third classification information may be the teacher's life-regularity and personal-preference information. The fourth classification information may be the teacher's online-shopping and campus-consumption information. The fifth classification information may be information used for teaching evaluation of the teacher. It should be clear that the embodiment of the present invention does not limit the type or amount of the first classification information, provided the generation of the user portrait of the first target object is not affected.
FIG. 3 is a schematic view illustrating a flow chart of determining interaction behavior information of a user portrait generation method according to an embodiment of the present invention.
Referring to fig. 3, in an implementation, acquiring interaction behavior information corresponding to a second target object and a first target object includes: operation 301, acquiring, by an information collector, second multidimensional information corresponding to a second target object; and operation 302, performing matching according to the second multi-dimensional information and the first multi-dimensional information, and acquiring interaction behavior information corresponding to the second target object and the first target object.
Specifically, in operation 103, to obtain the interaction behavior between the second target object and the first target object, operation 301 first collects, through the information collector, second multi-dimensional information corresponding to the second target object. The second multi-dimensional information may likewise be one or more kinds of dimensional information corresponding to the individual behavior of a teacher or a student; correspondingly, the second target object may be a teacher or a student. The embodiment of the present invention does not limit the identity of the second target object, provided an interaction behavior exists between the second target object and the first target object.
In operation 302, the server may perform matching according to the second multi-dimensional information and the first multi-dimensional information, and obtain interaction behavior information corresponding to the second target object and the first target object. Specifically, the server matches the second multidimensional information with the first multidimensional information to determine whether an interactive behavior exists between the second target object and the first target object, and then determines interactive behavior information.
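One way to realize the matching in operation 302 — purely a sketch, assuming each record carries a timestamp and a location, neither of which the patent specifies — is to treat co-presence within a short time window as an interaction:

```python
def match_interactions(first_info, second_info, max_gap_s=300):
    """Pair records of the two target objects that share a location
    within max_gap_s seconds; each pair is one piece of interaction
    behavior information. Field names ("location", "ts") are
    illustrative assumptions."""
    interactions = []
    for a in first_info:
        for b in second_info:
            same_place = a["location"] == b["location"]
            close_in_time = abs(a["ts"] - b["ts"]) <= max_gap_s
            if same_place and close_in_time:
                interactions.append(
                    {"location": a["location"], "ts": min(a["ts"], b["ts"])}
                )
    return interactions
```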
FIG. 4 is a schematic diagram illustrating a first user representation generation flow of a user representation generation method according to an embodiment of the present invention.
Referring to FIG. 4, in an embodiment, generating a first user representation corresponding to a first target object from a first feature tag and a first interaction tag in operation 104 includes: operation 401, analyzing the second multidimensional information according to the second multidimensional analysis model to obtain a second feature tag corresponding to the second multidimensional information; operation 402 generates a first user representation corresponding to the first target object based on the first feature tag, the second feature tag, and the first interaction tag.
Specifically, in operation 401, the second multi-dimensional information is analyzed according to the second multi-dimensional analysis model to obtain a second feature tag corresponding to the second multi-dimensional information. The second multi-dimensional analysis model may also be a set of multiple models, each analyzing a different kind of dimensional information such as life regularity, consumption, commuting, learning, scores, and network behavior; alternatively, it may be a single model that analyzes all of these dimensions as a whole. It should be clear that the second multi-dimensional analysis model may be the same as or different from the first multi-dimensional analysis model: when the first target object and the second target object are both teachers or both students, the two models may be the same; when they are a teacher and a student respectively, the two models may differ. Also, in one implementable case, the second multi-dimensional analysis model may be trained by the same method as the first multi-dimensional analysis model.
The second feature tag is then the analysis result obtained by the second multi-dimensional analysis model from the dimensional information corresponding to the individual behavior of the second target object. In one implementable case, the analysis result may be an information set obtained by abstractly summarizing the second multi-dimensional information of the second target object.
In an implementation case, when the second target object is a teacher, the second multi-dimensional information may likewise include analysis results of the teacher's teaching, life, consumption, commuting, network, and other behaviors. The second multi-dimensional information is then analyzed according to the second multi-dimensional analysis model to obtain the second feature tag. In one implementation case, the second feature tag may be scoring information obtained by the second multi-dimensional analysis model scoring the teacher's teaching, life, consumption, commuting, logistics, and network behaviors one by one and generating a corresponding six-dimensional graph for the teacher. In another implementation case, the second feature tag may be tag words summarizing the teacher's teaching, life, consumption, commuting, and network behaviors, indicating, for example, whether the teacher's classroom teaching is detailed and whether the teacher's work and rest are regular. Further, where the second multi-dimensional information includes the teacher's blackboard writing, example demonstrations, observation of students, and multimedia operation information, the second multi-dimensional analysis model may also generate a tag word representing the teacher's class quality, such as: good classroom quality.
Further, operation 402 generates a first user portrait corresponding to the first target object according to the first feature tag, the second feature tag, and the first interactive tag. In an implementable case, when the second target object is a student, the second feature tag may include tag words indicating the student's attendance rate, classroom concentration, and classroom activity; and when the first target object is a teacher, the part of the second feature tag indicating the student's attendance rate, classroom concentration, and classroom activity may also serve as a basis for teaching evaluation of the teacher, so the second feature tag may likewise be used to generate the user portrait of the first target object. Thus, to obtain a more accurate first user portrait, the first user portrait may be generated from the first feature tag, the first interactive tag, and the second feature tag. There may be multiple methods of generating the first user portrait; in an implementable case, this may include screening the second feature tags, determining those second feature tags that can be used to evaluate the first target object, and then generating the first user portrait from those tags together with the first feature tag and the first interactive tag.
Specifically, when the first target object is a teacher and the second target object is a student, the first feature tag may include: good teaching, serious work, regular work and rest, normal network behavior, and the like. The first interactive tag may include: high teaching interaction index, frequent after-class guidance to students, and the like. Correspondingly, the second feature tag may include: good learning attitude, good living habits, correct consumption concepts, high classroom concentration, and the like. In one implementation, when generating the first user portrait, the server may first screen the second feature tags to keep those usable for evaluating the first target object, such as: high classroom concentration. A first user portrait corresponding to the first target object is then generated from the screened second feature tags, the first feature tag, and the first interactive tag, and may include the following tags: good teaching, serious work, regular work and rest, normal network behavior, high classroom concentration, high teaching interaction index, and frequent after-class guidance to students. In another implementation case, a tag characterizing the teaching level of the first target object may also be generated from the first interactive tag and the screened second feature tags, such as: generating a tag indicating a high teaching level from the tag indicating a high teaching interaction index and the tag indicating the student's high classroom concentration.
Finally, a first user portrait is generated from the teaching-level tag and the first feature tag characterizing the first target object, where the first user portrait may include the following tags: good teaching, serious work, regular work and rest, normal network behavior, and high teaching level.
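The screening-and-merging flow just described might look like the sketch below; the relevance set and every tag string are illustrative assumptions:

```python
# Hypothetical set of student-side tags considered relevant when
# evaluating a teacher.
TEACHER_RELEVANT = {"high classroom concentration", "active classroom speech"}

def first_user_portrait(first_tags, interactive_tags, second_tags):
    """Screen the second feature tags, then merge the survivors with
    the first feature and interactive tags (operation 402)."""
    screened = [t for t in second_tags if t in TEACHER_RELEVANT]
    return first_tags + interactive_tags + screened

portrait = first_user_portrait(
    ["good teaching", "serious work"],
    ["high teaching interaction index"],
    ["good living habits", "high classroom concentration"],
)
```

Note that student tags irrelevant to the teacher ("good living habits") are dropped by the screening step rather than carried into the teacher's portrait.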
FIG. 5 is a schematic diagram illustrating a second feature tag determination process of a user representation generation method according to an embodiment of the present invention.
In an implementation manner, the analyzing, by operation 401, the second multidimensional information according to the second multidimensional analysis model to obtain a second feature tag corresponding to the second multidimensional information includes: in operation 501, classifying the second multi-dimensional information according to a second dimensional frame corresponding to the second target object, and determining second classification information; operation 502, the second classification information is analyzed through the second dimension frame to obtain a second feature tag.
In operation 501, the second multi-dimensional information may be classified according to a second dimensional framework corresponding to the second target object, and second classification information is determined. The second dimensional framework may likewise be an information table preset in the model for classifying the second multi-dimensional information and determining tags, and may include information indicating how the second multi-dimensional information is to be classified. Further, after the second dimensional framework completes classification of the second multi-dimensional information, the second classification information may be determined according to the classification result.
In operation 502, the second classification information is analyzed through the second dimensional framework to finally obtain the second feature tag. In an implementable case, the second classification information may include schedule information of the second target object; the second dimensional framework may then determine the life information of the second target object from the schedule information, and determine and summarize the corresponding second feature tag from that life information, such as: irregular daily work and rest. As with the first feature tag, the type and number of second feature tags are not limited, provided the generation of the user portrait is not affected.
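The regularity judgment could be sketched as a spread check over recorded bed times; the tolerance and the tag words are assumed parameters, not drawn from the patent:

```python
def work_rest_tag(bedtimes_minutes, tolerance=45):
    """Summarize schedule information into a regularity tag word.
    bedtimes_minutes: bed times as minutes past midnight, one entry
    per day; a small spread counts as a regular schedule."""
    spread = max(bedtimes_minutes) - min(bedtimes_minutes)
    if spread <= tolerance:
        return "regular daily work and rest"
    return "irregular daily work and rest"
```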
In one embodiment, the second classification information includes at least two of the following information: sixth classification information for characterizing commuting behavior of the second target object; seventh classification information for characterizing network behavior of the second target object; eighth classification information for characterizing the second target object's daily work and rest; ninth classification information for characterizing the second target object's consumption list; and tenth classification information for characterizing the achievement evaluation of the second target object.
In an implementation case, when the second target object is a student, the sixth classification information may be the student's late-arrival, early-leaving, absence, and commuting records, and the seventh classification information may be the student's network activity, such as gaming and web browsing. The eighth classification information may be the student's daily schedule, the ninth classification information may be the student's on-campus consumption information, and the tenth classification information may be the student's learning and achievement records.
FIG. 6 is a schematic diagram of a second user representation generation flow of a user representation generation method according to an embodiment of the present invention.
Referring to fig. 6, in an embodiment, the method further comprises: operation 601, determining a second interactive label corresponding to the interactive behavior information based on a second target object; in operation 602, a second user representation corresponding to a second target object is generated based on the first feature tag, the second feature tag, and the second interaction tag.
Specifically, after the first user portrait is obtained, the method may further include operation 601: determining, based on the second target object, a second interactive tag corresponding to the interaction behavior information. The second interactive tag is information corresponding to the second target object determined by the server according to the interaction behavior information, and its subject is the second target object. In an implementable case, when the interaction behavior information includes the classroom interaction between the teacher and the student corresponding to the first interactive tag, the second interactive tag may include a tag word indicating the student's enthusiasm in answering questions in class, such as: actively answers questions.
Further, in operation 602, a second user portrait corresponding to the second target object may be generated according to the first feature tag, the second feature tag, and the second interactive tag. Specifically, the first feature tag may include indicators reflecting the teacher's teaching behavior, such as the teacher's individual guidance to or rating of students; when the second target object is a student, such guidance or rating can also be an important basis for evaluating the student, so the second user portrait corresponding to the second target object can be generated according to the first feature tag, the second feature tag, and the second interactive tag.
In one embodiment, a user portrait generation method is provided that can generate a user portrait for a teacher or a student based on behavior data produced by the activities of teachers and students on campus. Specifically, in the application scenario provided by the embodiment of the present invention, the information collector may be a camera, a commuting card reader, a campus-card reader, and the like; the first multi-dimensional information may include a teacher's or student's information in aspects such as learning, life, educational administration, logistics, network, and commuting; and the first multi-dimensional analysis model may be a set of models respectively analyzing different dimensional information such as life regularity, consumption, commuting, learning, scores, and network behavior, used to generate corresponding tags for the first multi-dimensional information. Further, according to the user portrait generation method provided by the embodiment of the present invention, first multi-dimensional information of a teacher, such as life regularity, consumption, attendance, learning, scores, and network behavior, may be collected by a camera and a card reader, and the first multi-dimensional information is then analyzed according to the first multi-dimensional analysis model to obtain the first feature tag corresponding to the first multi-dimensional information. When the first target object is a teacher, the first feature tags may include tags characterizing the commuting behavior, network behavior, daily work and rest, consumption list, and teaching evaluation of the first target object.
Further, second multi-dimensional information of a student who has interaction behavior with the teacher is obtained. On one hand, when the first target object is the teacher and the second target object is the student, matching is performed according to the second multi-dimensional information and the first multi-dimensional information; after interaction behavior information between the teacher and the student is obtained, a first interactive tag and a second interactive tag can be obtained from the interaction behavior information. In an implementable case, the first interactive tag may serve as the teacher's teaching interaction index, and the second interactive tag as the student's learning interaction index. On the other hand, the second multi-dimensional information is analyzed by the second multi-dimensional analysis model to obtain the second feature tag; specifically, the second multi-dimensional analysis model may classify the second multi-dimensional information to determine second classification information, and then analyze the second classification information to determine the second feature tag. Finally, the second feature tags are screened to determine those usable for evaluating the first target object, and the first user portrait is generated from the screened second feature tags, the first feature tag, and the first interactive tag.
FIG. 7 is a block diagram of a user representation generation apparatus according to an embodiment of the present invention.
With reference to FIG. 7, another aspect of the present invention provides a user portrait generation apparatus, the apparatus comprising: an acquisition module 701, configured to collect, through an information collector, first multi-dimensional information corresponding to a first target object; an obtaining module 705, configured to analyze the first multi-dimensional information according to the first multi-dimensional analysis model and obtain a first feature tag corresponding to the first multi-dimensional information; a first determining module 704, configured to obtain interaction behavior information corresponding to the first target object and a second target object, and determine, based on the first target object, a first interactive tag corresponding to the interaction behavior information; and a generating module 703, configured to generate a first user portrait corresponding to the first target object according to the first feature tag and the first interactive tag.
In one embodiment, the first determining module 704 includes: the acquisition submodule 7041 is configured to acquire, by the information acquisition device, second multidimensional information corresponding to a second target object; the obtaining sub-module 7042 is configured to perform matching according to the second multi-dimensional information and the first multi-dimensional information, and obtain interaction behavior information corresponding to the second target object and the first target object.
In one embodiment, the generating module 703 includes: the first obtaining sub-module 7032 is configured to analyze the second multidimensional information according to the second multidimensional analysis model to obtain a second feature tag corresponding to the second multidimensional information; the generating submodule 7031 is configured to generate a first user portrait corresponding to the first target object according to the first feature tag, the second feature tag, and the first interactive tag.
In one embodiment, the apparatus further comprises: a second determining module 702, configured to determine, based on the second target object, a second interaction tag corresponding to the interaction behavior information; the generating module 703 is further configured to generate a second user representation corresponding to the second target object according to the first feature tag, the second feature tag, and the second interaction tag.
In one implementation, the obtaining module 705 includes: the determining submodule 7051 is configured to classify the first multi-dimensional information according to a first dimensional frame corresponding to the first target object, and determine first classification information; the second obtaining sub-module 7052 analyzes the first classification information through the first dimension frame to obtain a first feature tag.
In one possible embodiment, first obtaining submodule 7032 includes: a determining unit 70321, configured to classify the second multi-dimensional information according to a second dimensional frame corresponding to the second target object, and determine second classification information; an obtaining unit 70322, configured to analyze the second classification information through the second dimensional framework, and obtain a second feature tag.
Another aspect of embodiments of the present invention provides a computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform any of the user representation generation methods described above.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A user representation generation method, applied to a server, the method comprising:
collecting first multi-dimensional information corresponding to a first target object through an information collector;
analyzing the first multi-dimensional information according to a first multi-dimensional analysis model to obtain a first feature tag corresponding to the first multi-dimensional information;
the method comprises the steps of obtaining interactive behavior information corresponding to a second target object and a first target object, and determining a first interactive label corresponding to the interactive behavior information based on the first target object;
and generating a first user portrait corresponding to the first target object according to the first characteristic label and the first interactive label.
2. The method according to claim 1, wherein the obtaining of the interaction behavior information of the second target object corresponding to the first target object comprises:
collecting second multi-dimensional information corresponding to a second target object through an information collector;
and matching according to the second multi-dimensional information and the first multi-dimensional information to obtain the interactive behavior information corresponding to the second target object and the first target object.
3. The method of claim 1, wherein generating a first user representation corresponding to the first target object from the first feature tag and the first interaction tag comprises:
analyzing the second multi-dimensional information according to a second multi-dimensional analysis model to obtain a second feature tag corresponding to the second multi-dimensional information;
and generating a first user portrait corresponding to the first target object according to the first characteristic label, the second characteristic label and the first interactive label.
4. The method of claim 1, further comprising:
determining a second interactive label corresponding to the interactive behavior information based on the second target object;
and generating a second user portrait corresponding to the second target object according to the first characteristic label, the second characteristic label and the second interactive label.
5. The method of claim 1, wherein analyzing the first multi-dimensional information according to the first multi-dimensional analysis model to obtain a first feature tag corresponding to the first multi-dimensional information comprises:
classifying the first multi-dimensional information according to a first dimensional frame corresponding to the first target object, and determining first classification information;
and analyzing the first classification information through the first dimension frame to obtain a first feature tag.
6. The method of claim 5, wherein the first classification information comprises at least two of the following information: first classification information for characterizing commuting behavior of the first target object; second classification information for characterizing network behavior of the first target object; third classification information for characterizing the first target object's daily work and rest; fourth classification information for characterizing the consumption list of the first target object; and fifth classification information used for representing the teaching evaluation of the first target object.
7. The method of claim 3, wherein analyzing the second multidimensional information according to a second multidimensional analysis model to obtain a second feature tag corresponding to the second multidimensional information comprises:
classifying the second multi-dimensional information according to a second dimensional frame corresponding to the second target object, and determining second classification information;
and analyzing the second classification information through the second dimension frame to obtain a second feature tag.
8. The method of claim 7, wherein the second classification information comprises at least two of the following information: sixth classification information for characterizing commuting behavior of the second target object; seventh classification information for characterizing network behavior of the second target object; eighth classification information for characterizing the second target object's daily work and rest; ninth classification information for characterizing the second target object's consumption list; and tenth classification information for characterizing the achievement evaluation of the second target object.
9. A user portrait generation device, characterized in that the device comprises:
an acquisition module, configured to acquire, through an information acquisition apparatus, first multi-dimensional information corresponding to a first target object;
an obtaining module, configured to analyze the first multi-dimensional information according to a first multi-dimensional analysis model to obtain a first feature tag corresponding to the first multi-dimensional information;
a first determining module, configured to acquire interaction behavior information between a second target object and the first target object, and to determine, based on the first target object, a first interactive tag corresponding to the interaction behavior information;
and a generating module, configured to generate a first user portrait corresponding to the first target object according to the first feature tag and the first interactive tag.
10. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the user portrait generation method of any one of claims 1 to 8.
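The classify-then-tag pipeline recited in claims 5-9 (group the multi-dimensional information by a dimensional frame, derive a feature tag per classification, then merge in the interactive tag to form the portrait) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the dimension names simply mirror the claim language, and every function and variable name here is an assumption introduced for illustration.

```python
# Hypothetical sketch of the portrait pipeline in claims 5-9.
# The dimensional frame lists the claim's five classification categories.
DIMENSION_FRAME = [
    "commuting_behavior",
    "network_behavior",
    "daily_routine",
    "consumption_list",
    "teaching_evaluation",
]

def classify(multi_dim_info, frame=DIMENSION_FRAME):
    """Group raw records into the classification buckets of the frame."""
    buckets = {dim: [] for dim in frame}
    for record in multi_dim_info:
        dim = record.get("dimension")
        if dim in buckets:
            buckets[dim].append(record["value"])
    return buckets

def feature_tags(buckets):
    """Derive one coarse feature tag per non-empty classification bucket."""
    return {dim: f"{dim}:{len(values)}_records"
            for dim, values in buckets.items() if values}

def generate_portrait(features, interactive_tag):
    """Combine the feature tags with the interactive tag into a portrait."""
    portrait = dict(features)
    portrait["interaction"] = interactive_tag
    return portrait

# Example: three raw records for a first target object, plus one
# interactive tag derived from its interactions with a second target object.
info = [
    {"dimension": "network_behavior", "value": "forum_login"},
    {"dimension": "consumption_list", "value": "canteen"},
    {"dimension": "network_behavior", "value": "video_lecture"},
]
portrait = generate_portrait(feature_tags(classify(info)),
                             "frequent_qa_with_teacher")
```

Under these assumptions, the resulting portrait carries one tag per populated dimension plus the interaction tag; empty dimensions of the frame are simply omitted.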
CN202011338249.5A 2020-11-25 2020-11-25 User portrait generation method, equipment and computer storage medium Pending CN112465543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011338249.5A CN112465543A (en) 2020-11-25 2020-11-25 User portrait generation method, equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN112465543A true CN112465543A (en) 2021-03-09

Family

ID=74798909


Country Status (1)

Country Link
CN (1) CN112465543A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895026A (en) * 2017-11-17 2018-04-10 联奕科技有限公司 A kind of implementation method of campus user portrait
CN109766000A (en) * 2018-12-25 2019-05-17 重庆和贯科技有限公司 A kind of wisdom education system and method based on virtual reality
CN110910038A (en) * 2019-12-02 2020-03-24 成都中医药大学 Method for constructing teacher teaching portrait model based on network course


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862582A (en) * 2021-02-18 2021-05-28 深圳无域科技技术有限公司 User portrait generation system and method based on financial wind control
CN112862582B (en) * 2021-02-18 2024-03-22 深圳无域科技技术有限公司 User portrait generation system and method based on financial wind control
CN113783709A (en) * 2021-08-31 2021-12-10 深圳市易平方网络科技有限公司 Conference system-based participant monitoring and processing method and device and intelligent terminal
CN113783709B (en) * 2021-08-31 2024-03-19 重庆市易平方科技有限公司 Conference participant monitoring and processing method and device based on conference system and intelligent terminal
CN116739387A (en) * 2023-08-14 2023-09-12 广东南方电信规划咨询设计院有限公司 Method and device for multidimensional analysis of data and computer storage medium
CN116739387B (en) * 2023-08-14 2024-01-12 广东南方电信规划咨询设计院有限公司 Method and device for multidimensional analysis of data and computer storage medium

Similar Documents

Publication Publication Date Title
Ferguson et al. Exploring the state of science stereotypes: Systematic review and meta‐analysis of the Draw‐A‐Scientist Checklist
CN112465543A (en) User portrait generation method, equipment and computer storage medium
Kuo et al. A creative thinking approach to enhancing the web-based problem solving performance of university students
Tervakari et al. Usefulness of information visualizations based on educational data
John et al. Devices and desires: Subject subcultures, pedagogical identity and the challenge of information and communications technology
CN112184500A (en) Extraclass learning tutoring system based on deep learning and knowledge graph and implementation method
Jena Predicting students’ learning style using learning analytics: a case study of business management students from India
Maraza-Quispe et al. A predictive model implemented in knime based on learning analytics for timely decision making in virtual learning environments
Maaliw III Classification of learning styles in virtual learning environment using data mining: A basis for adaptive course design
Al-Alwani Mood extraction using facial features to improve learning curves of students in e-learning systems
Dimić et al. Association analysis of moodle e‐tests in blended learning educational environment
CN112733059A (en) Intelligent reading tracking evaluation method, system, terminal and storage medium
Yathongchai et al. Learner classification based on learning behavior and performance
CN110223202A (en) A kind of method and system of teaching stage property identification and scoring
Qu et al. Enhancing the Intelligence of the Adaptive Learning Software through an AI assisted Data Analytics on Students Learning Attributes with Unequal Weight
Kickmeier-Rust et al. Competence-based knowledge space theory: Options for the 21st century classroom
KR101996247B1 (en) Method and apparatus of diagnostic test
Maaliw III et al. Comparative analysis of data mining techniques for classification of student’s learning styles
CN110807060A (en) Education big data analysis system
Fasihuddin et al. A Framework to Personalise Open Learning Environments by Adapting to Learning Styles.
CN108053193A (en) Educational information is analyzed and querying method and system
Fan et al. Personalized recommendation algorithm for curriculum-and politics-oriented hybrid teaching resources
Atmojo et al. The Level of Classroom Teacher Digital Literacy in the Technology Dimension of the Instant Digital Competence Assessment (IDCA).
Raghavjee et al. Learning analytics in higher education
Nimy et al. Web-based Clustering Application for Determining and Understanding Student Engagement Levels in Virtual Learning Environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210309