CN113284256A - MR mixed reality three-dimensional scene material library generation method and system - Google Patents

Info

Publication number
CN113284256A
Authority
CN
China
Prior art keywords
dimensional scene
obtaining
scene matching
matching
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110571672.8A
Other languages
Chinese (zh)
Other versions
CN113284256B (en)
Inventor
吕云
张赐
何林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Weiai New Economic And Technological Research Institute Co ltd
Original Assignee
Chengdu Weiai New Economic And Technological Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Weiai New Economic And Technological Research Institute Co ltd filed Critical Chengdu Weiai New Economic And Technological Research Institute Co ltd
Priority to CN202110571672.8A priority Critical patent/CN113284256B/en
Publication of CN113284256A publication Critical patent/CN113284256A/en
Application granted granted Critical
Publication of CN113284256B publication Critical patent/CN113284256B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/61 - Scene description
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a system for generating an MR mixed reality three-dimensional scene material library: historical data of three-dimensional scene matching actions corresponding to a first material is obtained; a three-dimensional scene matching evaluation model is constructed; a first evaluation result is obtained according to the first material, a first three-dimensional scene matching action, and the three-dimensional scene matching evaluation model; emotion analysis is performed on the first evaluation result to obtain a first emotion score, from which a first guidance matching direction is obtained; optimization according to the first guidance matching direction and the first three-dimensional scene matching action yields a second three-dimensional scene matching action; an Nth evaluation result and a corresponding mapping relation are obtained in sequence according to the first material, the Nth three-dimensional scene matching action, and the three-dimensional scene matching evaluation model; and a first mixed reality three-dimensional scene material state sub-library is constructed according to the first material and the Nth mapping relation. The method solves the technical problem in the prior art that material matching for MR mixed reality three-dimensional scenes is not sufficiently intelligent or accurate.

Description

MR mixed reality three-dimensional scene material library generation method and system
Technical Field
The invention relates to the related field of mixed reality material library construction, in particular to a method and a system for generating an MR mixed reality three-dimensional scene material library.
Background
As AR/VR increasingly intersects with the Internet of Things, mixed reality has emerged as the combination of AR/VR with Internet of Things technology: the virtual world and the real world share a common space in which digital objects and real objects coexist and their data can interact. MR technology progressively removes the media and barriers between perception, understanding, and information, continually reshaping how people relate to and operate on information. Mixed reality is a new visual environment generated by combining the real and virtual worlds and is a further development of virtual reality technology; it enhances the realism of the user experience by presenting virtual scene information in a real scene and building an interactive feedback loop among the real world, the virtual world, and the user.
However, in implementing the technical solution of the invention in the embodiments of the present application, the inventors of the present application found that the above technology has at least the following technical problem:
in the prior art, material matching for the MR mixed reality three-dimensional scene is not sufficiently intelligent or accurate.
Disclosure of Invention
The embodiments of the present application provide a method and a system for generating an MR mixed reality three-dimensional scene material library, solving the technical problem that material matching for MR mixed reality three-dimensional scenes is not sufficiently intelligent or accurate, and achieving the technical effect of intelligently building a mixed reality three-dimensional scene material state sub-library from scene information, deeply combining it with scenes, and improving material matching accuracy.
In view of the foregoing problems, the present application provides a method and a system for generating an MR mixed reality three-dimensional scene material library.
In a first aspect, the present application provides a method for generating an MR mixed reality three-dimensional scene material library, wherein the method includes: obtaining a first material; obtaining, based on big data, historical data of three-dimensional scene matching actions corresponding to the first material; training a neural network model according to the first material and the historical data of the three-dimensional scene matching actions corresponding to the first material to construct a three-dimensional scene matching evaluation model; obtaining a first three-dimensional scene matching action; obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action, and the three-dimensional scene matching evaluation model; performing emotion analysis on the first evaluation result to obtain a first emotion score; obtaining a first guidance matching direction according to the first emotion score; optimizing according to the first guidance matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action; based on the evaluation results, sequentially obtaining an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action, and the three-dimensional scene matching evaluation model, where N is a positive integer; obtaining an Nth mapping relation between the first material and the Nth three-dimensional scene matching action; and constructing a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relation.
In another aspect, the present application further provides a system for generating an MR mixed reality three-dimensional scene material library, where the system includes: a first obtaining unit configured to obtain a first material; a second obtaining unit configured to obtain, based on big data, historical data of three-dimensional scene matching actions corresponding to the first material; a first construction unit configured to train a neural network model according to the first material and the historical data of the three-dimensional scene matching actions corresponding to the first material to construct a three-dimensional scene matching evaluation model; a third obtaining unit configured to obtain a first three-dimensional scene matching action; a fourth obtaining unit configured to obtain a first evaluation result according to the first material, the first three-dimensional scene matching action, and the three-dimensional scene matching evaluation model; a fifth obtaining unit configured to perform emotion analysis on the first evaluation result to obtain a first emotion score; a sixth obtaining unit configured to obtain a first guidance matching direction according to the first emotion score; a seventh obtaining unit configured to perform optimization according to the first guidance matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action; an eighth obtaining unit configured to sequentially obtain an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action, and the three-dimensional scene matching evaluation model based on the evaluation results, where N is a positive integer; a ninth obtaining unit configured to obtain an Nth mapping relation between the first material and the Nth three-dimensional scene matching action; and a second construction unit configured to construct a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relation.
In a third aspect, the present invention provides a system for generating an MR mixed reality three-dimensional scene material library, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the method comprises the steps of obtaining a first material, obtaining three-dimensional scene matching action historical data corresponding to the first material based on big data, training a neural network model according to the first material and the historical data of the three-dimensional scene matching action corresponding to the first material, constructing a three-dimensional scene matching evaluation model, obtaining a first three-dimensional scene matching action, obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action and the three-dimensional scene evaluation model, carrying out emotion analysis on the first evaluation result, obtaining a first emotion score, obtaining a first guidance matching direction based on the first emotion score, optimizing based on the first guidance matching direction and the first three-dimensional scene matching action, obtaining a second three-dimensional scene matching action, and obtaining a first material, a second three-dimensional scene matching action based on the first guidance matching direction and the first three-dimensional scene matching action And an Nth three-dimensional scene matching action and the three-dimensional scene matching evaluation model are used for obtaining an Nth evaluation result, wherein N is a positive integer, a corresponding mapping relation is constructed based on the first material and the matching action of different three-dimensional scenes, a first mixed reality three-dimensional scene material state transition sub-library of the first material is constructed based on the first material and the mapping relation, and the material is intelligently matched based on the first mixed reality three-dimensional scene material state sub-library, so that the technical effect of more accurate matching is achieved.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the content of the description, and to make the above and other objects, features, and advantages of the present application more readily understandable, a detailed description of the present application is given below.
Drawings
Fig. 1 is a schematic flowchart of a method for generating an MR mixed reality three-dimensional scene material library according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a system for generating an MR mixed reality three-dimensional scene material library according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a first constructing unit 13, a third obtaining unit 14, a fourth obtaining unit 15, a fifth obtaining unit 16, a sixth obtaining unit 17, a seventh obtaining unit 18, an eighth obtaining unit 19, a ninth obtaining unit 20, a second constructing unit 21, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiments of the present application provide a method and a system for generating an MR mixed reality three-dimensional scene material library, solving the technical problem that material matching for MR mixed reality three-dimensional scenes is not sufficiently intelligent or accurate, and achieving the technical effect of intelligently building a mixed reality three-dimensional scene material state sub-library from scene information, deeply combining it with scenes, and improving material matching accuracy. Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some, not all, embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Summary of the application
As AR/VR increasingly intersects with the Internet of Things, mixed reality has emerged as the combination of AR/VR with Internet of Things technology: the virtual world and the real world share a common space in which digital objects and real objects coexist and their data can interact. MR technology progressively removes the media and barriers between perception, understanding, and information, continually reshaping how people relate to and operate on information. Mixed reality is a new visual environment generated by combining the real and virtual worlds and is a further development of virtual reality technology; it enhances the realism of the user experience by presenting virtual scene information in a real scene and building an interactive feedback loop among the real world, the virtual world, and the user. However, the prior art suffers from the technical problem that material matching for MR mixed reality three-dimensional scenes is not sufficiently intelligent or accurate.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a method for generating an MR mixed reality three-dimensional scene material library, wherein the method comprises the following steps: obtaining a first material; obtaining historical data of three-dimensional scene matching actions corresponding to the first material based on big data; training a neural network model according to the first material and historical data of the three-dimensional scene matching action corresponding to the first material, and constructing a three-dimensional scene matching evaluation model; obtaining a first three-dimensional scene matching action; obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action and the three-dimensional scene matching evaluation model; performing emotion analysis on the first evaluation result to obtain a first emotion score; obtaining a first guidance matching direction according to the first emotion score; optimizing according to the first guide matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action; based on the evaluation result, obtaining an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action and the three-dimensional scene matching evaluation model in sequence, wherein N is a positive integer; obtaining an Nth mapping relation of the first material and an Nth three-dimensional scene matching action; and constructing a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relation.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a method for generating an MR mixed reality three-dimensional scene material library, where the method includes:
step S100: obtaining a first material;
step S200: obtaining historical data of three-dimensional scene matching actions corresponding to the first material based on big data;
specifically, the first material is a material raw material for building a three-dimensional scene, which is collected, unprocessed, perceptual and dispersed raw material, the three-dimensional scene matching action refers to data of matching action once applied to the first material in the mixed reality three-dimensional scene, the matching action applied to the scene of the first material is obtained in the process of building the three-dimensional scene, and historical data of the three-dimensional scene matching action corresponding to the first material is obtained according to the obtaining result. Through the acquisition of the historical data, a foundation is tamped for subsequently constructing a verified matching evaluation model of the three scenes.
Step S300: training a neural network model according to the first material and historical data of the three-dimensional scene matching action corresponding to the first material, and constructing a three-dimensional scene matching evaluation model;
specifically, the three-dimensional scene matching evaluation model is a model for three-dimensional scene evaluation, the model is an intelligent model based on a neural network, and the model can be used for more accurately judging input data through training of a large amount of basic data. Further, the training process includes input data and supervised data, the learning process is essentially a supervised learning process, the three-dimensional scene matching evaluation model is continuously self-corrected and adjusted through the supervised data, and the supervised learning of the model is ended until the three-dimensional scene matching evaluation model is in a convergence state, so that the technical effect of obtaining a more accurate output result is achieved.
Step S400: obtaining a first three-dimensional scene matching action;
step S500: obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action and the three-dimensional scene matching evaluation model;
specifically, the first three-dimensional scene matching action is used as a certain action in the history data of the three-dimensional scene matching action, according to the first three-dimensional scene matching action, the first material obtains an evaluation result of the first three-dimensional scene matching action and the first material, the evaluation result is an evaluation result of a matching degree of the first three-dimensional scene matching action and the first material, and the evaluation result is obtained by inputting the first material and the first three-dimensional scene matching action into the three-dimensional scene matching evaluation model to obtain the first evaluation result.
Step S600: performing emotion analysis on the first evaluation result to obtain a first emotion score;
step S700: obtaining a first guidance matching direction according to the first emotion score;
specifically, the emotion analysis is a score evaluation result of the first evaluation result, and refers to a process of identifying and extracting subjective information in the evaluation result in the original material by methods such as natural language processing, text mining, and computer language, so as to obtain an emotion evaluation result of the first evaluation result, where the emotion evaluation result includes the first emotion score. And obtaining the first guidance matching direction based on the first emotion score, further to say, constructing a corresponding matching direction set according to the emotion score, obtaining a direction corresponding to the emotion score based on the obtained emotion score, and adjusting the three-dimensional scene matching action corresponding to the evaluation result based on the direction.
Step S800: optimizing according to the first guide matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action;
step S900: based on the evaluation result, obtaining an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action and the three-dimensional scene matching evaluation model in sequence, wherein N is a positive integer;
step S1000: obtaining an Nth mapping relation of the first material and an Nth three-dimensional scene matching action;
step S1100: and constructing a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relation.
Specifically, the first three-dimensional scene matching action is optimized according to the first guidance matching direction to obtain the second three-dimensional scene matching action, and the process repeats: the first material is evaluated in turn against each successive three-dimensional scene matching action, and mapping relations are constructed between the first material and the first, second, …, Nth three-dimensional scene matching actions, where N is a positive integer, yielding the first, second, …, Nth mapping relations. From these construction results, the first mixed reality three-dimensional scene material state sub-library of the first material is built, and materials are matched intelligently based on the first mixed reality three-dimensional scene material state sub-library, achieving the technical effect of more accurate matching.
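The evaluate-optimize loop of steps S800 to S1100 could be sketched as follows, with `evaluate` and `optimize` standing in for the evaluation model and the guidance-direction optimization; the function name and the (material, action, score) data layout are assumptions.

```python
def build_state_sublibrary(material, first_action, evaluate, optimize, n_rounds):
    """Iterate matching actions 1..N, recording material-action mapping relations."""
    mappings = []
    action = first_action
    for _ in range(n_rounds):
        score = evaluate(material, action)      # Nth evaluation result
        mappings.append((material, action, score))  # Nth mapping relation
        action = optimize(action, score)        # next matching action
    return mappings
```

The returned list of mapping relations is the raw content from which the material state sub-library would be built.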
Further, the embodiment of the present application further includes:
step 1210: obtaining a first expected matching state of the first material;
step S1220: inputting the first expected matching state into the first mixed reality three-dimensional scene material state sub-library for searching to obtain an Mth state and a first probability of the Mth state;
step S1230: determining whether the first probability satisfies the first expected probability;
step S1240: and if the first probability meets the first expected probability, obtaining an Mth three-dimensional scene matching action corresponding to the Mth state.
Specifically, the first expected matching state is a set target matching state. The first mixed reality three-dimensional scene material state sub-library contains a Markov chain state sequence, that is, the set of all states in the chain, the "state space". The sub-library is searched according to the first expected matching state to obtain the set of states satisfying it, together with the occurrence probability corresponding to each state in the set. For each such state a probability matching calculation is performed: whether its probability satisfies the first expected probability is judged, and the Mth state whose first probability satisfies the first expected probability is obtained, together with the Mth three-dimensional scene matching action corresponding to that state. Through this probability calculation and matching, the obtained three-dimensional scene matching action better fits the requirement, further achieving the technical effect of intelligently matching scene materials.
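A minimal sketch of the state lookup in steps S1210 to S1240, assuming the sub-library is stored as (state, probability, matching action) entries; the layout and function name are hypothetical.

```python
def find_matching_action(sublibrary, expected_state, expected_prob):
    """sublibrary: iterable of (state, probability, matching_action) entries.

    Returns the matching action of the first state that equals the expected
    state and whose probability meets the expected probability, else None.
    """
    for state, prob, action in sublibrary:
        if state == expected_state and prob >= expected_prob:
            return action   # the Mth three-dimensional scene matching action
    return None             # no state satisfies the first expected probability
```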
Further, the embodiment of the present application further includes:
step 1310: obtaining a material data set based on the big data;
step S1320: obtaining a first characteristic, a second characteristic and a third characteristic of material data in the material data set;
step S1330: constructing a decision tree of the material data set according to the first characteristic, the second characteristic and the third characteristic;
step S1340: and classifying and storing all material data in the material data set through a decision tree of the material data set to construct a three-dimensional scene material library.
Specifically, the decision tree is an algorithm for classification and regression; by training on, analyzing, and preparing data before testing and use, it obtains more accurate classification results. A set of material data is obtained through big data, and the first, second, and third features of the material data in the set are obtained: for example, the first feature may be an attribute feature, the second a color feature, and the third a shape feature of the material data. The features that characterize the material are evaluated, a decision tree of the material data set is built from the evaluated feature information, and all material data in the set are classified and stored through the decision tree to build the three-dimensional scene material library. A decision tree is a graphical method for intuitively applying probability analysis: on the basis of the known occurrence probabilities of various conditions, the tree is constructed to evaluate risk and judge feasibility, and the resulting classifier, consisting of a root node, internal nodes, and leaf nodes, can correctly classify newly appearing objects.
The first, second, and third features serve as internal nodes of the decision tree. By computing the information entropy of the internal nodes, the feature with the minimum entropy value is classified preferentially, and the tree is constructed recursively in this way until the final feature leaf nodes cannot be subdivided, at which point classification ends and the decision tree of the material data set is formed. With the three-dimensional scene material library constructed on this decision tree, inputting the corresponding features into the tree quickly and accurately matches a more appropriate material scheme, achieving the technical effect of more intelligent and accurate material matching.
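A small, self-contained sketch of such an entropy-driven decision tree over material records: following the text, the feature whose value distribution has the minimum entropy is split first, and the tree is built recursively until leaves cannot be subdivided. The feature names and records are invented for illustration.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of a feature's value distribution."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def build_tree(records, features):
    """records: list of (feature_dict, label). Returns a nested dict tree."""
    labels = [label for _, label in records]
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # leaf node
    # Split preferentially on the minimum-entropy feature.
    best = min(features, key=lambda f: entropy([r[f] for r, _ in records]))
    rest = [f for f in features if f != best]
    tree = {"feature": best, "children": {}}
    for value in {r[best] for r, _ in records}:
        subset = [(r, l) for r, l in records if r[best] == value]
        tree["children"][value] = build_tree(subset, rest)
    return tree

def classify(tree, record):
    """Route a material record from root to a leaf label."""
    while isinstance(tree, dict):
        tree = tree["children"][record[tree["feature"]]]
    return tree
```

In the patent's setting the records would be material data with attribute, color, and shape features, and the leaf labels the storage classes of the material library.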
Further, step S1330 of constructing a decision tree of the material data set according to the first feature, the second feature, and the third feature further includes:
Step S1331: performing an information theory encoding operation on the first feature to obtain a first feature information entropy, on the second feature to obtain a second feature information entropy, and on the third feature to obtain a third feature information entropy;
Step S1332: inputting the first feature information entropy, the second feature information entropy, and the third feature information entropy into a sequential ordering model to obtain first root node feature information;
Step S1333: constructing a decision tree of the material data set based on the first root node feature information and the material data set.
Specifically, an information-theoretic coding computation is performed on the first feature, the second feature and the third feature to obtain the feature information entropy corresponding to each feature. To construct the decision tree, the information entropy of each of the three features is calculated with the Shannon formula from information theory coding, yielding the corresponding first feature information entropy, second feature information entropy and third feature information entropy. Further, based on the sequential ordering model, the magnitudes of the three entropies are compared to obtain the feature with the minimum entropy value, that is, the first root node feature information. By classifying the minimum-entropy feature preferentially and then classifying the remaining features with a recursive algorithm in order of entropy from small to large, the decision tree of the material data set is finally constructed, realizing its specific construction.
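The entropy calculation and root-node selection described above can be sketched as follows; the feature columns are a small hypothetical example, while the Shannon formula and minimum-entropy ordering follow the text:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon formula: H = -sum p_i * log2(p_i) over the empirical distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical feature columns for a handful of material records.
features = {
    "attribute": ["static", "static", "static", "dynamic"],
    "color":     ["red", "green", "blue", "red"],
    "shape":     ["cube", "cube", "cube", "cube"],
}

# The "sequential ordering model" step: order the features by entropy and
# take the smallest as the first root-node feature information.
root_feature = min(features, key=lambda f: shannon_entropy(features[f]))
print(root_feature)  # shape — a single-valued column has entropy 0
```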
Further, the embodiment of the present application further includes:
step S1410: obtaining a first voice material;
step S1420: performing semantic extraction on the first voice material to obtain first semantic feature information;
step S1430: and taking the first semantic feature information as a convolution feature, performing convolution comparison in the three-dimensional scene material library to obtain a second material, wherein the second material is a material meeting the requirements of the first voice material in the three-dimensional scene material library.
Specifically, the first voice material is a voice material stating a requirement, that is, it carries the requirement information about the material, including detailed requirements. Semantic features are extracted from the first voice material to obtain its first semantic feature information. Using the first semantic feature information as a convolution feature, convolution comparison is performed in the three-dimensional scene material library to obtain the convolution result of the first semantic feature against the material library; the convolution results are then sorted and screened by value, where a larger convolution result means the material features are closer to the first semantic feature information, and the material meeting the requirements of the first voice material is obtained from the three-dimensional scene material library according to the sorting result. By analyzing the first voice material, extracting the features of the voice material, traversing the convolution features of the three-dimensional scene material library based on the extraction result, and screening the feature matching conditions according to the traversal result, a more accurate material matching result for the first semantic features is obtained.
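The convolution comparison and sorting step can be sketched as a dot-product score between the semantic feature vector and each material's feature vector, with the largest score taken as the closest match. The vectors and material names below are illustrative assumptions:

```python
# Sketch: score each material by a dot product between its feature vector and
# the semantic feature vector used as a "convolution kernel"; a larger score
# means the material is closer to the first semantic feature information.
# All vectors and names here are hypothetical.
library = {
    "stone_bridge": [0.9, 0.1, 0.3],
    "wooden_hut":   [0.2, 0.8, 0.1],
    "river_plane":  [0.1, 0.2, 0.9],
}

def match(semantic_vec, library):
    """Return the best-scoring material and the full score table."""
    scores = {name: sum(a * b for a, b in zip(semantic_vec, vec))
              for name, vec in library.items()}
    return max(scores, key=scores.get), scores

best, scores = match([0.0, 0.1, 1.0], library)
print(best)  # river_plane
```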
Further, the embodiment of the present application further includes:
step S1440: obtaining a first voice mapping relation according to the first voice material and the second material, wherein the first voice mapping relation is a mapping relation between the first voice material and the second material;
step S1450: and inputting a voice three-dimensional scene material conversion model by taking all material data in the three-dimensional scene material library as training data and the first voice mapping relation as supervision data to obtain a voice three-dimensional scene material library.
Specifically, a mapping is a relationship in which the elements of two sets correspond to each other. A mapping relationship between the first voice material and the second material is constructed from the obtained first voice material and second material. Further, this logical mapping relationship between the voice material and its corresponding material is used as supervision data for supervised learning of the voice three-dimensional scene material conversion model: all material data in the three-dimensional scene material library are used as training data for supervised learning, and label correction is then performed on the materials that have a mapping relationship, so that a more accurate voice three-dimensional scene material library is obtained.
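The use of the voice mapping relation as supervision data for label correction might be sketched as follows; the mapping table and record layout are assumptions for illustration only:

```python
# Sketch: the first voice mapping relation (voice phrase -> material) supplies
# supervision labels, and "label correction" writes those labels onto the
# matching training records. All names and phrases are hypothetical.
voice_mapping = {"a river under a bridge": "stone_bridge"}

training_data = [
    {"material": "stone_bridge", "label": None},
    {"material": "wooden_hut",   "label": None},
]

# Label correction: any material appearing in the mapping gets its label
# set from the supervising voice phrase; unmapped materials stay unlabeled.
for phrase, material in voice_mapping.items():
    for record in training_data:
        if record["material"] == material:
            record["label"] = phrase

print([r["label"] for r in training_data])  # ['a river under a bridge', None]
```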
Further, the obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action, and the three-dimensional scene matching evaluation model, in step S500 in this embodiment of the present application, further includes:
step S510: inputting the first material and the first three-dimensional scene matching action as input data into the three-dimensional scene matching evaluation model;
step S520: the three-dimensional scene matching evaluation model is obtained through training of multiple groups of data, and each group of data in the multiple groups of data comprises the first material, the first three-dimensional scene matching action and identification information for identifying the first evaluation result;
step S530: and obtaining output information of the three-dimensional scene matching evaluation model, wherein the output information comprises the first evaluation result.
Specifically, the three-dimensional scene matching evaluation model is a neural network model in machine learning that can learn and adjust continuously; it is a highly complex nonlinear dynamical learning system. In brief, the three-dimensional scene matching evaluation model is a mathematical model: after being trained to a convergence state on a large amount of training data, the model can produce the first evaluation result by analyzing the input data.
Furthermore, the training process further includes a supervised learning process. Each group of supervised data includes the first material, the first three-dimensional scene matching action and identification information identifying the first evaluation result. The first material and the first three-dimensional scene matching action are input into the neural network model, and supervised learning is performed on the three-dimensional scene matching evaluation model according to the identification information, so that the output data of the model become consistent with the supervised data. The neural network model continuously self-corrects and adjusts until its output result is consistent with the identification information; supervised learning on that group of data then ends, and supervised learning on the next group of data begins. When the neural network model reaches a convergence state, the supervised learning process ends. Through supervised learning, the model can process the input information more accurately and obtain a more accurate and reasonable first evaluation result.
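The supervised self-correction loop described above can be sketched with a one-parameter stand-in for the neural network; the learning rate, tolerance and sample values are illustrative assumptions:

```python
# Sketch of the supervised loop: for each (input, identification-information)
# pair, compare the model output to the label and self-correct; stop when the
# outputs are consistent with the supervision data (convergence). The linear
# one-parameter "model" is a stand-in for the neural network.
def train(samples, lr=0.1, tol=1e-3, max_epochs=1000):
    w = 0.0                                   # single model parameter
    for _ in range(max_epochs):
        worst = 0.0
        for x, label in samples:              # label = identification info
            pred = w * x
            err = pred - label
            w -= lr * err * x                 # self-correction step
            worst = max(worst, abs(err))
        if worst < tol:                       # output consistent with labels
            break
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])
print(round(w, 2))  # 2.0
```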
To sum up, the method and the system for generating the MR mixed reality three-dimensional scene material library provided by the embodiment of the application have the following technical effects:
1. The method comprises: obtaining a first material; obtaining, based on big data, historical data of three-dimensional scene matching actions corresponding to the first material; training a neural network model according to the first material and the historical data of the three-dimensional scene matching actions corresponding to the first material, and constructing a three-dimensional scene matching evaluation model; obtaining a first three-dimensional scene matching action; obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action and the three-dimensional scene matching evaluation model; performing emotion analysis on the first evaluation result to obtain a first emotion score; obtaining a first guidance matching direction based on the first emotion score; optimizing based on the first guidance matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action; obtaining, in sequence, an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action and the three-dimensional scene matching evaluation model, where N is a positive integer; constructing corresponding mapping relations based on the first material and the matching actions of the different three-dimensional scenes; and constructing a first mixed reality three-dimensional scene material state sub-library of the first material based on the first material and the mapping relations. Materials are then matched intelligently based on the first mixed reality three-dimensional scene material state sub-library, achieving the technical effect of more accurate matching.
2. Because the three-dimensional scene material library is constructed based on a decision tree, inputting the corresponding features into the decision tree can quickly and accurately match a suitable material scheme, achieving the technical effect of more intelligent and accurate material matching.
Example two
Based on the same inventive concept as the method for generating the MR mixed reality three-dimensional scene material library in the foregoing embodiment, the present invention further provides a system for generating the MR mixed reality three-dimensional scene material library, as shown in fig. 2, the system includes:
a first obtaining unit 11, wherein the first obtaining unit 11 is used for obtaining a first material;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain, based on big data, history data of a three-dimensional scene matching action corresponding to the first material;
the first construction unit 13 is configured to train a neural network model according to the first material and historical data of three-dimensional scene matching actions corresponding to the first material, and construct a three-dimensional scene matching evaluation model;
a third obtaining unit 14, wherein the third obtaining unit 14 is configured to obtain a first three-dimensional scene matching action;
a fourth obtaining unit 15, where the fourth obtaining unit 15 is configured to obtain a first evaluation result according to the first material, the first three-dimensional scene matching action, and the three-dimensional scene matching evaluation model;
a fifth obtaining unit 16, where the fifth obtaining unit 16 is configured to perform emotion analysis on the first evaluation result to obtain a first emotion score;
a sixth obtaining unit 17, where the sixth obtaining unit 17 is configured to obtain a first guidance matching direction according to the first emotion score;
a seventh obtaining unit 18, where the seventh obtaining unit 18 is configured to perform optimization according to the first guidance matching direction and the first three-dimensional scene matching action, and obtain a second three-dimensional scene matching action;
an eighth obtaining unit 19, where the eighth obtaining unit 19 is configured to obtain, based on the obtained evaluation result, an Nth evaluation result in sequence according to the first material, the Nth three-dimensional scene matching action, and the three-dimensional scene matching evaluation model, where N is a positive integer;
a ninth obtaining unit 20, where the ninth obtaining unit 20 is configured to obtain an Nth mapping relationship between the first material and an Nth three-dimensional scene matching action;
a second constructing unit 21, where the second constructing unit 21 is configured to construct a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relationship.
Further, the system further comprises:
a tenth obtaining unit, configured to obtain a first expected matching state of the first material;
an eleventh obtaining unit, configured to input the first expected matching state into the first mixed reality three-dimensional scene material state sub-library for searching, and obtain an Mth state and a first probability of occurrence of the Mth state;
a first judging unit configured to judge whether the first probability satisfies the first expected probability;
a twelfth obtaining unit, configured to obtain an Mth three-dimensional scene matching action corresponding to the Mth state if the first probability satisfies the first expected probability.
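The lookup performed by the eleventh and twelfth obtaining units might be sketched as follows; the state names, probabilities and matching actions are illustrative assumptions:

```python
# Sketch: search the state sub-library for the expected matching state, read
# the recorded occurrence probability, and return the matching action only if
# that probability satisfies the expected probability. All entries are
# hypothetical examples.
state_sub_library = {
    "anchored_on_wall": {"probability": 0.85, "action": "align_to_wall_plane"},
    "floating":         {"probability": 0.10, "action": "free_place"},
}

def lookup(expected_state, expected_probability):
    entry = state_sub_library.get(expected_state)
    if entry and entry["probability"] >= expected_probability:
        return entry["action"]      # the M-th three-dimensional scene matching action
    return None                     # probability not satisfied, or state unknown

print(lookup("anchored_on_wall", 0.8))  # align_to_wall_plane
```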
Further, the system further comprises:
a thirteenth obtaining unit configured to obtain a material data set based on the big data;
a fourteenth obtaining unit, configured to obtain a first feature, a second feature, and a third feature of the material data in the material data set;
a third construction unit, configured to construct a decision tree of the material data set according to the first feature, the second feature, and the third feature;
and the fourth construction unit is used for classifying and storing all the material data in the material data set through a decision tree of the material data set to construct a three-dimensional scene material library.
Further, the system further comprises:
a fifteenth obtaining unit, configured to perform information theory encoding operation on the first feature to obtain a first feature information entropy, perform information theory encoding operation on the second feature to obtain a second feature information entropy, and perform information theory encoding operation on the third feature to obtain a third feature information entropy;
a sixteenth obtaining unit, configured to input the first feature information entropy, the second feature information entropy, and the third feature information entropy into a sequential ordering model, and obtain first root node feature information;
a fifth construction unit, configured to construct a decision tree of the material data set based on the first root node feature information and the material data set.
Further, the system further comprises:
a seventeenth obtaining unit configured to obtain a first speech material;
an eighteenth obtaining unit, configured to perform semantic extraction on the first voice material to obtain first semantic feature information;
a nineteenth obtaining unit, configured to perform convolution comparison in the three-dimensional scene material library by using the first semantic feature information as a convolution feature, so as to obtain a second material, where the second material is a material in the three-dimensional scene material library that meets the requirement of the first speech material.
Further, the system further comprises:
a twentieth obtaining unit, configured to obtain a first voice mapping relationship according to the first voice material and the second material, where the first voice mapping relationship is a mapping relationship between the first voice material and the second material;
and the first input unit is used for inputting a voice three-dimensional scene material conversion model by taking all material data in the three-dimensional scene material library as training data and the first voice mapping relation as supervision data to obtain a voice three-dimensional scene material library.
Further, the system further comprises:
a second input unit configured to input the first material and the first three-dimensional scene matching action as input data into the three-dimensional scene matching evaluation model;
a twenty-second obtaining unit, configured to obtain the three-dimensional scene matching evaluation model through training with multiple groups of data, where each group of data in the multiple groups of data includes the first material, the first three-dimensional scene matching action, and identification information for identifying the first evaluation result;
a twenty-third obtaining unit configured to obtain output information of the three-dimensional scene matching evaluation model, where the output information includes the first evaluation result.
Various changes and specific examples of the method for generating the MR mixed-reality three-dimensional scene material library in the first embodiment of fig. 1 are also applicable to the system for generating the MR mixed-reality three-dimensional scene material library of the present embodiment. Through the foregoing detailed description of the method, those skilled in the art can clearly know how to implement the system of the present embodiment, so for brevity of the description, details are not repeated here.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 3.
Fig. 3 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the inventive concept of the method for generating the MR mixed reality three-dimensional scene material library in the foregoing embodiment, the invention further provides an electronic device on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the foregoing methods for generating the MR mixed reality three-dimensional scene material library.
In fig. 3, a bus architecture (represented by bus 300) is shown. Bus 300 may include any number of interconnected buses and bridges linking together various circuits, including one or more processors represented by processor 302 and memory represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore will not be described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
The embodiment of the invention provides a method for generating an MR mixed reality three-dimensional scene material library, wherein the method comprises the following steps: obtaining a first material; obtaining historical data of three-dimensional scene matching actions corresponding to the first material based on big data; training a neural network model according to the first material and historical data of the three-dimensional scene matching action corresponding to the first material, and constructing a three-dimensional scene matching evaluation model; obtaining a first three-dimensional scene matching action; obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action and the three-dimensional scene matching evaluation model; performing emotion analysis on the first evaluation result to obtain a first emotion score; obtaining a first guidance matching direction according to the first emotion score; optimizing according to the first guide matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action; based on the evaluation result, obtaining an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action and the three-dimensional scene matching evaluation model in sequence, wherein N is a positive integer; obtaining an Nth mapping relation of the first material and an Nth three-dimensional scene matching action; and constructing a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relation. 
This solves the technical problem in the prior art that material matching for MR mixed reality three-dimensional scenes is not intelligent and accurate enough, realizes intelligent construction of a mixed reality three-dimensional scene material state sub-library in deep combination with scene information, and achieves the technical effect of improving material matching accuracy.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A method for generating a MR mixed reality three-dimensional scene material library, wherein the method comprises:
obtaining a first material;
obtaining historical data of three-dimensional scene matching actions corresponding to the first material based on big data;
training a neural network model according to the first material and historical data of the three-dimensional scene matching action corresponding to the first material, and constructing a three-dimensional scene matching evaluation model;
obtaining a first three-dimensional scene matching action;
obtaining a first evaluation result according to the first material, the first three-dimensional scene matching action and the three-dimensional scene matching evaluation model;
performing emotion analysis on the first evaluation result to obtain a first emotion score;
obtaining a first guidance matching direction according to the first emotion score;
optimizing according to the first guide matching direction and the first three-dimensional scene matching action to obtain a second three-dimensional scene matching action;
based on the evaluation result, obtaining an Nth evaluation result according to the first material, the Nth three-dimensional scene matching action and the three-dimensional scene matching evaluation model in sequence, wherein N is a positive integer;
obtaining an Nth mapping relation of the first material and an Nth three-dimensional scene matching action;
and constructing a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relation.
2. The method of claim 1, wherein the method comprises:
obtaining a first expected matching state of the first material;
inputting the first expected matching state into the first mixed reality three-dimensional scene material state sub-library for searching to obtain an Mth state and a first probability of the Mth state;
determining whether the first probability satisfies the first expected probability;
and if the first probability meets the first expected probability, obtaining an Mth three-dimensional scene matching action corresponding to the Mth state.
3. The method of claim 1, wherein the method comprises:
obtaining a material data set based on the big data;
obtaining a first characteristic, a second characteristic and a third characteristic of material data in the material data set;
constructing a decision tree of the material data set according to the first characteristic, the second characteristic and the third characteristic;
and classifying and storing all material data in the material data set through a decision tree of the material data set to construct a three-dimensional scene material library.
4. The method of claim 3, wherein the constructing a decision tree for a material data set based on the first feature, the second feature and the third feature comprises:
performing information theory encoding operation on the first characteristic to obtain a first characteristic information entropy, performing information theory encoding operation on the second characteristic to obtain a second characteristic information entropy, and performing information theory encoding operation on the third characteristic to obtain a third characteristic information entropy;
inputting the first feature information entropy, the second feature information entropy and the third feature information entropy into a sequential ordering model to obtain first root node feature information;
and constructing a decision tree of the material data set based on the first root node characteristic information and the material data set.
5. The method of claim 1, wherein the method comprises:
obtaining a first voice material;
performing semantic extraction on the first voice material to obtain first semantic feature information;
and taking the first semantic feature information as a convolution feature, performing convolution comparison in the three-dimensional scene material library to obtain a second material, wherein the second material is a material meeting the requirements of the first voice material in the three-dimensional scene material library.
6. The method of claim 5, wherein the method comprises:
obtaining a first voice mapping relation according to the first voice material and the second material, wherein the first voice mapping relation is a mapping relation between the first voice material and the second material;
and inputting a voice three-dimensional scene material conversion model by taking all material data in the three-dimensional scene material library as training data and the first voice mapping relation as supervision data to obtain a voice three-dimensional scene material library.
7. The method of claim 1, wherein said obtaining a first evaluation result based on said first material, said first three-dimensional scene matching action and said three-dimensional scene matching evaluation model comprises:
inputting the first material and the first three-dimensional scene matching action as input data into the three-dimensional scene matching evaluation model;
the three-dimensional scene matching evaluation model is obtained through training of multiple groups of data, and each group of data in the multiple groups of data comprises the first material, the first three-dimensional scene matching action and identification information for identifying the first evaluation result;
and obtaining output information of the three-dimensional scene matching evaluation model, wherein the output information comprises the first evaluation result.
8. An MR mixed reality three-dimensional scene material library generation system, wherein the system comprises:
a first obtaining unit configured to obtain a first material;
a second obtaining unit, configured to obtain, based on big data, history data of a three-dimensional scene matching action corresponding to the first material;
the first construction unit is used for training a neural network model according to the first material and historical data of three-dimensional scene matching actions corresponding to the first material to construct a three-dimensional scene matching evaluation model;
a third obtaining unit configured to obtain a first three-dimensional scene matching action;
a fourth obtaining unit, configured to obtain a first evaluation result according to the first material, the first three-dimensional scene matching action, and the three-dimensional scene matching evaluation model;
a fifth obtaining unit, configured to perform emotion analysis on the first evaluation result to obtain a first emotion score;
a sixth obtaining unit, configured to obtain a first guidance matching direction according to the first emotion score;
a seventh obtaining unit, configured to perform optimization according to the first guidance matching direction and the first three-dimensional scene matching action, and obtain a second three-dimensional scene matching action;
an eighth obtaining unit, configured to obtain an Nth evaluation result in sequence according to the first material, the Nth three-dimensional scene matching action, and the three-dimensional scene matching evaluation model, where N is a positive integer;
a ninth obtaining unit, configured to obtain an Nth mapping relationship between the first material and the Nth three-dimensional scene matching action;
and a second construction unit, configured to construct a first mixed reality three-dimensional scene material state sub-library of the first material according to the first material and the Nth mapping relationship.
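Units five through eight describe an iterative loop: score the evaluation result by sentiment analysis, derive a guidance matching direction, refine the matching action, and record each mapping into the material's state sub-library. A hedged sketch of that loop follows; the `TARGET` vector, the word-list sentiment score, and the convergence threshold are all invented for illustration and are not taken from the patent.

```python
TARGET = [1.0, 0.0]  # hypothetical ideal matching action for this material

def evaluate(material, action):
    # Stand-in for the three-dimensional scene matching evaluation model:
    # returns a short textual evaluation result.
    err = sum((a - t) ** 2 for a, t in zip(action, TARGET))
    return "good natural match" if err < 0.01 else "poor unnatural fit"

def sentiment_score(text):
    # Toy sentiment (emotion) analysis of the evaluation text: the share
    # of positively connoted words plays the role of the emotion score.
    positive = {"good", "natural", "match"}
    words = text.split()
    return sum(w in positive for w in words) / len(words)

def guidance_direction(action):
    # Guidance matching direction: the direction expected to raise the score.
    return [t - a for t, a in zip(TARGET, action)]

def build_sub_library(material, action, rounds=10, step=0.5):
    """Iterate: evaluate the Nth action, score the result, and refine the
    action along the guidance direction until the evaluation is fully
    positive; every (N, action) mapping goes into the material's sub-library."""
    mappings = []
    for n in range(1, rounds + 1):
        result = evaluate(material, action)            # Nth evaluation result
        mappings.append((n, list(action)))             # Nth mapping relation
        if sentiment_score(result) == 1.0:             # wording fully positive
            break
        action = [a + step * d
                  for a, d in zip(action, guidance_direction(action))]
    return {material: mappings}
```

Starting from a zero action, the loop halves the distance to `TARGET` each round and stops once the toy evaluator's wording turns entirely positive, leaving the intermediate (N, action) mappings as the material's state sub-library.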
9. An MR mixed reality three-dimensional scene material library generation system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-7 when executing the program.
CN202110571672.8A 2021-05-25 2021-05-25 MR mixed reality three-dimensional scene material library generation method and system Active CN113284256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110571672.8A CN113284256B (en) 2021-05-25 2021-05-25 MR mixed reality three-dimensional scene material library generation method and system


Publications (2)

Publication Number Publication Date
CN113284256A true CN113284256A (en) 2021-08-20
CN113284256B CN113284256B (en) 2023-10-31

Family

ID=77281716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110571672.8A Active CN113284256B (en) MR mixed reality three-dimensional scene material library generation method and system

Country Status (1)

Country Link
CN (1) CN113284256B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102752540A (en) * 2011-12-30 2012-10-24 新奥特(北京)视频技术有限公司 Automatic categorization method based on face recognition technology
US20170177175A1 (en) * 2015-12-21 2017-06-22 Ming-Chang Lai System and method for editing and generating multimedia contents according to digital playbooks
CN109887095A (en) * 2019-01-22 2019-06-14 华南理工大学 A kind of emotional distress virtual reality scenario automatic creation system and method
CN110020200A (en) * 2019-03-15 2019-07-16 微梦创科网络科技(中国)有限公司 A kind of personalized recommendation method and system based on history material
CN111063037A (en) * 2019-12-30 2020-04-24 北京中网易企秀科技有限公司 Three-dimensional scene editing method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO JIANGLIN et al.: "Expression animation generation method based on a dynamic template library", China New Technologies and New Products, vol. 20, pages 19-21 *

Also Published As

Publication number Publication date
CN113284256B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
US10558195B2 (en) Methods and apparatus for machine learning predictions of manufacture processes
US20170330078A1 (en) Method and system for automated model building
CN108563703A (en) A kind of determination method of charge, device and computer equipment, storage medium
CN113361680A (en) Neural network architecture searching method, device, equipment and medium
CN111127246A (en) Intelligent prediction method for transmission line engineering cost
CN115797606B (en) 3D virtual digital human interaction action generation method and system based on deep learning
CN103324954A (en) Image classification method based on tree structure and system using same
CN110737805B (en) Method and device for processing graph model data and terminal equipment
CN111178399A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN112036483B (en) AutoML-based object prediction classification method, device, computer equipment and storage medium
Xu et al. Fusing complete monotonic decision trees
US20230351655A1 (en) Automatic design-creating artificial neural network device and method, using ux-bits
Madhavi et al. Multivariate deep causal network for time series forecasting in interdependent networks
Zhong et al. Construction project risk prediction model based on EW-FAHP and one dimensional convolution neural network
CN116302088B (en) Code clone detection method, storage medium and equipment
CN113284256B (en) MR mixed reality three-dimensional scene material library generation method and system
Xu et al. Enhancement economic system based-graph neural network in stock classification
CN116503158A (en) Enterprise bankruptcy risk early warning method, system and device based on data driving
US20230047145A1 (en) Quantum simulation
CN114969511A (en) Content recommendation method, device and medium based on fragments
CN111797989B (en) Intelligent process recommendation method based on knowledge
CN114092057A (en) Project model construction method and device, terminal equipment and storage medium
CN113065321A (en) User behavior prediction method and system based on LSTM model and hypergraph
CN114254199A (en) Course recommendation method based on bipartite graph projection and node2vec
CN112307288A (en) User clustering method for multiple channels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant