CN111814812A - Modeling method, modeling device, storage medium, electronic device and scene recognition method

Info

Publication number
CN111814812A
Authority
CN
China
Prior art keywords
scene
parameter
relation
ontologies
data
Prior art date
Legal status
Pending
Application number
CN201910282194.1A
Other languages
Chinese (zh)
Inventor
何明
陈仲铭
黄粟
刘耀勇
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282194.1A
Publication of CN111814812A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2178 Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The embodiments of the present application disclose a modeling method, a modeling device, a storage medium, an electronic device, and a scene recognition method. The method comprises the following steps: acquiring data of multiple dimensions; extracting a plurality of parameter ontologies from the data of multiple dimensions, and constructing a first relation among the parameter ontologies; constructing a second relation between the plurality of parameter ontologies and preset scene categories; and performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation, and the second relation to obtain a scene recognition model for recognizing scene categories. The scheme can comprehensively utilize multi-dimensional information, so that the finally recognized scene category is more robust and more accurate; meanwhile, the personalized panoramic view recognition model constructed with knowledge graph technology enables the finally recognized scene category to better reflect the user's preferences, making recognition more personalized, which is beneficial to providing more personalized intelligent services for the user.

Description

Modeling method, modeling device, storage medium, electronic device and scene recognition method
Technical Field
The present application relates to the field of electronic devices, and in particular, to a modeling method, apparatus, storage medium, electronic device, and scene recognition method.
Background
With the development of electronic technology, electronic devices such as smart phones have become more and more intelligent. The electronic device may perform data processing through various algorithmic models to provide various functions to the user. For example, the electronic device may learn behavior characteristics of the user according to the algorithm model, thereby providing personalized services to the user.
Disclosure of Invention
The embodiment of the application provides a modeling method, a modeling device, a storage medium, an electronic device, and a scene recognition method, which can improve the robustness, accuracy, and personalization of scene category recognition.
In a first aspect, an embodiment of the present application provides a modeling method, including:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
extracting a plurality of parameter ontologies from the data of the plurality of dimensions, and constructing a first relation among the parameter ontologies;
constructing a second relation between the parameter ontologies and a preset scene category;
and performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories.
In a second aspect, an embodiment of the present application further provides a scene identification method, including:
acquiring data of multiple dimensions in a current scene;
and processing the data of the multiple dimensions according to a pre-trained scene recognition model so as to recognize the current scene category, wherein the scene recognition model is obtained by performing model training according to parameter ontologies extracted from data of multiple dimensions under different scenes, a first relation among the parameter ontologies, a preset scene category, and a second relation between the parameter ontologies and the preset scene category.
In a third aspect, an embodiment of the present application further provides a modeling apparatus, including:
an obtaining module, configured to obtain data of multiple dimensions, where the data of multiple dimensions at least includes: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
the first construction module is used for extracting a plurality of parameter ontologies from the data of the plurality of dimensions and constructing a first relation among the parameter ontologies;
the second construction module is used for constructing a second relation between the parameter ontologies and the preset scene categories;
and the processing module is used for carrying out model training according to the parameter ontologies, the preset scene type, the first relation and the second relation to obtain a scene identification model for identifying the scene type.
In a fourth aspect, embodiments of the present application further provide a storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the modeling method or the scene recognition method.
In a fifth aspect, an embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the modeling method or the scene recognition method when executing the program.
According to the modeling method provided by the embodiment of the application, data of multiple dimensions are acquired; a plurality of parameter ontologies are extracted from the data of multiple dimensions, and a first relation among the parameter ontologies is constructed; a second relation between the plurality of parameter ontologies and preset scene categories is constructed; and model training is performed according to the plurality of parameter ontologies, the preset scene categories, the first relation, and the second relation to obtain a scene recognition model for recognizing scene categories. The scheme can comprehensively utilize multi-dimensional information, so that the finally recognized scene category is more robust and more accurate; meanwhile, the personalized panoramic view recognition model constructed with knowledge graph technology enables the finally recognized scene category to better reflect the user's preferences, making recognition more personalized, which is beneficial to providing more personalized intelligent services for the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic view of a panoramic sensing architecture provided in an embodiment of the present application.
Fig. 2 is a first flowchart of a modeling method according to an embodiment of the present application.
Fig. 3 is a second flowchart of a modeling method according to an embodiment of the present application.
Fig. 4 is a third flowchart of a modeling method provided in an embodiment of the present application.
Fig. 5 is a schematic view of a scene architecture of a modeling method according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a modeling apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic view of a panoramic sensing architecture provided in an embodiment of the present application. The modeling method is applied to electronic equipment. A panoramic perception framework is arranged in the electronic equipment. The panoramic sensing architecture is an integration of hardware and software for implementing the modeling method in an electronic device.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information of the electronic device itself or information of the external environment. The information perception layer may include a plurality of sensors, such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, a distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor can be used for detecting light information of the environment where the electronic equipment is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect, and can be used for realizing automatic control of electronic equipment. The location sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocity of an electronic device in various directions. Inertial sensors may be used to detect motion data of an electronic device. The gesture sensor may be used to sense gesture information of the electronic device. A barometer may be used to detect the barometric pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
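To make these four stages concrete, the following is a minimal Python sketch of such a data-processing pipeline; the record layout, field names, and reduction step are illustrative assumptions rather than details taken from this application.

```python
# Minimal sketch of the data processing layer: cleaning, integration,
# transformation, and reduction. All field names are assumptions.

def clean(records):
    """Data cleaning: drop invalid (None-valued) and duplicate records."""
    seen, out = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if None in r.values() or key in seen:
            continue
        seen.add(key)
        out.append(r)
    return out

def integrate(records):
    """Data integration: merge single-dimension readings that share a
    timestamp into one higher-level record."""
    merged = {}
    for r in records:
        merged.setdefault(r["ts"], {}).update(
            {k: v for k, v in r.items() if k != "ts"})
    return [{"ts": ts, **vals} for ts, vals in sorted(merged.items())]

def transform(records):
    """Data transformation: unify types/formats (here, Fahrenheit to Celsius)."""
    for r in records:
        if "temp_f" in r:
            r["temp_c"] = (float(r.pop("temp_f")) - 32.0) * 5.0 / 9.0
    return records

def reduce_volume(records, step=2):
    """Data reduction: thin the data while keeping its overall shape."""
    return records[::step]

raw = [
    {"ts": 1, "temp_f": 68},
    {"ts": 1, "light": 300},
    {"ts": 2, "temp_f": None},   # invalid reading, removed by clean()
    {"ts": 3, "temp_f": 70},
]
print(reduce_volume(transform(integrate(clean(raw)))))
```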
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features or process the extracted features with methods such as a filter method, a wrapper (packaging) method, or an ensemble (integration) method.
The filter method filters the extracted features to remove redundant feature data. The wrapper method screens the extracted features using a learning model. The ensemble method integrates a plurality of feature extraction methods to construct a more efficient and more accurate feature extraction method.
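As an illustration of the three feature-screening approaches named above, the sketch below uses scikit-learn; the synthetic dataset and the particular estimators are assumptions chosen for demonstration only.

```python
# Filter, wrapper ("packaging"), and ensemble ("integration") feature
# screening, illustrated with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold, RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Filter method: remove near-constant (redundant) features.
X_filter = VarianceThreshold(threshold=0.1).fit_transform(X)

# Wrapper method: recursively screen features using a learning model.
X_wrap = RFE(LogisticRegression(max_iter=1000),
             n_features_to_select=5).fit_transform(X, y)

# Ensemble method: select features by the importances of a model ensemble.
X_ens = SelectFromModel(RandomForestClassifier(random_state=0)).fit_transform(X, y)

print(X_filter.shape, X_wrap.shape, X_ens.shape)
```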
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture may further include an algorithm library comprising a plurality of algorithms, each of which can be used to analyze and process data. For example, the algorithm library may include algorithms such as Markov models, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
The embodiment of the application provides a modeling method which can be applied to electronic equipment. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, an electronic garment, or the like.
Referring to fig. 2, fig. 2 is a first flowchart of a modeling method provided in an embodiment of the present application. Wherein, the modeling method comprises the following steps:
110, acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environment data of the device, device operation data, user behavior data, and user portrait data.
In the embodiment of the present application, the data of multiple dimensions may be panoramic data, that is, data information of different dimensional parameters in a certain scene.
The environment data may be natural-environment data detected by the electronic device through sensors, such as weather, temperature, and sound. The device operation data is operation data inside the electronic device, such as Central Processing Unit (CPU) operation information, power information, and memory information. The user behavior data may be data describing the user's behavior habits, such as music preferences, application usage habits, and screen operation habits. The user portrait data may be profile data of the user, such as gender, age, and education background.
In embodiments of the present application, the data of the multiple dimensions may also include other data related to the user or the device, such as device resource data, social context data, and the like.
In some embodiments, the retrieved panoramic data may be stored in a database having a particular data structure to facilitate later data recall when performing the task. For example, the database may be a Structured Query Language (SQL) based database.
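A minimal sketch of such structured storage, using Python's built-in sqlite3 module; the table layout and field names are assumptions for illustration.

```python
# Storing panoramic data in a structured (SQL) database for later recall.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE panoramic_data (
        ts        INTEGER,   -- acquisition timestamp
        dimension TEXT,      -- e.g. 'environment', 'device', 'behavior'
        name      TEXT,      -- parameter name, e.g. 'temperature'
        value     TEXT       -- raw reading, stored as text
    )
""")
rows = [
    (1, "environment", "temperature", "20.5"),
    (1, "device", "cpu_load", "0.31"),
    (1, "behavior", "foreground_app", "music_player"),
]
conn.executemany("INSERT INTO panoramic_data VALUES (?, ?, ?, ?)", rows)

# Later recall: fetch all environment readings for feature extraction.
for row in conn.execute(
        "SELECT ts, name, value FROM panoramic_data WHERE dimension = ?",
        ("environment",)):
    print(row)
```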
And 120, extracting a plurality of parameter ontologies from the data of the plurality of dimensions, and constructing a first relation between the parameter ontologies.
A parameter ontology refers to an entity with actual meaning, a concept with physical meaning, or the like.
For the environment data: the environment itself is an ontology, and temperature, humidity, and illumination may also be ontologies. In addition, temperature, humidity, and illumination can be regarded as attributes of the environment, since they characterize certain aspects of the environment. An ontology such as temperature may in turn have its own attributes, for example a Fahrenheit reading and a Celsius reading.
For the device operation data, the ontologies mainly correspond to data inside the device, such as the CPU, battery level, and time. In addition, data obtained through sensors, such as light and temperature, may also serve as ontologies.
For the user behavior data: songs, games, applications, news, and the like may be ontologies. The attributes of a song are conventionally its genre, singer, duration, and so on; the attributes of a game are conventionally its type, developer, size, etc.; the attributes of an application may be its function, developer, size, version number, etc.; the attributes of a news item are its category, word count, author, etc.
For the user portrait data: the user, place, education background, and the like may be ontologies, while place and education background are also attributes of the user. The main attributes of the user are location, education background, gender, hobbies, age, etc.; the attributes of a place include birthplace, work address, home address, and the like; the attributes of the education background include school, graduation time, degree level, and the like.
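The ontologies and attributes enumerated above could be represented, for example, as follows; the class layout and sample values are illustrative assumptions, not this application's data model.

```python
# One possible representation of parameter ontologies and their attributes.
from dataclasses import dataclass, field

@dataclass
class Ontology:
    name: str
    attributes: dict = field(default_factory=dict)

user = Ontology("user", {
    "location": "city A",        # hypothetical values throughout
    "education": "bachelor",
    "gender": "f",
    "age": 30,
})
song = Ontology("song", {"genre": "pop", "singer": "X", "duration_s": 210})
environment = Ontology("environment", {"temperature_c": 21.0, "humidity": 0.4})

print(user, song, environment, sep="\n")
```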
In practical applications, there are various ways of extracting the parameter ontology and constructing the relationship between the parameter ontologies. In some embodiments, the step of "extracting a plurality of parameter ontologies from the panoramic data and constructing a first relationship between the parameter ontologies" may be performed according to an existing ontology relationship construction manner, and may include the following procedures:
extracting a plurality of parameter ontologies from the panoramic data by adopting a knowledge graph technology;
and deducing a first relation between parameter ontologies by adopting a knowledge reasoning technology.
A knowledge graph (also called a scientific knowledge graph) is a series of graphs that display the development process of knowledge and the structural relationships among knowledge; it describes knowledge resources and their carriers using visualization technology, and mines, analyzes, constructs, draws, and displays knowledge and the interrelations among knowledge resources and carriers.
The knowledge reasoning technology mainly solves the problems of logic relation between the preconditions and the conclusions in the reasoning process and the transmission of uncertainty in non-accurate reasoning. According to different classification standards, the inference method mainly has the following three classification modes:
from the logical basis, the method can be divided into deductive reasoning and inductive reasoning;
from the certainty, the method can be divided into accurate reasoning and inaccurate reasoning;
from monotonicity, the method can be divided into monotone reasoning and non-monotone reasoning.
In the embodiment of the application, the knowledge reasoning technology can infer the relationships among the parameter ontologies in various ways. For example, the relationship between two parameter ontologies can be calibrated through manual labeling by those skilled in the art.
In addition, an ontology relationship inference method based on the feature vector may be used. That is, in some embodiments, the step of "inferring the first relationship between the parameter ontologies using knowledge inference techniques" may include the following steps:
constructing a binary set among the plurality of parameter ontologies;
constructing a feature vector for each binary set;
and carrying out classification processing on the feature vector to determine a first relation between two elements in the binary set.
In particular, a series of binary sets between parameter ontologies can be constructed. Taking the user portrait data as an example, if the ontologies include the user, the place, and the education background, three binary sets can be constructed, namely {user, place}, {user, education background}, and {place, education background}.
First, a feature vector is constructed for each binary set; the feature vectors may be derived from a large text corpus, as is common in natural language processing. Then the feature vectors can be classified using classification algorithms such as Bayesian models or deep neural networks. Suppose there are two categories of relationship: "parallel", meaning the two elements are equal and coordinate, and "belongs to", meaning one element is subordinate to the other. Then, among the three binary sets {user, place}, {user, education background}, and {place, education background}, the category of {user, place} is "belongs to", i.e., the place belongs to one aspect of the user; the relation of {user, education background} is also "belongs to"; and {place, education background} is "parallel", because the two are in a peer, coordinate relationship rather than a subordinate one.
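The sketch below illustrates this pair-classification step; the tiny embedding table, the toy training pairs, and the Gaussian naive Bayes classifier are fabricated stand-ins for a real corpus-derived model.

```python
# Build binary sets of ontologies, embed each pair as a feature vector,
# and classify the pair relation as "belongs_to" or "parallel".
from itertools import combinations
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical corpus-derived embeddings for each ontology.
emb = {
    "user":      np.array([1.0, 0.1]),
    "place":     np.array([0.2, 0.9]),
    "education": np.array([0.3, 0.8]),
}

def pair_vector(a, b):
    # Feature vector of a pair: concatenation of the two embeddings.
    return np.concatenate([emb[a], emb[b]])

pairs = list(combinations(emb, 2))   # {user,place}, {user,education}, {place,education}

# Toy supervision: 1 = "belongs_to", 0 = "parallel".
X_train = [pair_vector("user", "place"), pair_vector("place", "education")]
y_train = [1, 0]

clf = GaussianNB().fit(X_train, y_train)
for a, b in pairs:
    rel = "belongs_to" if clf.predict([pair_vector(a, b)])[0] == 1 else "parallel"
    print(f"({a}, {b}) -> {rel}")
```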
And 130, constructing a second relation between the plurality of parameter ontologies and the preset scene categories.
Specifically, a large amount of ontology and category data can be collected, and the mapping relationship between the sample ontology and the sample category is established in advance. Then, based on the mapping relation, matching corresponding preset scene categories for each parameter ontology, and constructing a relation between the two.
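A minimal sketch of constructing the second relation from such a pre-built mapping; the mapping contents below are illustrative assumptions.

```python
# Match each parameter ontology to preset scene categories via a
# pre-established ontology-to-category mapping.
ontology_to_scenes = {           # collected offline from labeled samples
    "office_app": ["office"],
    "lighting":   ["office", "home"],
    "song":       ["commute", "home"],
}

def second_relation(parameter_ontologies):
    """Return (ontology, scene_category) edges for the recognition graph."""
    return [(o, s)
            for o in parameter_ontologies
            for s in ontology_to_scenes.get(o, [])]

print(second_relation(["office_app", "lighting", "unknown_sensor"]))
```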
140, performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories.
In some embodiments, the scene recognition model may also be referred to as a panoramic view recognition model, which is suitable for complex scenarios involving many data dimensions.
In some embodiments, referring to fig. 3, fig. 3 is a second flowchart of a modeling method provided by the embodiments of the present application.
In some embodiments, the attribute of the parameter ontology may be added to the scene recognition model to further enrich the scene recognition model. That is, before the preset algorithm model is used to process the plurality of parameter ontologies, the preset scene categories, the first relationships and the second relationships to obtain the scene recognition model, the following process may be further included:
150, extracting attributes of a plurality of parameter ontologies from data of a plurality of dimensions;
and 160, constructing a third relation between the attribute and the parameter ontology to which the attribute belongs.
An attribute is data that can be used to describe a parameter ontology. For example, for the device operation data, the extracted attributes mainly refer to local attributes, such as the number of CPU cores and the operating frequency. It should be noted that the third relation between an attribute and the parameter ontology to which it belongs is a dependency relationship.
With reference to fig. 3, the step of processing the multiple parameter ontologies, the preset scene categories, the first relationships, and the second relationships by using the preset algorithm model to obtain a scene recognition model for recognizing the scene categories may include:
141, processing the multiple parameter ontologies, the preset scene categories, the attributes, the first relations, the second relations and the third relations by using a preset algorithm model to obtain a scene recognition model for recognizing the scene categories.
The preset algorithm model may be a random walk model.
Specifically, after the first relation, the second relation, and the third relation are constructed, the random walk model processes the plurality of parameter ontologies, the preset scene categories, and these relations, so as to obtain relationship networks among the parameter ontologies, between the parameter ontologies and the scene categories, and among the scene categories, thereby obtaining a scene recognition model for recognizing the scene categories.
In practical application, the parameter ontologies together with their mutual relationships and owned attributes are equivalent to a relation graph, where the nodes are the parameter ontologies and attributes, and the edges are the relationships between the nodes.
In some embodiments, the relationships between some scene categories and parameter ontologies may be labeled manually; for example, the parameter ontologies associated with office-category scenes are mainly the office place, office applications, lighting, and the like.
Combining the parameter ontologies, attributes, scene categories, and the constructed relations then yields a larger relation graph: the scene categories are added as nodes on the graph, together with the relations between the scene categories and the ontologies, thereby enriching the panoramic view recognition model.
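One possible representation of this combined relation graph is sketched below using the networkx library; all node names and edge labels are illustrative assumptions.

```python
# Relation graph: nodes are parameter ontologies, attributes, and scene
# categories; edges carry the relation type (first: ontology-ontology,
# second: ontology-scene, third: attribute-ontology).
import networkx as nx

g = nx.Graph()
g.add_edge("place", "education", rel="first/parallel")   # ontology-ontology
g.add_edge("user", "place", rel="first/belongs_to")
g.add_edge("office_app", "office", rel="second")         # ontology-scene
g.add_edge("lighting", "office", rel="second")
g.add_edge("cpu", "core_count", rel="third")             # ontology-attribute

for u, v, d in g.edges(data=True):
    print(u, "--", d["rel"], "--", v)
```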
In some embodiments, reference is made to fig. 4 and fig. 5. Fig. 4 is a third flowchart of the modeling method provided in the embodiment of the present application, and fig. 5 is a schematic view of a scene architecture of the modeling method provided in the embodiment of the present application.
In some embodiments, the step of processing the multiple parameter ontologies, the preset scene categories, the attributes, the first relations, the second relations, and the third relations by using the preset algorithm model to obtain the scene identification model for identifying the scene categories may include the following steps:
1411, performing matrixing processing on a plurality of parameter ontologies, preset scene types, attributes, a first relation, a second relation and a third relation according to a preset rule to obtain an information matrix;
and 1412, processing the information matrix by adopting a preset algorithm model to obtain a scene recognition model for recognizing the scene category.
In some embodiments, the step of matrixing the plurality of parameter ontologies, the preset scene types, the attributes, the first relation, the second relation, and the third relation according to a preset rule to obtain the information matrix may include the following steps:
setting the scene category as the rows of the matrix, setting the parameter ontologies and their attributes as the columns of the matrix, and setting the first relation, the second relation, and the third relation as the elements of the matrix; alternatively,
setting the parameter ontology and the attributes thereof as rows of a matrix, setting the scene category as columns of the matrix, and setting the first relation, the second relation and the third relation as elements of the matrix.
Specifically, the obtained relation graph can be expressed as a matrix: the rows of the matrix are the scene categories, the columns of the matrix are the parameter ontologies and attributes, and the elements of the matrix are the edge types; or the rows of the matrix are the parameter ontologies and attributes, the columns of the matrix are the scene categories, and the elements of the matrix are the edge types.
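The matrixing step might look as follows; the edge-type codes and the tiny graph are illustrative assumptions.

```python
# Rows are scene categories, columns are ontologies/attributes, and each
# element encodes the edge type (0 = no edge).
import numpy as np

scenes = ["office", "home"]
nodes = ["office_app", "lighting", "cpu", "core_count"]
edge_code = {"none": 0, "second": 1, "third": 2}

M = np.zeros((len(scenes), len(nodes)), dtype=int)
M[scenes.index("office"), nodes.index("office_app")] = edge_code["second"]
M[scenes.index("office"), nodes.index("lighting")] = edge_code["second"]
M[scenes.index("home"), nodes.index("lighting")] = edge_code["second"]

print(M)   # most elements are 0, i.e. no edge yet between scene and node
```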
Specifically, a random walk model may be used to learn and adjust the elements of the matrix. Since a large number of elements in the obtained matrix are 0, that is, no edge yet exists between the corresponding scene category and ontology or attribute, the random walk model can learn and adjust the elements of the matrix using a stochastic gradient descent learning method until convergence, so as to obtain a converged matrix. That is, in some embodiments, the step "processing the information matrix by using a preset algorithm model to obtain the panoramic view identification model" may include the following steps:
and performing iterative processing on elements of the matrix until convergence by adopting a random gradient descent learning method through the preset algorithm model to obtain a scene recognition model for recognizing the scene category.
Specifically, the random walk iteratively learns the relationships along the edges between nodes on the relation graph, finally obtaining a new graph whose relationships are more accurate and more comprehensively measured. Once the converged matrix is obtained, the relationships between the scene categories and the parameter ontologies or attributes have effectively been relearned and reconstructed. The converged matrix can be regarded as a new personalized panoramic view.
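Below is a heavily simplified sketch of learning the matrix elements by stochastic gradient descent until convergence; approximating the sparse scene-node matrix with low-rank factors is our stand-in assumption for the random-walk learning step, not this application's exact procedure.

```python
# Learn scores for the zero (missing) entries of the scene-node matrix by
# SGD on a low-rank factorization, iterating until the loss converges.
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[1.0, 1.0, 0.0, 0.0],        # observed scene-node edges
              [0.0, 1.0, 0.0, 0.0]])
P = rng.normal(scale=0.1, size=(M.shape[0], 2))   # scene factors
Q = rng.normal(scale=0.1, size=(M.shape[1], 2))   # node factors

lr, prev = 0.05, float("inf")
for epoch in range(2000):
    loss = 0.0
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            err = M[i, j] - P[i] @ Q[j]
            P[i], Q[j] = P[i] + lr * err * Q[j], Q[j] + lr * err * P[i]
            loss += err * err
    if abs(prev - loss) < 1e-9:   # converged
        break
    prev = loss

print(np.round(P @ Q.T, 2))   # converged matrix of relearned scene-node scores
```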
It is to be understood that the terms "first," "second," and the like in the embodiments of the present application are used merely for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order, such that the described elements may be interchanged under appropriate circumstances.
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
As can be seen from the above, the modeling method provided in the embodiment of the present application acquires panoramic data, where the panoramic data at least includes: environment data of the device, device operation data, user behavior data, and user portrait data; extracts a plurality of parameter ontologies from the panoramic data and constructs a first relation among the parameter ontologies; constructs a second relation between the parameter ontologies and preset scene categories; and processes the parameter ontologies, the preset scene categories, the first relation, and the second relation with a preset algorithm model to obtain a panoramic view recognition model capable of recognizing scene categories. The method and the device can comprehensively utilize multi-dimensional information, so that the finally recognized scene category is more robust and more accurate. Meanwhile, the personalized panoramic view recognition model constructed with knowledge graph technology enables the finally recognized scene category to better reflect the user's preferences, making recognition more personalized, which is beneficial to providing more personalized intelligent services for the user.
An embodiment of the present application further provides a scene identification method, including:
acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene recognition model to recognize the current scene category, wherein the scene recognition model is obtained by performing model training according to parameter ontologies extracted from data of multiple dimensions under different scenes, a first relation among the parameter ontologies, a preset scene category, and a second relation between the parameter ontologies and the preset scene category.
In practical application, based on the modeling method provided by the embodiment of the application, the information perception layer acquires panoramic data of multiple dimensions in the current scene through the sensors in the electronic device, such as environment data of the device, device operation data, user behavior data, and user portrait data. The information perception layer then provides the acquired panoramic data to the data processing layer for data cleaning, data integration, data transformation, data reduction, and other processing. Next, the data processing layer provides the processed data to the feature extraction layer, which, based on the feature extraction method provided by the embodiment of the application, extracts a plurality of parameter ontologies from the data of multiple dimensions and constructs features such as the first relation among the parameter ontologies and the second relation between the parameter ontologies and the preset scene categories. The scene modeling layer performs modeling based on the features from the feature extraction layer, training the specified model with the obtained features to obtain a trained scene recognition model for recognizing scene categories. Finally, the intelligent service layer recognizes the scene categories of scenes containing different panoramic data according to the model constructed by the scene modeling layer. For example, if the acquired data of multiple dimensions show that the ambient volume is low, the air temperature remains almost unchanged, the device usage frequency is low, the device movement is almost zero, and the current time falls within the user's working hours, the current scene is recognized as an office scene.
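As a rough illustration of this recognition flow, the sketch below scores current multi-dimensional readings against per-scene rules standing in for a trained model; all feature names, thresholds, and the scoring scheme are assumptions.

```python
# Score current readings against each scene and return the best match.
def recognize_scene(data, model):
    """data: dict of current readings; model: scene -> scoring function."""
    scores = {scene: fn(data) for scene, fn in model.items()}
    return max(scores, key=scores.get)

# Hypothetical "trained" model encoding the office example in the text.
model = {
    "office": lambda d: (d["volume"] < 0.2) + (abs(d["temp_delta"]) < 0.5)
                        + (d["device_motion"] < 0.1) + (9 <= d["hour"] <= 18),
    "commute": lambda d: (d["device_motion"] > 0.5) + (d["volume"] > 0.4),
}

now = {"volume": 0.1, "temp_delta": 0.2, "device_motion": 0.02, "hour": 10}
print(recognize_scene(now, model))   # -> office
```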
The embodiment of the application also provides a modeling device. The modeling means may be integrated in an electronic device. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, an electronic garment, or the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a modeling apparatus provided in an embodiment of the present application. The modeling apparatus 200 may include: an obtaining module 201, a first constructing module 202, a second constructing module 203, and a processing module 204, wherein:
an obtaining module 201, configured to obtain data of multiple dimensions, where the multiple dimensions at least include: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
a first construction module 202, configured to extract multiple parameter ontologies from data of multiple dimensions, and construct a first relationship between the parameter ontologies;
a second constructing module 203, configured to construct a second relationship between the multiple parameter ontologies and a preset scene category;
the processing module 204 is configured to perform model training according to the multiple parameter ontologies, the preset scene categories, the first relationships, and the second relationships to obtain a scene recognition model for recognizing the scene categories.
In some embodiments, the modeling apparatus 200 may further include:
the third construction module is used for extracting attributes of the parameter ontologies from the data of the dimensions before model training is carried out according to the parameter ontologies, the preset scene type, the first relation and the second relation to obtain a scene recognition model for recognizing the scene type, and constructing a third relation between the attributes and the parameter ontologies;
the processing module 204 may be configured to process the plurality of parameter ontologies, the preset scene categories, the attributes, the first relationship, the second relationship, and the third relationship by using a preset algorithm model, so as to obtain a scene identification model for identifying the scene categories.
In some embodiments, the processing module 204 may include:
the first processing submodule is used for matrixing the multiple parameter ontologies, the preset scene types, the attributes, the first relation, the second relation and the third relation according to a preset rule to obtain an information matrix;
and the second processing submodule is used for processing the information matrix by adopting a preset algorithm model to obtain a scene recognition model for recognizing the scene category.
In some embodiments, the first processing sub-module may be to:
setting the scene type as a row of a matrix, setting the parameter ontology and the attribute thereof as a column of the matrix, and setting the first relation, the second relation and the third relation as elements of the matrix; alternatively, the first and second electrodes may be,
setting the parameter ontology and the attributes thereof as rows of a matrix, setting the scene category as columns of the matrix, and setting the first relation, the second relation and the third relation as elements of the matrix.
In some embodiments, the second processing sub-module may be to:
and performing iterative processing on elements of the matrix until convergence by adopting a random gradient descent learning method through the preset algorithm model to obtain a scene recognition model for recognizing the scene category.
In some embodiments, the first building module 202 may be configured to: extracting a plurality of parameter ontologies from the data of the plurality of dimensions by adopting a knowledge graph technology; and deducing a first relation between parameter ontologies by adopting a knowledge reasoning technology.
In some embodiments, in inferring the first relationship between the parameter ontologies using knowledge inference techniques, the first building module 202 is further operable to:
constructing a binary set among a plurality of parameter ontologies; constructing a feature vector for each binary set; the feature vectors are classified to determine a first relationship between two elements in the binary set.
As can be seen from the above, the modeling apparatus 200 provided in the embodiment of the present application acquires data of multiple dimensions; extracts a plurality of parameter ontologies from the data of multiple dimensions and constructs a first relation among the parameter ontologies; constructs a second relation between the plurality of parameter ontologies and preset scene categories; and performs model training according to the plurality of parameter ontologies, the preset scene categories, the first relation, and the second relation to obtain a scene recognition model for recognizing scene categories. The apparatus can comprehensively utilize multi-dimensional information, so that the finally recognized scene category is more robust and more accurate. Meanwhile, the personalized panoramic view recognition model constructed with knowledge graph technology enables the finally recognized scene category to better reflect the user's preferences, making recognition more personalized, which is beneficial to providing more personalized intelligent services for the user.
The embodiment of the application also provides the electronic equipment. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, an electronic garment, or the like.
Referring to fig. 7, fig. 7 is a schematic view of a first structure of an electronic device 300 according to an embodiment of the present disclosure. Electronic device 300 includes, among other things, a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is a control center of the electronic device 300, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or calling a computer program stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to one or more processes of the computer program into the memory 302 according to the following steps, and the processor 301 runs the computer program stored in the memory 302, so as to implement various functions:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
extracting a plurality of parameter ontologies from the data of the plurality of dimensions, and constructing a first relation among the parameter ontologies;
constructing a second relation between the plurality of parameter ontologies and the preset scene categories;
and performing model training according to the multiple parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories.
In some embodiments, before training according to the plurality of parameter ontologies, the preset scene category, the first relationship and the second relationship to obtain the scene recognition model for recognizing the scene category, the processor 301 performs the following steps:
extracting attributes of a plurality of parameter ontologies from the data of the plurality of dimensions;
constructing a third relation between the attribute and the parameter ontology;
when training is performed according to the multiple parameter ontologies, the preset scene categories, the first relationship and the second relationship to obtain a scene recognition model for recognizing the scene categories, the processor 301 executes the following steps:
and processing the parameter ontologies, the preset scene categories, the attributes, the first relation, the second relation and the third relation by adopting a preset algorithm model to obtain a scene identification model for identifying the scene categories.
In some embodiments, when the plurality of parameter ontologies, the preset scene type, the attribute, the first relationship, the second relationship, and the third relationship are processed by using a preset algorithm model to obtain a scene identification model for identifying the scene type, the processor 301 further performs the following steps:
performing matrixing processing on the multiple parameter ontologies, the preset scene type, the attribute, the first relation, the second relation and the third relation according to a preset rule to obtain an information matrix;
and processing the information matrix by adopting a preset algorithm model to obtain a scene identification model for identifying the scene category.
In some embodiments, when the plurality of parameter ontologies, the preset scene type, the attribute, the first relationship, the second relationship, and the third relationship are matriculated according to a preset rule to obtain the information matrix, the processor 301 executes the following steps:
setting the scene category as the rows of the matrix, setting the parameter ontologies and their attributes as the columns of the matrix, and setting the first relation, the second relation, and the third relation as the elements of the matrix; alternatively,
setting the parameter ontology and the attributes thereof as rows of a matrix, setting the scene category as columns of the matrix, and setting the first relation, the second relation and the third relation as elements of the matrix.
In some embodiments, when the information matrix is processed by using a preset algorithm model to obtain the panorama view recognition model, the processor 301 performs the following steps:
and performing iterative processing on elements of the matrix until convergence by adopting a random gradient descent learning method through a preset algorithm model to obtain a scene recognition model for recognizing the scene category.
In some embodiments, when extracting a plurality of parameter ontologies from the panoramic data and constructing a first relationship between the parameter ontologies and each other, the processor 301 performs the following steps:
extracting a plurality of parameter ontologies from the panoramic data by adopting a knowledge graph technology;
and deducing a first relation between parameter ontologies by adopting a knowledge reasoning technology.
In some embodiments, when a knowledge inference technique is used to infer a first relationship between parameter ontologies, processor 301 performs the following steps:
constructing a binary set among a plurality of parameter ontologies;
constructing a feature vector for each binary set;
and carrying out classification processing on the feature vectors to determine a first relation between two elements in the binary set.
In this embodiment, the processor 301 in the electronic device 300 may further load instructions corresponding to processes of one or more computer programs into the memory 302 according to the following steps, and the processor 301 executes the computer programs stored in the memory 302, so as to implement the following functions:
acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene recognition model to recognize the current scene category, wherein the scene recognition model is obtained by performing model training according to parameter ontologies extracted from data of multiple dimensions under different scenes, a first relation among the parameter ontologies, a preset scene category, and a second relation between the parameter ontologies and the preset scene category.
Memory 302 may be used to store computer programs and data. The memory 302 stores computer programs containing instructions executable in the processor. The computer program may constitute various functional modules. The processor 301 executes various functional applications and data processing by calling a computer program stored in the memory 302.
In some embodiments, referring to fig. 8, fig. 8 is a schematic diagram of a second structure of an electronic device 300 according to an embodiment of the present disclosure.
Wherein, the electronic device 300 further comprises: a display 303, a control circuit 304, an input unit 305, a sensor 306, and a power supply 307. The processor 301 is electrically connected to the display 303, the control circuit 304, the input unit 305, the sensor 306, and the power source 307.
The display screen 303 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 304 is electrically connected to the display 303, and is configured to control the display 303 to display information.
The input unit 305 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Wherein, the input unit 305 may include a fingerprint recognition module.
The sensor 306 is used to collect information of the electronic device itself or information of the user or external environment information. For example, the sensor 306 may include a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
The power supply 307 is used to power the various components of the electronic device 300. In some embodiments, the power supply 307 may be logically coupled to the processor 301 through a power management system, such that functions of managing charging, discharging, and power consumption are performed through the power management system.
Although not shown in fig. 8, the electronic device 300 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, an embodiment of the present application provides an electronic device, where the electronic device performs the following steps: acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environment data of the device, device operation data, user behavior data, and user portrait data; extracting a plurality of parameter ontologies from the data of the multiple dimensions, and constructing a first relation among the parameter ontologies; constructing a second relation between the plurality of parameter ontologies and preset scene categories; and performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation, and the second relation to obtain a scene recognition model. The electronic device can comprehensively utilize multi-dimensional information, so that the finally recognized scene category is more robust and more accurate. Meanwhile, the personalized panoramic view recognition model constructed with knowledge graph technology enables the finally recognized scene category to better reflect the user's preferences, making recognition more personalized, which is beneficial to providing more personalized intelligent services for the user.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the modeling method according to any one of the above embodiments.
For example, in some embodiments, when the computer program is run on a computer, the computer performs the steps of:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
extracting a plurality of parameter ontologies from the data of the plurality of dimensions, and constructing a first relation among the parameter ontologies;
constructing a second relation between the parameter ontologies and a preset scene category;
and performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories.
For another example, in some embodiments, when the computer program is run on a computer, the computer performs the steps of:
acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene recognition model to recognize the current scene category, wherein the scene recognition model is obtained by performing model training according to parameter ontologies extracted from data of multiple dimensions under different scenes, a first relation among the parameter ontologies, a preset scene category, and a second relation between the parameter ontologies and the preset scene category.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The modeling method, the modeling device, the storage medium, the electronic device, and the scene recognition method provided by the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (12)

1. A modeling method, characterized in that the modeling method comprises:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
extracting a plurality of parameter ontologies from the data of the plurality of dimensions, and constructing a first relation among the parameter ontologies;
constructing a second relation between the parameter ontologies and a preset scene category;
and performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories.
2. The modeling method of claim 1, wherein before the performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories, the method further comprises:
extracting attributes of a plurality of parameter ontologies from the data of the plurality of dimensions;
constructing a third relation between the attribute and the parameter ontology;
the performing model training according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories comprises:
and processing the parameter ontologies, the preset scene type, the attributes, the first relation, the second relation and the third relation by adopting a preset algorithm model to obtain a scene identification model for identifying the scene type.
3. The modeling method according to claim 2, wherein the processing the plurality of parameter ontologies, the preset scene categories, the attributes, the first relationship, the second relationship, and the third relationship by using a preset algorithm model to obtain a scene recognition model for recognizing the scene categories includes:
performing matrixing processing on the multiple parameter ontologies, the preset scene type, the attribute, the first relation, the second relation and the third relation according to a preset rule to obtain an information matrix;
and processing the information matrix by adopting a preset algorithm model to obtain a scene recognition model for recognizing the scene category.
4. The modeling method according to claim 3, wherein the matrixing the plurality of parameter ontologies, the preset scene type, the attribute, the first relationship, the second relationship, and the third relationship according to a preset rule to obtain an information matrix includes:
setting the preset scene categories as rows of a matrix, setting the parameter ontologies and their attributes as columns of the matrix, and setting the first relation, the second relation and the third relation as elements of the matrix; or,
setting the parameter ontologies and their attributes as rows of a matrix, setting the preset scene categories as columns of the matrix, and setting the first relation, the second relation and the third relation as elements of the matrix.
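A hedged sketch of this matrixing step follows, using the first layout of claim 4 (scene categories as rows, parameter ontologies and their attributes as columns, relations as elements); the labels, relation weights, and the flattening of the relations into single strength values are illustrative assumptions.

    import numpy as np

    scenes = ["work", "commute", "home"]
    columns = ["location", "location:office", "foreground_app", "foreground_app:email"]

    # Relations flattened to (row label, column label) -> strength.
    relations = {
        ("work", "location:office"): 1.0,
        ("work", "foreground_app:email"): 0.9,
        ("home", "location"): 0.2,
    }

    M = np.zeros((len(scenes), len(columns)))
    for (s, c), w in relations.items():
        M[scenes.index(s), columns.index(c)] = w

    M_alt = M.T  # the alternative layout of claim 4 is simply the transpose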
5. The modeling method according to claim 3, wherein the processing the information matrix by using a preset algorithm model to obtain a scene recognition model for recognizing a scene category comprises:
and performing, through the preset algorithm model, iterative processing on the elements of the matrix by a stochastic gradient descent learning method until convergence, to obtain a scene recognition model for recognizing the scene category.
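The claim does not disclose the concrete update rule, so the following sketch assumes a low-rank factorization of the information matrix (M ≈ U V^T) trained by stochastic gradient descent with a simple loss-change convergence check; the rank, learning rate, and tolerance are all illustrative choices.

    import numpy as np

    def sgd_factorize(M, rank=2, lr=0.01, tol=1e-6, max_epochs=500, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.normal(scale=0.1, size=(M.shape[0], rank))
        V = rng.normal(scale=0.1, size=(M.shape[1], rank))
        prev_loss = np.inf
        for _ in range(max_epochs):
            for i in range(M.shape[0]):
                for j in range(M.shape[1]):
                    err = M[i, j] - U[i] @ V[j]
                    U[i] += lr * err * V[j]   # gradient step on the row factor
                    V[j] += lr * err * U[i]   # gradient step on the column factor
            loss = float(np.sum((M - U @ V.T) ** 2))
            if abs(prev_loss - loss) < tol:   # convergence on loss change
                break
            prev_loss = loss
        return U, V

    # Usage with the information matrix M built above:
    # U, V = sgd_factorize(M); scores = U @ V.T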
6. The modeling method of claim 1, wherein the extracting a plurality of parameter ontologies from the data of the plurality of dimensions and constructing a first relation among the parameter ontologies comprises:
extracting a plurality of parameter ontologies from the data of the plurality of dimensions by adopting a knowledge graph technology;
and deducing the first relation among the parameter ontologies by adopting a knowledge reasoning technology.
7. The modeling method of claim 6, wherein the deducing the first relation among the parameter ontologies by adopting a knowledge reasoning technology comprises:
constructing binary sets among the plurality of parameter ontologies;
constructing a feature vector for each binary set;
and classifying each feature vector to determine the first relation between the two elements in the corresponding binary set.
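A minimal sketch of claim 7 under stated assumptions: pairs of parameter ontologies serve as the binary sets, a toy featurizer stands in for the real feature construction, and an off-the-shelf logistic-regression classifier (an illustrative choice, not the disclosed one) decides whether a first relation holds; the labels are hypothetical.

    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LogisticRegression

    ontologies = ["location", "time", "foreground_app", "motion_state"]
    pairs = list(combinations(ontologies, 2))   # the binary sets

    def featurize(a, b):
        # Toy feature vector; real features would encode co-occurrence statistics.
        return np.array([len(a), len(b), float(a[0] == b[0])])

    X = np.array([featurize(a, b) for a, b in pairs])
    y = np.array([1, 0, 1, 0, 1, 0])            # hypothetical relation labels

    clf = LogisticRegression().fit(X, y)
    has_relation = clf.predict(X)               # 1 => a first relation holds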
8. A scene recognition method, characterized in that the scene recognition method comprises:
acquiring data of multiple dimensions in a current scene;
and processing the data of the multiple dimensions according to a pre-trained scene recognition model so as to recognize the current scene category, wherein the scene recognition model is obtained by performing model training according to parameter ontologies in the data of multiple dimensions under different scenes, a first relation among the parameter ontologies, preset scene categories, and a second relation between the parameter ontologies and the preset scene categories.
9. A modeling apparatus, characterized in that the modeling apparatus comprises:
an obtaining module, configured to obtain data of multiple dimensions, where the data of multiple dimensions at least includes: environment data of the equipment, equipment operation data, user behavior data and user portrait data;
the first construction module is used for extracting a plurality of parameter ontologies from the data of the plurality of dimensions and constructing a first relation among the parameter ontologies;
the second construction module is used for constructing a second relation between the parameter ontologies and the preset scene categories;
and the processing module is used for carrying out model training according to the parameter ontologies, the preset scene type, the first relation and the second relation to obtain a scene identification model for identifying the scene type.
10. The modeling apparatus of claim 9, further comprising:
a third construction module, configured to extract attributes of the plurality of parameter ontologies from the data of the plurality of dimensions and construct a third relation between the attributes and the parameter ontologies before model training is performed according to the plurality of parameter ontologies, the preset scene categories, the first relation and the second relation to obtain a scene recognition model for recognizing the scene categories;
the processing module is configured to process the plurality of parameter ontologies, the preset scene type, the attributes, the first relation, the second relation, and the third relation by using a preset algorithm model to obtain a scene identification model for identifying the scene type.
11. A storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 7, or the steps of the method according to claim 8, are performed.
12. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 7 or the steps of the method according to claim 8.
CN201910282194.1A 2019-04-09 2019-04-09 Modeling method, modeling device, storage medium, electronic device and scene recognition method Pending CN111814812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282194.1A CN111814812A (en) 2019-04-09 2019-04-09 Modeling method, modeling device, storage medium, electronic device and scene recognition method

Publications (1)

Publication Number Publication Date
CN111814812A (en) 2020-10-23

Family

ID=72843595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282194.1A Pending CN111814812A (en) 2019-04-09 2019-04-09 Modeling method, modeling device, storage medium, electronic device and scene recognition method

Country Status (1)

Country Link
CN (1) CN111814812A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537387A (en) * 2021-08-04 2021-10-22 北京思特奇信息技术股份有限公司 Model design method and device for Internet online operation activities and computer equipment
CN113946222A (en) * 2021-11-17 2022-01-18 杭州逗酷软件科技有限公司 Control method, electronic device and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070127811A1 (en) * 2005-12-07 2007-06-07 Trw Automotive U.S. Llc Virtual reality scene generator for generating training images for a pattern recognition classifier
CN107391577A (en) * 2017-06-20 2017-11-24 中国科学院计算技术研究所 A kind of works label recommendation method and system based on expression vector
CN109086742A (en) * 2018-08-27 2018-12-25 Oppo广东移动通信有限公司 scene recognition method, scene recognition device and mobile terminal
CN109101931A (en) * 2018-08-20 2018-12-28 Oppo广东移动通信有限公司 A kind of scene recognition method, scene Recognition device and terminal device
CN109166170A (en) * 2018-08-21 2019-01-08 百度在线网络技术(北京)有限公司 Method and apparatus for rendering augmented reality scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Xiaoying: "Research on Indoor Scene Recognition Combining Deep Learning and Sparse Representation", China Master's Theses Full-text Database, Information Science and Technology, no. 02 *

Similar Documents

Publication Publication Date Title
CN112434721B (en) Image classification method, system, storage medium and terminal based on small sample learning
CN111797858A (en) Model training method, behavior prediction method, device, storage medium and equipment
WO2022016556A1 (en) Neural network distillation method and apparatus
CN113704388A (en) Training method and device for multi-task pre-training model, electronic equipment and medium
CN111914113A (en) Image retrieval method and related device
CN111814475A (en) User portrait construction method and device, storage medium and electronic equipment
CN111797854B (en) Scene model building method and device, storage medium and electronic equipment
CN115131698B (en) Video attribute determining method, device, equipment and storage medium
CN111797861A (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN111798259A (en) Application recommendation method and device, storage medium and electronic equipment
CN111797288A (en) Data screening method and device, storage medium and electronic equipment
CN113515669A (en) Data processing method based on artificial intelligence and related equipment
CN111797851A (en) Feature extraction method and device, storage medium and electronic equipment
CN111814812A (en) Modeling method, modeling device, storage medium, electronic device and scene recognition method
CN111796925A (en) Method and device for screening algorithm model, storage medium and electronic equipment
CN111797856B (en) Modeling method and device, storage medium and electronic equipment
CN116935188A (en) Model training method, image recognition method, device, equipment and medium
CN111797862A (en) Task processing method and device, storage medium and electronic equipment
CN111709473A (en) Object feature clustering method and device
CN115033700A (en) Cross-domain emotion analysis method, device and equipment based on mutual learning network
CN111797849A (en) User activity identification method and device, storage medium and electronic equipment
CN111797261A (en) Feature extraction method and device, storage medium and electronic equipment
CN111797289A (en) Model processing method and device, storage medium and electronic equipment
CN113434722B (en) Image classification method, device, equipment and computer readable storage medium
CN114265948A (en) Image pushing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination