CN111797856B - Modeling method and device, storage medium and electronic equipment - Google Patents

Modeling method and device, storage medium and electronic equipment

Info

Publication number
CN111797856B
CN111797856B CN201910282120.8A
Authority
CN
China
Prior art keywords
data
entities
entity
logic unit
scene
Prior art date
Legal status
Active
Application number
CN201910282120.8A
Other languages
Chinese (zh)
Other versions
CN111797856A (en)
Inventor
何明
陈仲铭
王雪雪
刘耀勇
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282120.8A priority Critical patent/CN111797856B/en
Publication of CN111797856A publication Critical patent/CN111797856A/en
Application granted granted Critical
Publication of CN111797856B publication Critical patent/CN111797856B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks

Abstract

The embodiment of the application discloses a modeling method, a modeling device, a storage medium, an electronic device, and a scene category inference method. The method comprises the following steps: acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises environmental data, device operation data, user behavior habit data, and social context data; extracting a plurality of entities and entity features from the data of the multiple dimensions; filling information into a preset logic unit framework based on the entities and entity features to obtain a plurality of pieces of logic unit information; generating a plurality of scene categories; and performing model training with the plurality of scene categories based on the plurality of pieces of logic unit information to construct a scene category inference model for inferring scene categories. By introducing object-oriented technology, the scheme greatly improves the flexibility of panoramic view modeling, reduces later maintenance costs, and improves maintenance convenience.

Description

Modeling method and device, storage medium and electronic equipment
Technical Field
The application relates to the field of electronic equipment, and in particular to a modeling method, a modeling device, a storage medium, an electronic device, and a scene category inference method.
Background
With the development of electronic technology, electronic devices such as smartphones are becoming more and more intelligent. The electronic device may perform data processing through a variety of algorithmic models to provide various functions to the user. For example, the electronic device may learn behavior features of the user according to an algorithmic model to provide personalized services to the user.
Disclosure of Invention
The embodiment of the application provides a modeling method, a modeling device, a storage medium, an electronic device, and a scene category inference method, which can improve the quality of intelligent services.
In a first aspect, an embodiment of the present application provides a modeling method, including:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environmental data, device operational data, user behavior habit data, and social context data;
extracting a plurality of entities and entity features from the data of the plurality of dimensions;
filling information into a preset logic unit framework based on the entities and the entity features to obtain a plurality of pieces of logic unit information;
generating a plurality of scene categories;
performing model training with the plurality of scene categories according to the plurality of pieces of logic unit information to construct a scene category inference model for inferring scene categories.
In a second aspect, an embodiment of the present application further provides a scenario category inference method, including:
acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene category inference model to infer the current scene category, wherein the scene category inference model is obtained by performing model training according to a plurality of pieces of logic unit information in different scenes and a plurality of generated scene categories, and the plurality of pieces of logic unit information are obtained by filling a preset logic unit framework with a plurality of entities and entity features in the data of the multiple dimensions.
In a third aspect, an embodiment of the present application further provides a modeling apparatus, including:
an acquisition module, configured to acquire data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environmental data, device operation data, user behavior habit data, and social context data;
the extraction module is used for extracting a plurality of entities and entity characteristics from the data of the plurality of dimensions;
the filling module is used for filling information into a preset logic unit framework based on the entities and the entity features, so as to obtain a plurality of pieces of logic unit information;
The generation module is used for generating a plurality of scene categories;
and the construction module is used for training a Bayesian network with the plurality of pieces of logic unit information and the plurality of scene categories, so as to construct a scene category inference model for inferring scene categories.
In a fourth aspect, embodiments of the present application further provide a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the modeling method described above, or the steps of the scene category inference method described above.
In a fifth aspect, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the modeling method or the steps of the scene category inference method when executing the program.
According to the modeling method provided by the embodiment of the application, data of multiple dimensions are obtained, wherein the data of the multiple dimensions at least comprise environmental data, device operation data, user behavior habit data, and social context data; a plurality of entities and entity features are extracted from the data of the multiple dimensions; information is filled into a preset logic unit framework based on the entities and entity features to obtain a plurality of pieces of logic unit information; a plurality of scene categories are generated; and model training is performed with the plurality of scene categories based on the plurality of pieces of logic unit information to construct a scene category inference model for inferring scene categories. By introducing object-oriented technology, the scheme greatly improves the flexibility of panoramic view modeling, reduces later maintenance costs, and improves maintenance convenience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the application and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a panoramic sensing architecture according to an embodiment of the present application.
Fig. 2 is a first schematic flow chart of a modeling method according to an embodiment of the present application.
Fig. 3 is a second schematic flow chart of a modeling method according to an embodiment of the present application.
Fig. 4 is a third schematic flow chart of a modeling method according to an embodiment of the present application.
Fig. 5 is a schematic view of a scenario architecture of a modeling method according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a modeling apparatus according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a first structure of an electronic device according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a second structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present application based on the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of a panoramic sensing architecture according to an embodiment of the present application. The modeling method is applied to electronic equipment. A panoramic sensing architecture is arranged in the electronic equipment. The panoramic awareness architecture is an integration of hardware and software in an electronic device for implementing the modeling method.
The panoramic sensing architecture comprises an information sensing layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information sensing layer is used for acquiring information of the electronic equipment or information in an external environment. The information sensing layer may include a plurality of sensors. For example, the information sensing layer includes a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, a gesture sensor, a barometer, a heart rate sensor, and the like.
Wherein the distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of an environment in which the electronic device is located. The light sensor may be used to detect light information of an environment in which the electronic device is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect and can be used for realizing automatic control of electronic equipment. The location sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocities of an electronic device in various directions. Inertial sensors may be used to detect motion data of the electronic device. The gesture sensor may be used to sense gesture information of the electronic device. Barometers may be used to detect the air pressure of an environment in which an electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
The data processing layer is used for processing the data acquired by the information sensing layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by an information sensing layer into a higher or more abstract dimension so as to comprehensively process the plurality of single-dimensional data. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer, so that the transformed data meets the processing requirement. Data reduction refers to maximally simplifying the data volume on the premise of keeping the original appearance of the data as much as possible.
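The four processing steps of the data processing layer can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the record field names ("temp", "lux") are assumptions for the example.

```python
# Toy versions of the four data-processing-layer steps described above.

def clean(records):
    """Data cleaning: drop invalid (None-valued) and duplicate records."""
    seen, out = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if None in r.values() or key in seen:
            continue
        seen.add(key)
        out.append(r)
    return out

def integrate(records):
    """Data integration: merge single-dimension records into one multi-dimension view."""
    merged = {}
    for r in records:
        merged.update(r)
    return merged

def transform(view):
    """Data transformation: convert types/formats to meet processing requirements."""
    return {k: float(v) for k, v in view.items()}

def reduce_dims(view, keep):
    """Data reduction: keep only the fields needed downstream."""
    return {k: v for k, v in view.items() if k in keep}

raw = [{"temp": "21"}, {"temp": "21"}, {"lux": None}, {"lux": "300"}]
view = reduce_dims(transform(integrate(clean(raw))), keep={"temp", "lux"})
```

After the pipeline, `view` holds one cleaned, typed, multi-dimensional record ready for the feature extraction layer.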
The feature extraction layer is used for extracting features of the data processed by the data processing layer so as to extract features included in the data. The extracted features can reflect the state of the electronic equipment itself or the state of the user or the environmental state of the environment where the electronic equipment is located, etc.
The feature extraction layer may extract features by filter, wrapper, or integration (embedded) methods, or further process the extracted features.
The filter method filters the extracted features to delete redundant feature data. The wrapper method screens the extracted features. The integration method combines multiple feature extraction methods to construct a more efficient and accurate feature extraction method for extracting features.
The scene modeling layer is used for constructing a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment or the state of a user or the state of the environment and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, a physical relationship model, an object-oriented model, and the like from the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent service for users according to the model constructed by the scene modeling layer. For example, the intelligent service layer may provide basic application services for users, may perform system intelligent optimization for electronic devices, and may provide personalized intelligent services for users.
In addition, the panoramic sensing architecture can also comprise a plurality of algorithms, each of which can be used for analyzing and processing data, and the algorithms can form an algorithm library. For example, the algorithm library may include a Markov algorithm, a latent Dirichlet allocation algorithm, a Bayesian classification algorithm, a support vector machine, a K-means clustering algorithm, a K-nearest neighbor algorithm, a conditional random field, a residual network, a long short-term memory network, a convolutional neural network, a recurrent neural network, and the like.
The embodiment of the application provides a modeling method which can be applied to an electronic device. The electronic device may be a smartphone, tablet computer, gaming device, AR (Augmented Reality) device, automobile, data storage device, audio playback device, video playback device, notebook computer, desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
Referring to fig. 2, fig. 2 is a schematic flow chart of a modeling method according to an embodiment of the present application. Wherein the modeling method comprises the steps of:
110, acquiring data of multiple dimensions, wherein the data of multiple dimensions at least comprises: environmental data, device operational data, user behavior habit data, and social context data.
In the embodiment of the application, the data in multiple dimensions can be panoramic data, namely data information in different dimensions in a certain scene.
The environmental data may be natural environment data detected by the electronic device through its sensors, such as weather, temperature, and geographical location. The device operation data is operation data internal to the electronic device, such as running processes and network signals. The user behavior habit data may be data describing the user portrait, such as favorite games, frequently visited places, photographing habits, and device usage habits. The social context data includes, for example, local landscapes, customs, and laws and regulations.
In embodiments of the present application, the panoramic data may also include other data related to the user or device, such as device resource data, application usage habit data, and the like.
In some embodiments, the acquired panoramic data may be stored in a database having a particular data structure to facilitate data retrieval when a task is subsequently performed. For example, the database may be a structured query language (Structured Query Language, SQL for short) based database.
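As a hedged sketch of such storage (the table and column names are assumptions, not the patent's schema), panoramic data could be kept in an SQL database and recalled per dimension:

```python
import sqlite3

# Minimal SQL-backed store for multi-dimensional panoramic data.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE panoramic_data (
    ts        INTEGER,
    dimension TEXT,   -- e.g. environment / device / behavior / social
    key       TEXT,
    value     TEXT)""")
rows = [
    (1000, "environment", "temperature", "21.5"),
    (1000, "device", "running_process", "browser"),
    (1000, "behavior", "frequent_place", "office"),
]
conn.executemany("INSERT INTO panoramic_data VALUES (?, ?, ?, ?)", rows)
conn.commit()

# A later task retrieves only the dimension it needs:
env = conn.execute(
    "SELECT key, value FROM panoramic_data WHERE dimension = ?",
    ("environment",)).fetchall()
```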
120, extracting a plurality of entities and entity features from the data in a plurality of dimensions.
Specifically, there may be various ways to extract entities and physical features from panoramic data. In some embodiments, the step of extracting a plurality of entities and entity features from the data of a plurality of dimensions may include the following steps:
extracting a plurality of entities from the data in a plurality of dimensions by using a conditional random field;
a plurality of physical features are extracted from the data in a plurality of dimensions using a principal component analysis technique.
An entity can be understood as the supporter of properties — a concrete thing, an individual subject, or a phenomenon; it generally means something that can exist independently and serves as the basis of all attributes and of everything. A conditional random field (CRF) is a discriminative probabilistic model defined over a random field, commonly used for labeling or analyzing sequence data such as natural language text or biological sequences. Based on conditional random fields, entities such as city names, malls, device types, animals, buildings, devices, games, novels, meetings, temperature, and ambient light intensity can be extracted from the acquired panoramic data.
Principal component analysis (Principal Component Analysis, PCA) is a multivariate statistical method for examining correlations among multiple variables. It converts a set of possibly correlated variables into a set of linearly uncorrelated variables through an orthogonal transformation; the converted variables are called principal components. Principal component analysis seeks to combine a number of original, somewhat correlated indices (e.g., P indices) into a new set of mutually independent composite indices that replace the original ones. Based on principal component analysis, features of the above entities, such as temperature, humidity, and frequency of device use, can be extracted from the panoramic data.
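The orthogonal transformation described above can be sketched numerically. This is an illustrative sketch with fabricated feature values (temperature, humidity, device-usage frequency); NumPy's eigendecomposition stands in for whatever implementation an embodiment employs.

```python
import numpy as np

# Rows: samples; columns: correlated entity features (temp, humidity, usage freq).
X = np.array([[21.0, 0.60, 12.0],
              [22.0, 0.62, 15.0],
              [19.0, 0.55,  8.0],
              [25.0, 0.70, 20.0]])

Xc = X - X.mean(axis=0)                 # center each feature
cov = (Xc.T @ Xc) / (len(X) - 1)        # covariance of the correlated features
eigvals, eigvecs = np.linalg.eigh(cov)  # orthogonal transformation basis
order = np.argsort(eigvals)[::-1]       # sort by explained variance, descending
components = Xc @ eigvecs[:, order]     # linearly uncorrelated principal components
```

The projected columns are mutually uncorrelated, which is exactly the property PCA exploits to replace correlated indices with independent composite indices.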
130, based on the plurality of entities and the entity characteristics, information filling is performed on the preset logic unit frame to obtain a plurality of logic unit information.
Specifically, the logic unit framework is a unit framework with a specific structure and logic, used to carry combinations of entities and entity features that have structural and logical relationships.
It should be noted that the logic unit framework may include a plurality of entities and/or entity features, and among the obtained pieces of logic unit information, different pieces may share some of the same entities and/or entity features.
140, generating a plurality of scene categories.
In some embodiments, the scene categories may be obtained by manual labeling, and may be common scenes defined in the daily environment; for user travel, for example, they may include walking, running, riding, driving, subway, high-speed rail, and airplane.
In some embodiments, scene categories may also be obtained in a clustered fashion. That is, the step of "generating a plurality of scene categories" may include the following flow:
clustering the panoramic data by adopting a preset clustering algorithm to obtain a plurality of data sets;
the corresponding classification labels are matched for each data set to generate a plurality of scene categories.
Specifically, in order to avoid generating unnecessary scene categories, a preset clustering algorithm (such as a topic generation model or a deep neural network model) may be adopted to cluster the data of the multiple dimensions, obtaining a plurality of clustered data sets. Each data set includes data of at least one of the multiple dimensions.
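The clustering-then-labeling flow can be sketched as follows. The embodiment names topic generation models and deep neural networks as the preset clustering algorithm; a tiny k-means over fabricated one-dimensional speed samples is substituted here purely for illustration, and the label names are assumptions.

```python
# Cluster panoramic samples into data sets, then match a classification
# label to each data set to generate scene categories.

def kmeans_1d(values, centers, iters=10):
    """Tiny 1-D k-means: returns a cluster index for each value."""
    assign = []
    for _ in range(iters):
        assign = [min(range(len(centers)), key=lambda c: abs(v - centers[c]))
                  for v in values]
        for c in range(len(centers)):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign

speeds = [1.2, 1.4, 0.9, 14.8, 15.3, 60.0, 62.5]    # km/h, fabricated samples
assign = kmeans_1d(speeds, centers=[1.0, 15.0, 60.0])
labels = {0: "walking", 1: "riding", 2: "driving"}   # matched classification labels
scenes = [labels[a] for a in assign]
```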
And 150, performing model training according to the plurality of logic unit information and the plurality of scene categories to construct a scene category inference model for inferring the scene category.
Specifically, a user-personalized scene category inference model may be constructed by calculating the event occurrence probability distribution between the plurality of pieces of logic unit information and the scene categories. That is, the step of "performing model training with the plurality of scene categories based on the plurality of pieces of logic unit information to construct a scene category inference model for inferring scene categories" may include the following steps:
constructing a probability distribution between the plurality of pieces of logic unit information and the plurality of scene categories by adopting a Bayesian network to obtain the scene category inference model, wherein the probability distribution is:

p_t = (p_t(1|x), p_t(2|x), p_t(3|x), ..., p_t(n|x))

where n denotes the index of a scene category, x denotes the logic unit information, and p_t(n|x) denotes the probability that scene category n occurs when the logic unit information at time t is x. This probability distribution is the output of the user-personalized scene category inference model.
According to the user's history and current panoramic data, the obtained personalized scene category inference model can analyze the current panoramic data of a single user (such as environmental data, device operation data, user behavior habit data, and social context data) in real time to infer the scene category, finally outputting the scene category probability distribution at the current time t: p_t = (p_t(1|x), p_t(2|x), p_t(3|x), ..., p_t(n|x)). The scene category with the highest probability can then be selected as the user's scene category at time t.
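The selection step above amounts to an argmax over the model's output distribution. The probabilities below are fabricated for illustration; the scene names are assumptions.

```python
# p_t(n|x) for each scene category n, as output by the inference model
# for logic unit information x at time t (values fabricated).
p_t = {
    "office":  0.62,
    "commute": 0.25,
    "home":    0.13,
}
assert abs(sum(p_t.values()) - 1.0) < 1e-9  # a valid distribution sums to 1
current_scene = max(p_t, key=p_t.get)       # scene category chosen at time t
```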
In some embodiments, referring to fig. 3, fig. 3 is a second flowchart of a modeling method according to an embodiment of the present application.
The step of "filling information into a preset logic unit framework based on the plurality of entities and entity features to obtain a plurality of pieces of logic unit information" may include the following steps:
131, constructing a first relationship between entities and a second relationship between the entities and the entity features to obtain an entity relationship library;
and 132, filling information into a preset logic unit framework according to the entity relationship library, the plurality of entities, and the entity features to obtain a plurality of pieces of logic unit information.
Specifically, the purpose here is to build an entity-relationship model, that is, a knowledge base containing entities, relationships between entities, and attributes owned by the entities. This knowledge base can then be used for later scene category discrimination.
In some embodiments, reference is made to fig. 4 and fig. 5. Fig. 4 is a third schematic flow chart of a modeling method according to an embodiment of the present application. Fig. 5 is a schematic view of a scenario architecture of a modeling method according to an embodiment of the present application.
In some embodiments, the step of "constructing a first relationship between entities and a second relationship between the entities and the entity features to obtain an entity relationship library" may include the following procedure:
1311, adopting an entity connection model to construct the first relationship between entities and the second relationship between the entities and the entity features, so as to obtain the entity relationship library.
The entity relationship (Entity Relationship Diagram, E-R) model provides a method for representing entity types, attributes, and relationships, describing a conceptual model of the real world. Any ontology describing a particular domain of discourse can then be mapped onto a logical model, such as a relational model. The first relationship between entities may be of two types: a parallel relationship or an inclusion relationship. For example, "environment" and "tree" have an inclusion relationship, "environment" and "grass" have an inclusion relationship, and "tree" and "grass" have a parallel relationship.
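A minimal sketch of such an entity relationship library, using the example entities above (the dictionary layout and feature names are assumptions, not the patent's data structure):

```python
# First relations between entities: inclusion or parallel.
relations = {
    ("environment", "tree"):  "inclusion",
    ("environment", "grass"): "inclusion",
    ("tree", "grass"):        "parallel",
}

# Second relations: entity -> its entity features.
entity_features = {
    "environment": ["temperature", "humidity"],
    "tree":        ["height"],
}

def relation_of(a, b):
    """Look up the first relation between two entities, in either order."""
    return relations.get((a, b)) or relations.get((b, a))
```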
In some embodiments, with continued reference to fig. 4 and 5, the structure of the preset logic cell framework may include: class, object identification, inheritance, and object properties.
The step of "filling information into a preset logic unit framework according to the entity relationship library, the plurality of entities, and the entity features to obtain a plurality of pieces of logic unit information" may include the following steps:
1321, determining a target entity from the plurality of entities according to the entity relationship library and the structure of the logic unit framework, so as to fill the "object" logic unit;
1322, based on the target entity and the entity relationship library, determining the corresponding entities, entity features, or first relationships to fill the other logic units in the logic unit framework.
Specifically, for the obtained entities and entity features, an entity connection model is adopted to construct five logic units — object, class, object identification, inheritance, and object attributes — centered on objects that have specific structures and functions and are connected with one another.
It should be noted that a "class" mainly refers to a category of entity objects. For example, the applications on an electronic device can be considered a "class": just as "fruit" is a class containing many kinds of fruit, the "application" class contains many types of applications;
An "object" is a specific entity. For example, a specific application can be regarded as a specific object; a particular APP belongs to the "application" class;
The "object identification" is the identification number of an object. A string of digits (e.g., an ID number) may be used to represent a specific application, which is more convenient for storage and presentation;
"Inheritance" mainly refers to inheritance relationships between entities. For example, an APP inherits from the "application" class and therefore has the characteristics of that class;
"Object attributes" mainly indicate which attributes an object contains. An APP, for example, may have attributes such as size, version number, developer, and function.
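The five logic units above map naturally onto object-oriented constructs. The sketch below is illustrative only (the example application and its attributes are assumptions), not the patent's code:

```python
import itertools

_ids = itertools.count(1)  # source of "object identification" numbers

class Application:                      # "class": a category of entities
    def __init__(self, name, version):
        self.object_id = next(_ids)     # "object identification"
        self.name = name                # "object attributes"
        self.version = version

class BrowserApp(Application):          # "inheritance": has Application's traits
    def __init__(self, name, version, engine):
        super().__init__(name, version)
        self.engine = engine            # attribute specific to the subclass

app = BrowserApp("ExampleBrowser", "2.1", engine="WebKit")  # "object"
```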
In the embodiment of the application, because the basic knowledge is established on the entity-relationship knowledge base, a set of rules does not need to be re-established separately for each scene category. After the entity-relationship knowledge base is obtained, when a new scene category appears, the new category only needs to be added to the entity-relationship knowledge base for training; the features do not need to be reconstructed, and the whole model does not need to be trained again.
It is to be understood that in embodiments of the application, terms such as "first," "second," and the like are used merely to distinguish between similar objects and not necessarily to describe a particular order or sequence, such that the described objects may be interchanged where appropriate.
In particular, the application is not limited by the order of execution of the steps described, as some of the steps may be performed in other orders or concurrently without conflict.
As can be seen from the above, in the modeling method provided by the embodiment of the present application, data of multiple dimensions are obtained, wherein the data of the multiple dimensions at least comprise environmental data, device operation data, user behavior habit data, and social context data; a plurality of entities and entity features are extracted from the data of the multiple dimensions; information is filled into a preset logic unit framework based on the entities and entity features to obtain a plurality of pieces of logic unit information; a plurality of scene categories are generated; and model training is performed according to the plurality of pieces of logic unit information and the plurality of scene categories to construct a scene category inference model. By introducing object-oriented technology, the scheme greatly improves the flexibility of panoramic view modeling, reduces later maintenance costs, and improves maintenance convenience. In addition, by using the user's all-around panoramic data, the finally constructed panoramic view has good personalized characteristics, greatly improving recognition accuracy and the degree of personalization; it provides more accurate panoramic user information for subsequent intelligent services based on panoramic categories and can significantly improve the quality and level of intelligent services.
The embodiment of the application also provides a scene category inference method, which comprises the following steps:
acquiring data of multiple dimensions in a current scene;
the data of the plurality of dimensions is processed according to a pre-trained scene category inference model to infer a current scene category. The scene category inference model is obtained by performing model training according to a plurality of pieces of logic unit information in different scenes and a plurality of generated scene categories, wherein the plurality of pieces of logic unit information are obtained by performing information filling on a preset logic unit frame through a plurality of entities and entity characteristics in the data of a plurality of dimensions.
In the embodiment of the application, the information sensing layer collects panoramic data of multiple dimensions in the current scene through the sensors in the electronic device, such as environmental data, device operation data, user behavior habit data, and social context data. The information sensing layer then provides the collected panoramic data to the data processing layer for data cleaning, data integration, data transformation, data reduction, and other processing. Next, the data processing layer provides the processed data to the feature extraction layer, which takes the panoramic data from the data processing layer as the data for feature extraction. The scene modeling layer models based on the features extracted by the feature extraction layer: it fills a preset logic unit framework with the obtained features to obtain a plurality of pieces of logic unit information, and trains a specified model based on them; after training, a scene category inference model for identifying scene categories is obtained. Finally, the intelligent service layer identifies the scene categories of scenes under different panoramic data according to the model constructed by the scene modeling layer. For example, data of multiple dimensions may be acquired — the ambient sound is weak, the air temperature is almost unchanged, the device usage frequency is low, the amount of device motion is almost zero, and the current time falls within working hours — from which it can be inferred that the user is currently in an office scene.
The embodiment of the application also provides a modeling apparatus. The modeling apparatus may be integrated in an electronic device. The electronic device may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook computer, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a modeling apparatus according to an embodiment of the present application. The modeling apparatus 200 may include: an acquisition module 201, an extraction module 202, a filling module 203, a generation module 204, and a construction module 205, wherein:
an acquiring module 201, configured to acquire data in multiple dimensions, where the data in multiple dimensions at least includes: environmental data, device operational data, user behavior habit data, and social context data;
an extraction module 202, configured to extract a plurality of entities and entity features from the data in the plurality of dimensions;
the filling module 203 is configured to fill information into a preset logic unit frame based on the plurality of entities and the entity characteristics, so as to obtain a plurality of logic unit information;
A generating module 204, configured to generate a plurality of scene categories;
a construction module 205, configured to perform model training with the plurality of scene categories according to the plurality of logic unit information, so as to construct a scene category inference model for inferring a scene category.
In some embodiments, the filling module 203 may include:
the construction submodule is used for constructing a first relation between the entities and a second relation between the entities and the entity characteristics to obtain an entity relation library;
and the filling sub-module is used for filling information into a preset logic unit frame according to the entity relation library, the plurality of entities and the entity characteristics so as to obtain a plurality of logic unit information.
In some embodiments, the structure of the preset logic unit frame comprises five logic units: class, object, identifier, inheritance, and object attribute; the filling sub-module may be configured to:
determining a target entity from the entities according to the entity relation library and the structure of the logic unit framework so as to fill the object;
and determining corresponding entities, entity characteristics or first relations based on the target entities and the entity relation library to fill information into other logic units in the logic unit framework.
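The five-unit frame and the two kinds of relations can be rendered as a small data structure. The following Python sketch is a hypothetical illustration: the field names, the relation keys, and the sample entities are assumptions chosen to mirror the translated claim wording, not an official schema from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical rendering of the five-unit logic frame; field names
# follow the translated claim wording, not an official schema.
@dataclass
class LogicUnit:
    cls: str                          # "class" logic unit
    obj: str                          # "object" unit (the target entity)
    identifier: str                   # "identifier" logic unit
    inherits: Optional[str] = None    # "inheritance" logic unit
    attributes: dict = field(default_factory=dict)  # "object attribute"

# entity relation library: first relations (entity-entity) and
# second relations (entity-feature); sample data is made up.
relations = {
    ("phone", "is_a"): "device",
    ("phone", "located_in"): "office",
}
entity_features = {"phone": {"motion": 0.0, "screen_on": False}}

def fill_frame(target):
    # the target entity fills the object unit; the relation library
    # supplies the remaining logic units
    return LogicUnit(
        cls=relations[(target, "is_a")],
        obj=target,
        identifier=f"{target}#0",
        inherits=relations[(target, "is_a")],
        attributes=entity_features[target],
    )

unit = fill_frame("phone")
print(unit.cls, unit.attributes["motion"])  # → device 0.0
```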
In some embodiments, the build sub-module may be used to: construct the first relation between the entities and the second relation between the entities and the entity characteristics by adopting an entity linking model, so as to obtain the entity relation library.
In some embodiments, the extraction module 202 may be configured to:
extracting a plurality of entities from the data in the plurality of dimensions by using a conditional random field;
a plurality of entity features are extracted from the data in the plurality of dimensions using a principal component analysis technique.
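The principal component analysis step can be sketched with a plain SVD. In this hypothetical Python example the panoramic data values are fabricated, and the conditional random field entity-extraction step is omitted because it requires a trained sequence model; only the PCA-style feature extraction is shown.

```python
import numpy as np

# Minimal PCA via SVD, standing in for the "principal component
# analysis technique"; data values are made up for illustration.
X = np.array([
    [32.0, 0.1, 1.5, 0.0],   # e.g. volume, temp delta, use freq, motion
    [55.0, 2.0, 8.0, 1.2],
    [30.0, 0.2, 1.0, 0.1],
    [60.0, 1.8, 9.5, 1.5],
])

Xc = X - X.mean(axis=0)            # center each dimension
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:2].T           # project onto top-2 principal components
print(features.shape)              # → (4, 2)
```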
In some embodiments, the generation module 204 may be configured to:
clustering the data of the multiple dimensions by adopting a preset clustering algorithm to obtain multiple data sets;
the corresponding classification labels are matched for each data set to generate a plurality of scene categories.
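The cluster-then-label step can be sketched as follows. This Python example is a toy stand-in for the "preset clustering algorithm": it performs a single k-means-style assignment pass against fixed seed centers, and both the data points and the classification labels are fabricated for illustration.

```python
import numpy as np

# Toy stand-in for clustering multi-dimensional data and matching a
# classification label to each resulting data set; all values invented.
data = np.array([[0.1, 0.0], [0.2, 0.1], [5.0, 4.8], [5.2, 5.1]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])   # preset initial centers

# assign each sample to its nearest center (one k-means assignment pass)
assign = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)

labels = {0: "office", 1: "commute"}           # matched classification labels
scene_categories = [labels[int(a)] for a in assign]
print(scene_categories)  # → ['office', 'office', 'commute', 'commute']
```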
In some embodiments, the build module 205 may be used to:
constructing probability distribution between the logic unit information and the scene category by adopting a Bayesian network to obtain a scene category inference model for inferring the scene category, wherein the probability distribution is as follows:
P_t = P_t(1|x), P_t(2|x), P_t(3|x), ..., P_t(n|x)
wherein n represents the index of the scene category, x represents the logic unit information, and P_t(n|x) represents the probability that scene category n occurs when the logic unit information at time t is x.
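The conditional distribution P_t(n|x) can be estimated from observed (logic unit information, scene category) pairs. The Python sketch below uses a simple frequency estimate in place of a full Bayesian network, and the observation pairs are fabricated for illustration.

```python
from collections import Counter, defaultdict

# Frequency estimate of P(n | x): probability of scene category n
# given logic unit information x; observations are made up.
obs = [("unit_A", "office"), ("unit_A", "office"), ("unit_A", "home"),
       ("unit_B", "commute")]

counts = defaultdict(Counter)
for x, n in obs:
    counts[x][n] += 1

def p(n, x):
    # conditional probability from co-occurrence counts
    total = sum(counts[x].values())
    return counts[x][n] / total

print(p("office", "unit_A"))  # → 0.666...
```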
As can be seen from the above, the modeling apparatus 200 of this embodiment acquires data of multiple dimensions, where the data at least includes: environmental data, device operation data, user behavior habit data, and social context data; extracts multiple entities and entity features from the multi-dimensional data; fills a preset logic unit frame with information based on the entities and entity features to obtain multiple pieces of logic unit information; generates multiple scene categories; and performs model training with the logic unit information and the scene categories to construct a scene category inference model. By introducing object-oriented techniques, this scheme greatly improves the flexibility of panoramic modeling and reduces later maintenance cost while improving maintainability. In addition, because the user's all-around panoramic data is used, the finally constructed panorama has good personalized characteristics, which greatly improves recognition accuracy and the degree of personalization, provides more accurate panoramic user information for subsequent intelligent services based on panoramic categories, and can significantly improve the quality and level of intelligent services.
The embodiment of the application also provides an electronic device. The electronic device may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook computer, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
Referring to fig. 7, fig. 7 is a schematic diagram of a first structure of an electronic device 300 according to an embodiment of the present application. Wherein the electronic device 300 comprises a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is the control center of the electronic device 300. It connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or calling computer programs stored in the memory 302 and calling data stored in the memory 302, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to the processes of one or more computer programs into the memory 302, and executes the computer programs stored in the memory 302 to implement the following functions:
Acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environmental data, device operational data, user behavior habit data, and social context data;
extracting a plurality of entities and entity features from the data of the plurality of dimensions;
based on the entities and the entity characteristics, filling information into a preset logic unit frame to obtain a plurality of logic unit information;
generating a plurality of scene categories;
training is performed with a plurality of scene categories based on a plurality of logical unit information to construct a scene category inference model for inferring scene categories.
In some embodiments, when the information is filled into the preset logic unit frame based on the plurality of entities and the entity characteristics to obtain the plurality of logic unit information, the processor 301 is configured to perform the following steps:
constructing a first relation between entities and a second relation between the entities and the entity characteristics to obtain an entity relation library;
and filling information into a preset logic unit frame according to the entity relation library, the plurality of entities and the entity characteristics to obtain a plurality of logic unit information.
In some embodiments, the structure of the preset logic unit frame comprises five logic units: class, object, identifier, inheritance, and object attribute; when filling information into the preset logic unit frame according to the entity relation library, the plurality of entities, and the entity characteristics, the processor 301 is configured to perform the following steps:
Determining a target entity from the entities according to the entity relation library and the structure of the logic unit framework so as to fill the object;
and determining corresponding entities, entity characteristics or first relations based on the target entities and the entity relation library to fill information into other logic units in the logic unit framework.
In some embodiments, when constructing a first relationship between entities and a second relationship between entities and entity characteristics, to obtain an entity relationship library, the processor 301 is configured to perform the following steps:
and constructing the first relation between the entities and the second relation between the entities and the entity characteristics by adopting an entity linking model, so as to obtain the entity relation library.
In some embodiments, when extracting a plurality of entities and entity features from the data of the plurality of dimensions, the processor 301 is configured to perform the following steps:
extracting a plurality of entities from the data in the plurality of dimensions by using a conditional random field;
and extracting a plurality of entity features from the data in the plurality of dimensions by adopting a principal component analysis technology.
In some embodiments, when generating the plurality of scene categories, the processor 301 is configured to perform the following steps:
Clustering the data of the multiple dimensions by adopting a preset clustering algorithm to obtain multiple data sets;
the corresponding classification labels are matched for each data set to generate a plurality of scene categories.
In some embodiments, when performing model training with the plurality of scene categories according to the plurality of logic unit information to construct the scene category inference model, the processor 301 is configured to perform the following steps:
constructing probability distribution between the logic unit information and the scene category by adopting a Bayesian network to obtain the scene category inference model, wherein the probability distribution is as follows:
P_t = P_t(1|x), P_t(2|x), P_t(3|x), ..., P_t(n|x)
wherein n represents the index of the scene category, x represents the logic unit information, and P_t(n|x) represents the probability that scene category n occurs when the logic unit information at time t is x.
In some embodiments, the processor 301 in the electronic device 300 may further load instructions corresponding to the processes of one or more computer programs into the memory 302 according to the following steps, and the processor 301 executes the computer programs stored in the memory 302, thereby implementing the following functions:
acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene category inference model to infer a current scene category, wherein the scene category inference model is obtained by performing model training according to multiple pieces of logic unit information in different scenes and multiple generated scene categories, and the multiple pieces of logic unit information are obtained by performing information filling on a preset logic unit frame through multiple entities and entity characteristics in the data of the multiple dimensions.
The memory 302 may be used to store computer programs and data. The computer programs stored in the memory 302 contain instructions executable by the processor and may constitute various functional modules. The processor 301 executes various functional applications and performs data processing by calling the computer programs stored in the memory 302.
In some embodiments, referring to fig. 8, fig. 8 is a schematic diagram of a second structure of an electronic device 300 according to an embodiment of the present application.
Wherein the electronic device 300 further comprises: a display 303, a control circuit 304, an input unit 305, a sensor 306, and a power supply 307. The processor 301 is electrically connected to the display 303, the control circuit 304, the input unit 305, the sensor 306, and the power supply 307.
The display 303 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 304 is electrically connected to the display 303, and is used for controlling the display 303 to display information.
The input unit 305 may be used to receive input numbers, character information or user characteristic information (e.g., a fingerprint), and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. The input unit 305 may include a fingerprint recognition module.
The sensor 306 is used to collect information of the electronic device itself or information of a user or external environment information. For example, the sensor 306 may include a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, a gesture sensor, a barometer, a heart rate sensor, and the like.
The power supply 307 is used to power the various components of the electronic device 300. In some embodiments, the power supply 307 may be logically connected to the processor 301 through a power management system, thereby implementing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 8, the electronic device 300 may further include a camera, a bluetooth module, etc., which will not be described herein.
As can be seen from the above, the embodiment of the present application provides an electronic device that performs the following steps: acquiring data of multiple dimensions, where the data at least includes environmental data, device operation data, user behavior habit data, and social context data; extracting multiple entities and entity features from the multi-dimensional data; filling a preset logic unit frame with information based on the entities and entity features to obtain multiple pieces of logic unit information; generating multiple scene categories; and performing model training with the logic unit information and the scene categories to construct a scene category inference model. By introducing object-oriented techniques, this scheme greatly improves the flexibility of panoramic modeling and reduces later maintenance cost while improving maintainability. In addition, because the user's all-around panoramic data is used, the finally constructed panorama has good personalized characteristics, which greatly improves recognition accuracy and the degree of personalization, provides more accurate panoramic user information for subsequent intelligent services based on panoramic categories, and can significantly improve the quality and level of intelligent services.
The embodiment of the present application also provides a storage medium, in which a computer program is stored, where when the computer program runs on a computer, the computer executes the modeling method or the scene category inference method according to any one of the above embodiments.
For example, in some embodiments, the computer program, when run on the computer, performs the steps of:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environmental data, device operational data, user behavior habit data, and social context data;
extracting a plurality of entities and entity features from the data of the plurality of dimensions;
based on the entities and the entity characteristics, filling information into a preset logic unit frame to obtain a plurality of logic unit information;
generating a plurality of scene categories;
model training is performed with the plurality of scene categories according to the plurality of logic unit information to construct a scene category inference model for inferring scene categories.
For another example, in some embodiments, the computer program, when run on the computer, performs the steps of:
Acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene category inference model to infer a current scene category, wherein the scene category inference model is obtained by performing model training according to multiple pieces of logic unit information in different scenes and multiple generated scene categories, and the multiple pieces of logic unit information are obtained by performing information filling on a preset logic unit frame through multiple entities and entity characteristics in the data of the multiple dimensions.
It should be noted that those skilled in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, which may include, but is not limited to: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
The modeling method, modeling apparatus, storage medium, electronic device, and scene category inference method provided by the embodiments of the present application have been described in detail above. The principles and implementations of the present application are described herein with specific examples, and the description of the above embodiments is intended only to help understand the methods of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (9)

1. A modeling method, the modeling method comprising:
acquiring data of multiple dimensions, wherein the data of the multiple dimensions at least comprises: environmental data, device operational data, user behavior habit data, and social context data;
extracting a plurality of entities and entity features from the data of the plurality of dimensions;
based on the entities and the entity characteristics, filling information into a preset logic unit frame to obtain a plurality of logic unit information; the structure of the preset logic unit frame comprises five logic units: class, object, identifier, inheritance, and object attribute, and the information filling of the preset logic unit frame comprises the following steps: constructing a first relation between entities and a second relation between the entities and the entity characteristics to obtain an entity relation library; determining a target entity from the entities according to the entity relation library and the structure of the logic unit frame so as to fill the object logic unit in the logic unit frame; determining corresponding entities, entity characteristics or first relations based on the target entities and the entity relation library to fill information into other logic units in the logic unit frame;
Generating a plurality of scene categories;
model training is performed with the plurality of scene categories according to the plurality of logic unit information to construct a scene category inference model for inferring scene categories.
2. The modeling method of claim 1, wherein constructing a first relationship between entities and a second relationship between entities and entity features to obtain an entity relationship library comprises:
and constructing the first relation between the entities and the second relation between the entities and the entity characteristics by adopting an entity linking model, so as to obtain the entity relation library.
3. The modeling method of claim 1, wherein extracting a plurality of entities and entity features from the data of the plurality of dimensions comprises:
extracting a plurality of entities from the data in the plurality of dimensions by using a conditional random field;
and extracting a plurality of entity features from the data in the plurality of dimensions by adopting a principal component analysis technology.
4. The modeling method of claim 1, wherein the generating a plurality of scene categories comprises:
clustering the data of the multiple dimensions by adopting a preset clustering algorithm to obtain multiple data sets;
The corresponding classification labels are matched for each data set to generate a plurality of scene categories.
5. The modeling method of claim 1, wherein the training the model with the plurality of scene categories based on the plurality of logical unit information to construct a scene category inference model for inferring a scene category comprises:
constructing probability distribution between the logic unit information and the scene category by adopting a Bayesian network to obtain a scene category inference model for inferring the scene category, wherein the probability distribution is as follows:
P_t = P_t(1|x), P_t(2|x), P_t(3|x), ..., P_t(n|x)
wherein n represents the index of the scene category, x represents the logic unit information, and P_t(n|x) represents the probability that scene category n occurs when the logic unit information at time t is x.
6. A scene category inference method, the scene category inference method comprising:
acquiring data of multiple dimensions in a current scene;
processing the data of the multiple dimensions according to a pre-trained scene category inference model to infer a current scene category, wherein the scene category inference model is obtained by performing model training according to multiple pieces of logic unit information under different scenes and multiple generated scene categories, and the multiple pieces of logic unit information are obtained by performing information filling on a preset logic unit frame through multiple entities and entity characteristics in the data of the multiple dimensions; the structure of the preset logic unit frame comprises five logic units: class, object, identifier, inheritance, and object attribute, and the information filling of the preset logic unit frame comprises the following steps: constructing a first relation between entities and a second relation between the entities and the entity characteristics to obtain an entity relation library; determining a target entity from the entities according to the entity relation library and the structure of the logic unit frame so as to fill the object logic unit; and determining corresponding entities, entity characteristics or first relations based on the target entities and the entity relation library to fill information into other logic units in the logic unit frame.
7. A modeling apparatus, characterized in that the modeling apparatus comprises:
the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring data with multiple dimensions, and the data with multiple dimensions at least comprises: environmental data, device operational data, user behavior habit data, and social context data;
the extraction module is used for extracting a plurality of entities and entity characteristics from the data of the plurality of dimensions;
the filling module is used for filling information into a preset logic unit frame based on the entities and the entity characteristics so as to obtain a plurality of logic unit information; the structure of the preset logic unit frame comprises five logic units: class, object, identifier, inheritance, and object attribute, and the information filling of the preset logic unit frame comprises the following steps: constructing a first relation between entities and a second relation between the entities and the entity characteristics to obtain an entity relation library; determining a target entity from the entities according to the entity relation library and the structure of the logic unit frame so as to fill the object logic unit; determining corresponding entities, entity characteristics or first relations based on the target entities and the entity relation library to fill information into other logic units in the logic unit frame;
The generation module is used for generating a plurality of scene categories;
and the construction module is used for carrying out model training on the plurality of logic unit information and the plurality of scene categories so as to construct a scene category inference model for inferring the scene category.
8. A storage medium having stored thereon a computer program, which when executed by a processor performs the steps of the method according to any one of claims 1-5 or the steps of the method according to claim 6.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1-5 or the steps of the method according to claim 6 when the program is executed by the processor.
CN201910282120.8A 2019-04-09 2019-04-09 Modeling method and device, storage medium and electronic equipment Active CN111797856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282120.8A CN111797856B (en) 2019-04-09 2019-04-09 Modeling method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282120.8A CN111797856B (en) 2019-04-09 2019-04-09 Modeling method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111797856A CN111797856A (en) 2020-10-20
CN111797856B true CN111797856B (en) 2023-12-12

Family

ID=72805757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282120.8A Active CN111797856B (en) 2019-04-09 2019-04-09 Modeling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111797856B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241260B (en) * 2020-10-22 2022-04-26 宁波和利时智能科技有限公司 Modeling method and system for physical entity of discrete industry
CN112764802A (en) * 2021-01-19 2021-05-07 挂号网(杭州)科技有限公司 Business logic customization method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254194A (en) * 2011-07-19 2011-11-23 清华大学 Supervised manifold learning-based scene classifying method and device
CN102460431A (en) * 2009-05-08 2012-05-16 佐科姆有限公司 System and method for behavioural and contextual data analytics
CN107339990A (en) * 2017-06-27 2017-11-10 北京邮电大学 Multi-pattern Fusion alignment system and method
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108875693A (en) * 2018-07-03 2018-11-23 北京旷视科技有限公司 A kind of image processing method, device, electronic equipment and its storage medium
CN108898174A (en) * 2018-06-25 2018-11-27 Oppo(重庆)智能科技有限公司 A kind of contextual data acquisition method, contextual data acquisition device and electronic equipment
CN109033053A (en) * 2018-07-10 2018-12-18 广州极天信息技术股份有限公司 A kind of knowledge edition method and device based on scene
CN109101931A (en) * 2018-08-20 2018-12-28 Oppo广东移动通信有限公司 A kind of scene recognition method, scene Recognition device and terminal device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10878339B2 (en) * 2017-01-27 2020-12-29 Google Llc Leveraging machine learning to predict user generated content
US20190057320A1 (en) * 2017-08-16 2019-02-21 ODH, Inc. Data processing apparatus for accessing shared memory in processing structured data for modifying a parameter vector data structure


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image scene classification with adaptive topic modeling under category constraints; Tang Yingjun et al.; Journal of Chinese Computer Systems; full text *

Also Published As

Publication number Publication date
CN111797856A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
CN111800331A (en) Notification message pushing method and device, storage medium and electronic equipment
CN105989174B (en) Region-of-interest extraction element and region-of-interest extracting method
CN111814475A (en) User portrait construction method and device, storage medium and electronic equipment
CN111797854A (en) Scene model establishing method and device, storage medium and electronic equipment
CN111797861A (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN111797302A (en) Model processing method and device, storage medium and electronic equipment
CN111797856B (en) Modeling method and device, storage medium and electronic equipment
CN111753683A (en) Human body posture identification method based on multi-expert convolutional neural network
CN111797851A (en) Feature extraction method and device, storage medium and electronic equipment
CN111796925A (en) Method and device for screening algorithm model, storage medium and electronic equipment
CN114783601A (en) Physiological data analysis method and device, electronic equipment and storage medium
CN116935188B (en) Model training method, image recognition method, device, equipment and medium
Qiao et al. Group behavior recognition based on deep hierarchical network
Shi et al. Sensor‐based activity recognition independent of device placement and orientation
CN111798019B (en) Intention prediction method, intention prediction device, storage medium and electronic equipment
CN111797175B (en) Data storage method and device, storage medium and electronic equipment
CN111814812A (en) Modeling method, modeling device, storage medium, electronic device and scene recognition method
CN111797874B (en) Behavior prediction method and device, storage medium and electronic equipment
CN111797849A (en) User activity identification method and device, storage medium and electronic equipment
CN111797867A (en) System resource optimization method and device, storage medium and electronic equipment
CN111797862A (en) Task processing method and device, storage medium and electronic equipment
CN111797261A (en) Feature extraction method and device, storage medium and electronic equipment
CN111797986A (en) Data processing method, data processing device, storage medium and electronic equipment
CN111796663B (en) Scene recognition model updating method and device, storage medium and electronic equipment
CN111797875B (en) Scene modeling method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant