CN111797875B - Scene modeling method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111797875B
Authority
CN
China
Prior art keywords
feature
features
data
scene
preset
Prior art date
Legal status
Active
Application number
CN201910282458.3A
Other languages
Chinese (zh)
Other versions
CN111797875A (en)
Inventor
陈仲铭
何明
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282458.3A priority Critical patent/CN111797875B/en
Publication of CN111797875A publication Critical patent/CN111797875A/en
Application granted granted Critical
Publication of CN111797875B publication Critical patent/CN111797875B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/29 Graphical models, e.g. Bayesian networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application provides a scene modeling method, a scene modeling device, a storage medium and an electronic device, wherein the scene modeling method comprises the following steps: obtaining perception data of a current scene; obtaining a plurality of features of the current scene according to the perception data; performing value filling on the plurality of features to obtain a plurality of feature key value pairs; and modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene. In the scene modeling method provided by the embodiment of the application, the electronic device can perform value filling on the plurality of features of the current scene and train on the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can then query the user's state according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.

Description

Scene modeling method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of electronic technology, and in particular to a scene modeling method and device, a storage medium, and an electronic device.
Background
With the development of electronic technology, electronic devices such as smartphones are capable of providing more and more services to users. For example, the electronic device may provide social services, navigation services, travel recommendation services, and the like to the user. In order to be able to provide targeted, personalized services to users, the electronic device needs to identify the scene in which the user is located.
Disclosure of Invention
The embodiments of the application provide a scene modeling method, a scene modeling device, a storage medium and an electronic device, which can improve the degree of intelligence of the electronic device.
The embodiment of the application provides a scene modeling method, which comprises the following steps:
obtaining perception data of a current scene;
acquiring a plurality of characteristics of the current scene according to the perception data;
performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature;
and modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene.
The embodiment of the application also provides a scene modeling device, which comprises:
the first acquisition module is used for acquiring the perception data of the current scene;
The second acquisition module is used for acquiring a plurality of characteristics of the current scene according to the perception data;
the value filling module is used for filling the plurality of features with values to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature;
and the modeling module is used for modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene.
The embodiment of the application also provides a storage medium, wherein the storage medium stores a computer program, and when the computer program runs on a computer, the computer program causes the computer to execute the scene modeling method.
The embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the scene modeling method by calling the computer program stored in the memory.
The scene modeling method provided by the embodiment of the application comprises the following steps: obtaining perception data of a current scene; acquiring a plurality of features of the current scene according to the perception data; performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature; and modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene. In this scene modeling method, the electronic device can perform value filling on the plurality of features of the current scene and train on the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can then query the user's state according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the application and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a scenario modeling method provided by an embodiment of the present application.
Fig. 2 is a schematic flow chart of a scene modeling method according to an embodiment of the present application.
Fig. 3 is a second flowchart of a scene modeling method according to an embodiment of the present application.
Fig. 4 is a third flowchart of a scene modeling method according to an embodiment of the present application.
Fig. 5 is a fourth flowchart of a scene modeling method according to an embodiment of the present application.
Fig. 6 is a fifth flowchart of a scene modeling method according to an embodiment of the present application.
Fig. 7 is a sixth flowchart of a scene modeling method according to an embodiment of the present application.
Fig. 8 is a seventh flowchart of a scene modeling method according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a first structure of a scene modeling apparatus according to an embodiment of the present application.
Fig. 10 is a schematic diagram of a second structure of a scene modeling apparatus according to an embodiment of the present application.
Fig. 11 is a schematic diagram of a first structure of an electronic device according to an embodiment of the present application.
Fig. 12 is a schematic diagram of a second structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present application based on the embodiments of the present application.
Referring to fig. 1, fig. 1 is an application scenario schematic diagram of a scenario modeling method according to an embodiment of the present application. The scene modeling method is applied to the electronic equipment. A panoramic sensing architecture is arranged in the electronic equipment. The panoramic sensing architecture is an integration of hardware and software for implementing the scene modeling method in the electronic device.
The panoramic sensing architecture comprises an information sensing layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information sensing layer is used for acquiring information of the electronic equipment or information in an external environment. The information sensing layer may include a plurality of sensors. For example, the information sensing layer includes a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, a gesture sensor, a barometer, a heart rate sensor, and the like. The sensors included in the information sensing layer may not be limited to the above-listed sensors, and may include sensors not listed.
Wherein the distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of an environment in which the electronic device is located. The light sensor may be used to detect light information of an environment in which the electronic device is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect and can be used for realizing automatic control of electronic equipment. The location sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocities of an electronic device in various directions. Inertial sensors may be used to detect motion data of the electronic device. The gesture sensor may be used to sense gesture information of the electronic device. Barometers may be used to detect the air pressure of an environment in which an electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
The data processing layer is used for processing the data acquired by the information sensing layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
Data cleaning refers to cleaning the large amount of data acquired by the information sensing layer to remove invalid and duplicate data. Data integration refers to integrating multiple single-dimensional data acquired by the information sensing layer into a higher or more abstract dimension so that they can be processed together. Data transformation refers to converting the data type or format of the data acquired by the information sensing layer so that the transformed data meets processing requirements. Data reduction refers to reducing the data volume as much as possible while preserving the original character of the data.
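As a minimal illustration of the data-processing layer described above, the sketch below cleans and transforms a small batch of records. The record layout, field names and sample values are assumptions made for this example, not details from the patent.

```python
from typing import Any


def clean(records: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Data cleaning: drop records containing invalid (missing) values and remove duplicates."""
    seen = set()
    cleaned = []
    for record in records:
        if any(value is None for value in record.values()):
            continue  # invalid data
        key = tuple(sorted(record.items()))
        if key in seen:
            continue  # repeated data
        seen.add(key)
        cleaned.append(record)
    return cleaned


def transform(record: dict[str, Any]) -> dict[str, float]:
    """Data transformation: coerce every field to float so later stages see one format."""
    return {name: float(value) for name, value in record.items()}


raw = [
    {"temperature": 23.5, "light": 180},
    {"temperature": 23.5, "light": 180},  # duplicate reading
    {"temperature": None, "light": 90},   # invalid reading
]
processed = [transform(r) for r in clean(raw)]
print(processed)  # [{'temperature': 23.5, 'light': 180.0}]
```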
The feature extraction layer is used for extracting features of the data processed by the data processing layer so as to extract features included in the data. The extracted features can reflect the state of the electronic equipment itself or the state of the user or the environmental state of the environment where the electronic equipment is located, etc.
The feature extraction layer may extract features, or process the extracted features, by methods such as filtering, wrapping, and integration.
The filter method filters the extracted features to delete redundant feature data. The wrapper method screens the extracted features. The integration method combines multiple feature extraction methods to construct a more efficient and accurate feature extraction method.
The scene modeling layer is used for constructing a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic device, the state of the user, the state of the environment, and the like. For example, the scene modeling layer may construct a key-value model, a pattern recognition model, a graphical model, an entity relationship model, an object-oriented model, and the like from the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent service for users according to the model constructed by the scene modeling layer. For example, the intelligent service layer may provide basic application services for users, may perform system intelligent optimization for electronic devices, and may provide personalized intelligent services for users.
In addition, the panoramic sensing architecture may also comprise a plurality of algorithms, each of which can be used to analyze and process data, and these algorithms may form an algorithm library. For example, the algorithm library may include a Markov algorithm, a latent Dirichlet allocation algorithm, a Bayesian classification algorithm, a support vector machine, a K-means clustering algorithm, a K-nearest neighbor algorithm, a rapid automatic keyword extraction algorithm, a recurrent neural network, a long short-term memory network, a convolutional neural network, a recursive neural network, and the like.
The embodiment of the application provides a scene modeling method which can be applied to electronic equipment. The electronic device may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality ) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook computer, a desktop computing device, a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, an electronic article of clothing, or the like.
Referring to fig. 2, fig. 2 is a schematic flow chart of a scene modeling method according to an embodiment of the present application.
The scene modeling method comprises the following steps:
110, obtaining the perception data of the current scene.
The electronic device may obtain the perceived data of the current scene. The current scene is a scene of the environment where the electronic equipment is currently located, namely, a scene of the environment where the user of the electronic equipment is currently located. The perceptual data may comprise any data. For example, the sensory data may include a variety of data including ambient temperature, ambient light intensity, image data, audio data, text data displayed on an electronic device, and the like.
The electronic device can acquire the sensing data of the current scene through the information sensing layer in the panoramic sensing architecture. For example, the electronic device may detect an ambient temperature through a temperature sensor, an ambient light intensity through a light sensor, image data in the surrounding environment through a camera, audio data in the surrounding environment through a microphone, and text data displayed on the electronic device through a display control circuit.
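The sketch below illustrates, under stated assumptions, how perception data from these sources might be gathered into one structure. The helper is a placeholder: on a real device each field would come from the corresponding sensor, camera, microphone or display pipeline, and the sample values are invented.

```python
import time


def collect_perception_data() -> dict:
    """Gather perception data of the current scene from the information sensing layer.

    The values below are hard-coded placeholders; a real implementation would read the
    temperature sensor, light sensor, camera, microphone and display control circuit.
    """
    return {
        "timestamp": time.time(),
        "sensor": {"temperature": 23.5, "light_intensity": 180.0, "acceleration": (0.0, 0.0, 9.8)},
        "text": "quarterly meeting agenda shown on screen",
        "image": b"",   # raw image bytes from the camera would go here
        "audio": b"",   # raw audio samples from the microphone would go here
    }


perception_data = collect_perception_data()
print(sorted(perception_data.keys()))  # ['audio', 'image', 'sensor', 'text', 'timestamp']
```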
120, obtaining a plurality of features of the current scene according to the perception data.
After the electronic device acquires the perception data of the current scene, the electronic device can acquire a plurality of characteristics of the current scene according to the perception data. The plurality of features may be used to reflect the current scene situation. Wherein the plurality of features may include any physical object, virtual object, concept name, and the like. For example, the plurality of features may include people, animals, buildings, cell phones, games, novels, meetings, temperatures, ambient light intensities, and the like.
In some embodiments, after the electronic device obtains the plurality of features of the current scene, a feature vector of the current scene may be constructed through the plurality of features, and the current scene may be quantized through the feature vector, so as to represent the current scene through the feature vector.
130, performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature.
After the electronic device obtains the plurality of features of the current scene, the plurality of features can be subjected to value filling, namely, each feature is assigned, so that each feature has a corresponding feature value, and a plurality of feature key value pairs are obtained. Wherein each of the feature key value pairs includes a feature and a feature value corresponding to the feature.
Each feature can be used as a key in a non-relational database, and the feature value corresponding to the feature can be used as the value. A feature key value pair is thus a data pair of the form key-value.
When the electronic device performs value filling on each feature, the feature value filled for the feature may be a specific numerical value or a vector.
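A minimal sketch of such feature key value pairs follows; the feature names and values are invented for illustration and are not taken from the patent.

```python
# Features extracted from the current scene become keys; value filling assigns each one a value.
features = ["temperature", "weather", "meeting", "music"]

# Each feature key value pair holds a feature and its corresponding feature value,
# which may be a specific numerical value or a vector.
feature_key_values = {
    "temperature": 23.5,
    "weather": [0.8, 0.1, 0.1],  # vector-valued feature
    "meeting": 1.0,
    "music": 0.0,
}

missing = [f for f in features if f not in feature_key_values]
assert not missing, f"features still unfilled: {missing}"
```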
140, modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene.
After the electronic device obtains the plurality of feature key value pairs, it can model the feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene. The scene model may be used to represent the current scene.
The electronic device may then provide personalized services to the user based on the scene model. For example, when the user starts the driving mode, the electronic device may query the user's state in the driving mode according to the scene model, for example determine whether the user is waiting at a traffic light, and make a corresponding decision based on the result.
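The patent does not spell out the structure of the probabilistic graphical model, so the sketch below is only an assumed, hand-built stand-in: a two-state model with conditionally independent binary features, queried for the user's state given observed feature values. The state names, feature names and probabilities are all invented for illustration.

```python
# Assumed priors P(state) and likelihoods P(feature | state) for a toy scene model.
p_state = {"driving": 0.7, "waiting_at_light": 0.3}
p_feature_given_state = {
    "driving":          {"speed_high": 0.80, "engine_on": 0.95},
    "waiting_at_light": {"speed_high": 0.05, "engine_on": 0.90},
}


def posterior(evidence: dict[str, bool]) -> dict[str, float]:
    """P(state | evidence), assuming the binary features are conditionally independent."""
    scores = {}
    for state, prior in p_state.items():
        p = prior
        for feature, observed in evidence.items():
            likelihood = p_feature_given_state[state][feature]
            p *= likelihood if observed else (1.0 - likelihood)
        scores[state] = p
    total = sum(scores.values())
    return {state: score / total for state, score in scores.items()}


# Query the user's state from feature key value pairs observed in driving mode.
print(posterior({"speed_high": False, "engine_on": True}))
```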
For example, in some embodiments, the electronic device may obtain, through an information sensing layer, sensing data of a current scene, and obtain, through a feature extraction layer, a plurality of features of the current scene according to the sensing data. It may be appreciated that, before the feature extraction layer extracts a plurality of features of the current scene according to the perceptual data, the perceptual data may also be processed by a data processing layer, for example, performing a data cleaning process, a data transformation process, and so on. And then, the feature extraction layer acquires a plurality of features of the current scene according to the perceived data processed by the data processing layer.
Then, the electronic device can perform value filling on the plurality of features through the feature extraction layer to obtain a plurality of feature key value pairs, and model the plurality of feature key value pairs through the scene modeling layer according to a probabilistic graphical model to obtain a scene model of the current scene.
In the embodiment of the application, the electronic device can perform value filling on the plurality of features of the current scene and train on the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can then query the user's state according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.
In some embodiments, referring to fig. 3, fig. 3 is a second flowchart of a scene modeling method according to an embodiment of the present application.
The step 130 of performing value filling on the plurality of features to obtain a plurality of feature key value pairs includes the following steps:
131, obtaining a feature value corresponding to each feature according to each feature and a preset mapping relation, wherein the preset mapping relation comprises a corresponding relation between a preset feature and a preset feature value;
and 132, performing value filling on the plurality of features through the feature values corresponding to each feature to obtain a plurality of feature key value pairs.
The preset mapping relation can be preset in the electronic device. The preset mapping relation comprises a corresponding relation between preset features and preset feature values.
When the electronic equipment performs value filling on the plurality of features, the feature value corresponding to each feature can be obtained according to each feature and the preset mapping relation. The electronic device may match each feature with the preset mapping relationship, so as to query a preset feature value corresponding to a preset feature identical to the feature in the preset mapping relationship.
And then, the electronic equipment performs value filling on the plurality of features through the feature values corresponding to each feature so as to obtain a plurality of feature key value pairs. For example, after the electronic device queries the feature value corresponding to each feature, the feature value corresponding to the feature may be filled into the value position corresponding to the feature.
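The sketch below shows one way the preset mapping lookup and value filling might be implemented; the mapping contents and the default-value behaviour are assumptions made for illustration only.

```python
# Assumed preset mapping between preset features and preset feature values.
preset_mapping = {
    "temperature": 0.0,
    "weather": [0.0, 0.0, 0.0],
    "meeting": 0.0,
}


def fill_values(features: list[str], default=None) -> dict:
    """Match each feature against the preset mapping and fill in its corresponding value."""
    pairs = {}
    for feature in features:
        if feature in preset_mapping:
            pairs[feature] = preset_mapping[feature]
        elif default is not None:
            pairs[feature] = default  # feature not covered by the preset mapping
    return pairs


print(fill_values(["temperature", "meeting", "unknown_feature"], default=0.0))
# {'temperature': 0.0, 'meeting': 0.0, 'unknown_feature': 0.0}
```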
In some embodiments, referring to fig. 4, fig. 4 is a third flowchart of a scene modeling method according to an embodiment of the present application.
Step 120, obtaining a plurality of features of the current scene according to the perception data, includes the following steps:
121, selecting a corresponding feature extraction model according to the data type of the perception data;
122 extracting a plurality of features of the current scene from the perceptual data by the feature extraction model.
A plurality of feature extraction models may be preset in the electronic device, each feature extraction model being used for feature extraction of one type of perception data. For example, a recurrent neural network model, a rapid automatic keyword extraction (Rapid Automatic Keyword Extraction, RAKE) algorithm model, a convolutional neural network model, a recursive neural network model, a long short-term memory network model, and the like may be preset in the electronic device.
The recurrent neural network model is used to process sensor data to extract features from the sensor data, for example data detected by sensors such as the temperature sensor and the acceleration sensor. The rapid automatic keyword extraction algorithm model is used to process text data to extract text features from the text data. The convolutional neural network model is used to process image data to extract image features from the image data. The recursive neural network model is used to process audio data to extract audio features from the audio data. The long short-term memory network model may also be used to process audio data to extract audio features from the audio data.
After the electronic device acquires the perception data of the current scene, a corresponding feature extraction model can be selected according to the data type of the perception data. When the sensory data includes a plurality of data types, the electronic device may select a corresponding feature extraction model according to each data type.
The electronic device then extracts a plurality of features of the current scene from the perceptual data via the selected feature extraction model.
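The dispatch sketch below illustrates the idea of selecting an extraction model per data type; the extractor bodies are placeholders standing in for the RNN, RAKE, CNN and LSTM models named above, and the returned feature names are invented.

```python
from typing import Any, Callable


# Placeholder extractors; each stands in for the model the patent assigns to that data type.
def extract_sensor_features(data: Any) -> list[str]: return ["temperature", "speed"]
def extract_text_features(data: Any) -> list[str]:   return ["meeting"]
def extract_image_features(data: Any) -> list[str]:  return ["building", "person"]
def extract_audio_features(data: Any) -> list[str]:  return ["music"]


extractors: dict[str, Callable[[Any], list[str]]] = {
    "sensor": extract_sensor_features,
    "text":   extract_text_features,
    "image":  extract_image_features,
    "audio":  extract_audio_features,
}


def extract_features(perception_data: dict) -> list[str]:
    """Select the feature extraction model matching each data type, then run it."""
    features: list[str] = []
    for data_type, data in perception_data.items():
        if data_type in extractors:
            features.extend(extractors[data_type](data))
    return features


print(extract_features({"sensor": None, "text": None, "image": None, "audio": None}))
# ['temperature', 'speed', 'meeting', 'building', 'person', 'music']
```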
In some embodiments, referring to fig. 5, fig. 5 is a fourth flowchart of a scene modeling method according to an embodiment of the present application.
Wherein, step 122, extracting a plurality of features of the current scene from the perception data through the feature extraction model includes the following steps:
1221 extracting a plurality of first features from the sensor data through a recurrent neural network model;
1222, clustering the plurality of first features through a K-nearest neighbor algorithm to obtain a plurality of feature classes, wherein each feature class comprises a plurality of first features;
1223 extracting a second feature from each of the plurality of feature classes to obtain a plurality of second features.
Wherein the sensory data acquired by the electronic device includes sensor data. The sensor data is the data acquired by each sensor of the electronic equipment. The electronic device may extract a plurality of first features from the sensor data through the recurrent neural network model. The first feature may include, for example, a "temperature", "humidity", "intensity", "speed", "distance", "angle", "length", "direction", etc. feature.
And then, the electronic equipment clusters the plurality of first features through a K nearest neighbor algorithm to obtain a plurality of feature classes. Wherein each of the feature classes includes a plurality of first features. For example, a certain feature class may include features such as "temperature", "humidity", and another feature class may include features such as "angle", "length", "direction", and the like.
After the electronic device obtains the plurality of feature classes, one second feature may be extracted from each feature class in the plurality of feature classes to obtain a plurality of second features. Wherein the second feature may be a synthesis of a plurality of first features in the feature class. For example, the second feature extracted from the feature class including "temperature", "humidity" may be "weather".
The electronic device may then determine the resulting plurality of second features as features of the current scene.
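As an assumed illustration of grouping first features into feature classes and deriving one second feature per class, the sketch below approximates the K-nearest-neighbour step with a simple nearest-seed (1-nearest-neighbour) assignment over toy feature embeddings; all names and vectors are invented.

```python
import math

# Toy embeddings for first features extracted from the sensor data.
first_features = {
    "temperature": (1.0, 0.1),
    "humidity":    (0.9, 0.2),
    "angle":       (0.1, 1.0),
    "direction":   (0.2, 0.9),
}

# One seed per expected feature class; its name doubles as the synthesized second feature.
seeds = {"weather": (1.0, 0.0), "orientation": (0.0, 1.0)}


def nearest(vec, seed_points):
    """Assign a feature embedding to its closest seed (a 1-nearest-neighbour step)."""
    return min(seed_points, key=lambda name: math.dist(vec, seed_points[name]))


feature_classes: dict[str, list[str]] = {name: [] for name in seeds}
for feature, vec in first_features.items():
    feature_classes[nearest(vec, seeds)].append(feature)

# Each non-empty feature class yields one second feature.
second_features = [name for name, members in feature_classes.items() if members]
print(feature_classes)   # {'weather': ['temperature', 'humidity'], 'orientation': ['angle', 'direction']}
print(second_features)   # ['weather', 'orientation']
```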
In some embodiments, referring to fig. 6, fig. 6 is a fifth flowchart of a scene modeling method according to an embodiment of the present application.
Wherein, step 122, extracting a plurality of features of the current scene from the perception data through the feature extraction model, further comprises the following steps:
1224 extracting third features from the text data by a fast automatic keyword extraction algorithm model;
1225 extracting fourth features from the image data by a convolutional neural network model;
1226 extracting fifth features from the audio data by a recursive neural network model or a long short-term memory network model.
The sensing data acquired by the electronic device further comprises text data, image data and audio data. The feature extraction model selected by the electronic device for the text data may be a fast automatic keyword extraction algorithm model, the feature extraction model selected for the image data may be a convolutional neural network model, and the feature extraction model selected for the audio data may be a recursive neural network model or a long short-term memory network model.
Then, the electronic device extracts a third feature from the text data through the rapid automatic keyword extraction algorithm model, extracts a fourth feature from the image data through the convolutional neural network model, and extracts a fifth feature from the audio data through the recursive neural network model or the long short-term memory network model.
Wherein the third, fourth, and fifth features may each comprise one or more features. The third feature may include, for example, a "novel," "meeting," "business trip," or the like feature. The fourth feature may include, for example, a "landscape", "building", "person", "exposure", "pixel", etc. feature. The fifth feature may include, for example, a "music," "singer," "album," or the like feature.
The electronic device may then take the extracted third, fourth, and fifth features as features of the current scene as well.
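As an illustration of the keyword-extraction idea used for the text data, the sketch below implements a minimal RAKE-style scorer (candidate phrases split on stopwords, words scored by degree over frequency). It is a simplified stand-in rather than the model described in the patent, and the stopword list and sample sentence are assumptions.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "on", "in", "is", "at"}


def rake_keywords(text: str, top_k: int = 3) -> list[str]:
    """Minimal RAKE-style extraction: split candidate phrases on stopwords, score words
    by degree / frequency, and rank phrases by the sum of their word scores."""
    words = re.findall(r"[a-z]+", text.lower())
    phrases, current = [], []
    for word in words:
        if word in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(word)
    if current:
        phrases.append(current)

    freq: dict[str, int] = {}
    degree: dict[str, int] = {}
    for phrase in phrases:
        for word in phrase:
            freq[word] = freq.get(word, 0) + 1
            degree[word] = degree.get(word, 0) + len(phrase)

    ranked = sorted(phrases, key=lambda p: sum(degree[w] / freq[w] for w in p), reverse=True)
    return [" ".join(p) for p in ranked[:top_k]]


print(rake_keywords("Reminder: the quarterly meeting and business trip schedule is on the screen"))
# e.g. ['business trip schedule', 'quarterly meeting', 'reminder']
```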
In some embodiments, referring to fig. 7, fig. 7 is a sixth flowchart of a scene modeling method according to an embodiment of the present application.
Wherein, step 122, extracting a plurality of features of the current scene from the perception data through the feature extraction model, further comprises the following steps:
1227, aggregating the plurality of second features, the third feature, the fourth feature, and the fifth feature to obtain a feature vector of the current scene;
1228 determining a plurality of features in the feature vector as a plurality of features of the current scene.
After the electronic device acquires a plurality of second features from the sensor data, acquires a third feature from the text data, acquires a fourth feature from the image data and acquires a fifth feature from the audio data, the plurality of second features, the third feature, the fourth feature and the fifth feature can be aggregated to obtain feature vectors of the current scene.
For example, the plurality of second features includes feature A, B, the third feature includes feature C, D, the fourth feature includes feature E, F, G, and the fifth feature includes feature H, I, and the feature vector obtained after the aggregation may be P (a, B, C, D, E, F, G, H, I). The feature vector P is the feature vector of the current scene.
The electronic device then determines a plurality of features in the feature vector as a plurality of features of the current scene.
Because features extracted from different data types may overlap, the feature vector obtained after aggregation carries a certain redundancy, which makes it convenient to obtain multidimensional information.
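The aggregation step can be pictured as a simple concatenation, as in the sketch below, which reuses the A..I labels from the example above; overlapping features are deliberately kept so the vector stays redundant.

```python
# Feature groups from the worked example above.
second_features = ["A", "B"]
third_features  = ["C", "D"]
fourth_features = ["E", "F", "G"]
fifth_features  = ["H", "I"]

# Aggregation by concatenation; overlapping features are not removed.
feature_vector = second_features + third_features + fourth_features + fifth_features
print("P" + str(tuple(feature_vector)))  # P('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I')

# The elements of the feature vector are then taken as the features of the current scene.
scene_features = list(feature_vector)
```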
In some embodiments, referring to fig. 8, fig. 8 is a seventh flowchart of a scene modeling method according to an embodiment of the present application.
Step 110, before obtaining the perceived data of the current scene, further includes the following steps:
151, setting a corresponding preset feature value for each preset feature in the plurality of preset features;
152, establishing a preset mapping relationship between the preset features and the preset feature values according to the preset features and the preset feature values corresponding to each preset feature.
The preset mapping relation between the preset features and the preset feature values can be preset in the electronic equipment. Wherein, a corresponding preset characteristic value can be set for each preset characteristic in a plurality of preset characteristics in a manual setting mode. And then, establishing a preset mapping relation between the preset features and preset feature values according to the preset features and the preset feature values corresponding to each preset feature.
For example, a plurality of preset features may be determined by an expert in the art, and then a preset feature value is set for each preset feature. And then, storing the preset features, the preset feature values and the corresponding relation between each preset feature and the preset feature value in a database form, so as to establish the preset mapping relation between the preset features and the preset feature values. The preset characteristic value may be a specific numerical value or a vector.
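One plausible way to store such a preset mapping in database form is sketched below using an in-memory SQLite table with JSON-encoded values; the feature names, values and schema are assumptions made for illustration.

```python
import json
import sqlite3

# Assumed preset features and their expert-chosen preset feature values (scalars or vectors).
presets = {
    "temperature": 0.0,
    "weather": [0.0, 0.0, 0.0],
    "meeting": 0.0,
}

# Store the mapping in database form: one row per (preset feature, preset feature value).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE preset_mapping (feature TEXT PRIMARY KEY, value TEXT)")
conn.executemany(
    "INSERT INTO preset_mapping VALUES (?, ?)",
    [(name, json.dumps(value)) for name, value in presets.items()],
)
conn.commit()

# During value filling, a feature is looked up and its preset value decoded.
row = conn.execute("SELECT value FROM preset_mapping WHERE feature = ?", ("weather",)).fetchone()
print(json.loads(row[0]))  # [0.0, 0.0, 0.0]
```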
It is to be understood that in embodiments of the application, terms such as "first," "second," and the like are used merely to distinguish between similar objects and not necessarily to describe a particular order or sequence, such that the described objects may be interchanged where appropriate. Furthermore, the term "plurality" means two or more, i.e. at least two.
In particular, the application is not limited by the order of execution of the steps described, as some of the steps may be performed in other orders or concurrently without conflict.
As can be seen from the above, the scene modeling method provided by the embodiment of the present application includes: obtaining perception data of a current scene; acquiring a plurality of features of the current scene according to the perception data; performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature; and modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene. In this scene modeling method, the electronic device can perform value filling on the plurality of features of the current scene and train on the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can then query the user's state according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.
The embodiment of the application also provides a scene modeling device which can be integrated in the electronic equipment. The electronic device may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality ) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook computer, a desktop computing device, a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, an electronic article of clothing, or the like.
Referring to fig. 9, fig. 9 is a schematic diagram of a first structure of a scene modeling apparatus according to an embodiment of the present application.
Wherein the scene modeling apparatus 200 includes: a first acquisition module 201, a second acquisition module 202, a value filling module 203, and a modeling module 204.
A first obtaining module 201, configured to obtain perceptual data of a current scene.
The first acquisition module 201 may acquire the perceived data of the current scene. The current scene is a scene of the environment where the electronic equipment is currently located, namely, a scene of the environment where the user of the electronic equipment is currently located. The perceptual data may comprise any data. For example, the sensory data may include a variety of data including ambient temperature, ambient light intensity, image data, audio data, text data displayed on an electronic device, and the like.
The first acquisition module 201 may acquire the sensing data of the current scene through an information sensing layer in a panoramic sensing architecture of the electronic device. For example, ambient temperature may be detected by a temperature sensor, ambient light intensity may be detected by a light sensor, image data in the surrounding environment may be acquired by a camera, audio data in the surrounding environment may be acquired by a microphone, and text data displayed on an electronic device may be acquired by a display control circuit.
A second obtaining module 202, configured to obtain a plurality of features of the current scene according to the perceptual data.
After the first obtaining module 201 obtains the perceived data of the current scene, the second obtaining module 202 may obtain a plurality of features of the current scene according to the perceived data. The plurality of features may be used to reflect the current scene situation. Wherein the plurality of features may include any physical object, virtual object, concept name, and the like. For example, the plurality of features may include people, animals, buildings, cell phones, games, novels, meetings, temperatures, ambient light intensities, and the like.
In some embodiments, after the second obtaining module 202 obtains the plurality of features of the current scene, a feature vector of the current scene may be constructed through the plurality of features, and the current scene may be quantized through the feature vector to represent the current scene through the feature vector.
And the value filling module 203 is configured to perform value filling on the plurality of features to obtain a plurality of feature key value pairs, where each feature key value pair includes a feature and a feature value corresponding to the feature.
After the second obtaining module 202 obtains the plurality of features of the current scene, the value filling module 203 may perform value filling on the plurality of features, that is, assign a value to each feature, so that each feature has a corresponding feature value, so as to obtain a plurality of feature key value pairs. Wherein each of the feature key value pairs includes a feature and a feature value corresponding to the feature.
Each feature can be used as a key in a non-relational database, and the feature value corresponding to the feature can be used as the value. A feature key value pair is thus a data pair of the form key-value.
Note that, when the value filling module 203 performs value filling on each feature, the feature value filled for the feature may be a specific numerical value or may be a vector.
The modeling module 204 is configured to model the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene.
After the value filling module 203 obtains the plurality of feature key value pairs, the modeling module 204 may model the feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene. The scene model may be used to represent the current scene.
The electronic device may then provide personalized services to the user based on the scene model. For example, when the user starts the driving mode, the electronic device may query the state of the user in the driving mode according to the scene model, for example, determine whether the user waits for a traffic light, and make a corresponding decision according to the determined result.
In the embodiment of the application, the electronic device can perform value filling on the plurality of features of the current scene and train on the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can then query the user's state according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.
In some embodiments, the value filling module 203 is configured to perform the steps of:
acquiring a feature value corresponding to each feature according to each feature and a preset mapping relation, wherein the preset mapping relation comprises a corresponding relation between a preset feature and a preset feature value;
And filling the values of the plurality of features by the feature values corresponding to each feature to obtain a plurality of feature key value pairs.
The preset mapping relation can be preset in the electronic device. The preset mapping relation comprises a corresponding relation between preset features and preset feature values.
When the value filling module 203 performs value filling on the plurality of features, a feature value corresponding to each feature may be obtained according to each feature and the preset mapping relationship. The value filling module 203 may match each feature with the preset mapping relationship, so as to query a preset feature value corresponding to a preset feature identical to the feature in the preset mapping relationship.
Then, the value filling module 203 performs value filling on the plurality of features through the feature values corresponding to each feature, so as to obtain a plurality of feature key value pairs. For example, after the value filling module 203 queries the feature value corresponding to each feature, the feature value corresponding to the feature may be filled into the value position corresponding to the feature.
In some embodiments, the second acquisition module 202 is configured to perform the steps of:
selecting a corresponding feature extraction model according to the data type of the perception data;
And extracting a plurality of features of the current scene from the perception data through the feature extraction model.
A plurality of feature extraction models may be preset in the electronic device, each feature extraction model being used for feature extraction of one type of perception data. For example, a recurrent neural network model, a rapid automatic keyword extraction (Rapid Automatic Keyword Extraction, RAKE) algorithm model, a convolutional neural network model, a recursive neural network model, a long short-term memory network model, and the like may be preset in the electronic device.
The recurrent neural network model is used to process sensor data to extract features from the sensor data, for example data detected by sensors such as the temperature sensor and the acceleration sensor. The rapid automatic keyword extraction algorithm model is used to process text data to extract text features from the text data. The convolutional neural network model is used to process image data to extract image features from the image data. The recursive neural network model is used to process audio data to extract audio features from the audio data. The long short-term memory network model may also be used to process audio data to extract audio features from the audio data.
After the first obtaining module 201 obtains the perceived data of the current scene, the second obtaining module 202 may select a corresponding feature extraction model according to the data type of the perceived data. When the perceptual data comprises a plurality of data types, the second acquisition module 202 may select a corresponding feature extraction model based on each data type.
Subsequently, the second acquisition module 202 extracts a plurality of features of the current scene from the perceptual data by the selected feature extraction model.
In some embodiments, when extracting the plurality of features of the current scene from the perceptual data by the feature extraction model, the second acquisition module 202 is configured to perform the steps of:
extracting a plurality of first features from the sensor data by a recurrent neural network model;
clustering the plurality of first features through a K nearest neighbor algorithm to obtain a plurality of feature classes, wherein each feature class comprises a plurality of first features;
extracting a second feature from each of the plurality of feature classes to obtain a plurality of second features.
Wherein the sensing data acquired by the first acquisition module 201 includes sensor data. The sensor data is the data acquired by each sensor of the electronic equipment. The second acquisition module 202 may extract a plurality of first features from the sensor data through a recurrent neural network model. The first feature may include, for example, a "temperature", "humidity", "intensity", "speed", "distance", "angle", "length", "direction", etc. feature.
Subsequently, the second obtaining module 202 clusters the plurality of first features through a K-nearest neighbor algorithm, to obtain a plurality of feature classes. Wherein each of the feature classes includes a plurality of first features. For example, a certain feature class may include features such as "temperature", "humidity", and another feature class may include features such as "angle", "length", "direction", and the like.
After obtaining the plurality of feature classes, the second obtaining module 202 may extract a second feature from each feature class of the plurality of feature classes to obtain a plurality of second features. Wherein the second feature may be a synthesis of a plurality of first features in the feature class. For example, the second feature extracted from the feature class including "temperature", "humidity" may be "weather".
The second acquisition module 202 may then determine the resulting plurality of second features as features of the current scene.
In some embodiments, when extracting the plurality of features of the current scene from the perceptual data by the feature extraction model, the second acquisition module 202 is further configured to perform the steps of:
extracting a third feature from the text data by a fast automatic keyword extraction algorithm model;
Extracting a fourth feature from the image data by a convolutional neural network model;
and extracting a fifth feature from the audio data through a recursive neural network model or a long short-term memory network model.
The sensing data acquired by the first acquisition module 201 further includes text data, image data, and audio data. The feature extraction model selected by the second acquisition module 202 for text data may be a fast automatic keyword extraction algorithm model, the feature extraction model selected for image data may be a convolutional neural network model, and the feature extraction model selected for audio data may be a recursive neural network model or a long short-term memory network model.
Subsequently, the second acquisition module 202 extracts a third feature from the text data through the fast automatic keyword extraction algorithm model, a fourth feature from the image data through the convolutional neural network model, and a fifth feature from the audio data through the recursive neural network model or the long short-term memory network model.
Wherein the third, fourth, and fifth features may each comprise one or more features. The third feature may include, for example, a "novel," "meeting," "business trip," or the like feature. The fourth feature may include, for example, a "landscape", "building", "person", "exposure", "pixel", etc. feature. The fifth feature may include, for example, a "music," "singer," "album," or the like feature.
Subsequently, the second obtaining module 202 may also use the extracted third feature, fourth feature, and fifth feature as features of the current scene.
In some embodiments, when extracting the plurality of features of the current scene from the perceptual data by the feature extraction model, the second acquisition module 202 is further configured to perform the steps of:
aggregating the plurality of second features, the third features, the fourth features and the fifth features to obtain feature vectors of the current scene;
a plurality of features in the feature vector are determined as a plurality of features of the current scene.
The second obtaining module 202 obtains a plurality of second features from the sensor data, obtains a third feature from the text data, obtains a fourth feature from the image data, and obtains a fifth feature from the audio data, and then aggregates the plurality of second features, the third feature, the fourth feature, and the fifth feature to obtain feature vectors of the current scene.
For example, the plurality of second features includes feature A, B, the third feature includes feature C, D, the fourth feature includes feature E, F, G, and the fifth feature includes feature H, I, and the feature vector obtained after the aggregation may be P (a, B, C, D, E, F, G, H, I). The feature vector P is the feature vector of the current scene.
Subsequently, the second acquisition module 202 determines a plurality of features in the feature vector as a plurality of features of the current scene.
Because features extracted from different data types may overlap, the feature vector obtained after aggregation carries a certain redundancy, which makes it convenient to obtain multidimensional information.
In some embodiments, referring to fig. 10, fig. 10 is a schematic diagram of a second structure of a scene modeling apparatus according to an embodiment of the present application.
Wherein, the scene modeling apparatus 200 further comprises: the relationship establishment module 205. The relationship establishment module 205 is configured to:
setting a corresponding preset feature value for each preset feature in the plurality of preset features;
and establishing a preset mapping relation between the preset features and preset feature values according to the preset features and the preset feature values corresponding to each preset feature.
The relationship establishing module 205 may set a preset mapping relationship between preset features and preset feature values in the electronic device in advance. Wherein, a corresponding preset characteristic value can be set for each preset characteristic in a plurality of preset characteristics in a manual setting mode. And then, establishing a preset mapping relation between the preset features and preset feature values according to the preset features and the preset feature values corresponding to each preset feature.
For example, a plurality of preset features may be determined by an expert in the art, and then a preset feature value is set for each preset feature. Then, the relationship establishing module 205 stores the plurality of preset features, the plurality of preset feature values, and the corresponding relationship between each preset feature and the preset feature value in a database, so as to establish a preset mapping relationship between the preset feature and the preset feature value. The preset characteristic value may be a specific numerical value or a vector.
In specific implementation, each module may be implemented as a separate entity, or may be combined arbitrarily and implemented as the same entity or several entities.
As can be seen from the above, the scene modeling apparatus 200 provided in the embodiment of the present application includes: a first obtaining module 201, configured to obtain perception data of a current scene; a second obtaining module 202, configured to obtain a plurality of features of the current scene according to the perception data; a value filling module 203, configured to perform value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature; and a modeling module 204, configured to model the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene. The scene modeling apparatus can perform value filling on the plurality of features of the current scene and train on the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can then query the user's state according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.
The embodiment of the application also provides electronic equipment. The electronic device may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality ) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook computer, a desktop computing device, a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, an electronic article of clothing, or the like.
Referring to fig. 11, fig. 11 is a schematic diagram of a first structure of an electronic device according to an embodiment of the present application.
Wherein the electronic device 300 comprises a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is the control center of the electronic device 300. It connects the various parts of the electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or calling the computer programs stored in the memory 302 and calling the data stored in the memory 302, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to the processes of one or more computer programs into the memory 302, and executes the computer programs stored in the memory 302, so as to implement the following functions:
Obtaining perception data of a current scene;
acquiring a plurality of characteristics of the current scene according to the perception data;
performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature;
and modeling the plurality of feature key value pairs according to a probabilistic graphical model to obtain a scene model of the current scene.
In some embodiments, the processor 301 performs the following steps for value filling the plurality of features to obtain a plurality of feature key value pairs:
acquiring a feature value corresponding to each feature according to each feature and a preset mapping relation, wherein the preset mapping relation comprises a corresponding relation between a preset feature and a preset feature value;
and filling the values of the plurality of features by the feature values corresponding to each feature to obtain a plurality of feature key value pairs.
In some embodiments, when acquiring the plurality of features of the current scene from the perceptual data, the processor 301 performs the steps of:
selecting a corresponding feature extraction model according to the data type of the perception data;
and extracting a plurality of features of the current scene from the perception data through the feature extraction model.
In some embodiments, the perception data comprises sensor data, and when extracting the plurality of features of the current scene from the perception data through the feature extraction model, the processor 301 performs the following steps:
extracting a plurality of first features from the sensor data by a recurrent neural network model;
clustering the plurality of first features through a K nearest neighbor algorithm to obtain a plurality of feature classes, wherein each feature class comprises a plurality of first features;
extracting a second feature from each of the plurality of feature classes to obtain a plurality of second features.
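A sketch of the sensor branch under stated assumptions: PyTorch's GRU stands in for the recurrent neural network model, scikit-learn's k-means stands in for the K nearest neighbor clustering named above, and the centroid of each feature class is taken as its second feature; none of these specific choices is prescribed by the embodiment.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def extract_sensor_features(sensor_seq, hidden_size=16, n_classes=4):
    """sensor_seq: float tensor of shape (seq_len, n_channels), e.g. accelerometer
    readings. Returns one second feature per feature class."""
    # First features: hidden states of an (untrained, illustrative) recurrent network.
    rnn = nn.GRU(input_size=sensor_seq.shape[1], hidden_size=hidden_size,
                 batch_first=True)
    with torch.no_grad():
        first_features, _ = rnn(sensor_seq.unsqueeze(0))      # (1, seq_len, hidden)
    first_features = first_features.squeeze(0).numpy()

    # Feature classes: k-means is used here as a stand-in for the
    # K nearest neighbor clustering named in the text.
    kmeans = KMeans(n_clusters=n_classes, n_init=10).fit(first_features)

    # Second features: here, the centroid of each feature class (an assumption).
    return list(kmeans.cluster_centers_)
```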
In some embodiments, the perception data further comprises text data, image data, and audio data, and when extracting the plurality of features of the current scene from the perception data through the feature extraction model, the processor 301 further performs the following steps:
extracting a third feature from the text data through a rapid automatic keyword extraction (RAKE) algorithm model;
extracting a fourth feature from the image data by a convolutional neural network model;
and extracting a fifth feature from the audio data through a recurrent neural network model or a long short-term memory network model.
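A sketch of the text, image, and audio branches under stated assumptions: the rake_nltk package stands in for the rapid automatic keyword extraction algorithm model, a torchvision ResNet-18 backbone stands in for the convolutional neural network model, and a PyTorch LSTM stands in for the long short-term memory network model; the embodiment does not prescribe these particular libraries or architectures.

```python
import torch
import torch.nn as nn
from rake_nltk import Rake               # requires the nltk stopwords/punkt data
from torchvision import models

def extract_text_features(text, top_k=5):
    """Third feature(s): top-ranked keywords of the text data (RAKE)."""
    rake = Rake()
    rake.extract_keywords_from_text(text)
    return rake.get_ranked_phrases()[:top_k]

def extract_image_features(image_tensor):
    """Fourth feature: embedding of the image data from a convolutional network.
    image_tensor: float tensor of shape (3, H, W), already normalized."""
    cnn = models.resnet18(weights=None)  # illustrative backbone, untrained here
    cnn.fc = nn.Identity()               # drop the classifier, keep the embedding
    cnn.eval()
    with torch.no_grad():
        return cnn(image_tensor.unsqueeze(0)).squeeze(0)

def extract_audio_features(audio_frames, hidden_size=32):
    """Fifth feature: final hidden state of an LSTM run over audio frames.
    audio_frames: float tensor of shape (n_frames, n_mels), e.g. a mel spectrogram."""
    lstm = nn.LSTM(input_size=audio_frames.shape[1], hidden_size=hidden_size,
                   batch_first=True)
    with torch.no_grad():
        _, (h_n, _) = lstm(audio_frames.unsqueeze(0))
    return h_n.squeeze()
```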
In some embodiments, when extracting the plurality of features of the current scene from the perception data through the feature extraction model, the processor 301 further performs the following steps:
aggregating the plurality of second features, the third feature, the fourth feature, and the fifth feature to obtain a feature vector of the current scene;
and determining a plurality of features in the feature vector as the plurality of features of the current scene.
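A minimal sketch of the aggregation step, assuming every per-modality feature has already been converted to a numeric array (for example, text keywords mapped to preset feature values or embeddings); the function and parameter names are illustrative only.

```python
import numpy as np

def aggregate_features(second_feats, third_feats, fourth_feat, fifth_feat):
    """Concatenate the per-modality features into one feature vector of the
    current scene; each element of the vector is then treated as a feature.
    All inputs are assumed to already be numeric arrays."""
    parts = [np.asarray(feat, dtype=np.float32).ravel()
             for feat in [*second_feats, *third_feats, fourth_feat, fifth_feat]]
    return np.concatenate(parts)
```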
In some embodiments, before acquiring the perception data of the current scene, the processor 301 further performs the following steps:
setting a corresponding preset feature value for each preset feature in a plurality of preset features;
and establishing a preset mapping relation between the preset features and preset feature values according to the preset features and the preset feature values corresponding to each preset feature.
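A minimal sketch of establishing the preset mapping relation, assuming the preset features are known in advance and the preset feature values are simply sequential integers; both assumptions are made for illustration only, and the result is the same kind of dictionary assumed in the earlier value filling sketch.

```python
# Hypothetical preset features; in practice these come from the scene definition.
PRESET_FEATURES = ["location_home", "location_office", "time_morning",
                   "time_evening", "motion_still", "motion_walking"]

def build_preset_mapping(preset_features):
    """Set a corresponding preset feature value for each preset feature and
    establish the preset mapping relation between the two (here, sequential
    integers are used as the preset feature values)."""
    return {feature: index + 1 for index, feature in enumerate(preset_features)}

PRESET_MAPPING = build_preset_mapping(PRESET_FEATURES)
# e.g. PRESET_MAPPING["time_evening"] -> 4
```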
The memory 302 may be used to store computer programs and data. The memory 302 stores computer programs including instructions executable by the processor. The computer programs may constitute various functional modules. The processor 301 executes various functional applications and data processing by calling the computer programs stored in the memory 302.
In some embodiments, referring to fig. 12, fig. 12 is a schematic diagram of a second structure of an electronic device according to an embodiment of the present application.
Wherein the electronic device 300 further comprises: a display 303, a control circuit 304, an input unit 305, a sensor 306, and a power supply 307. The processor 301 is electrically connected to the display 303, the control circuit 304, the input unit 305, the sensor 306, and the power supply 307.
The display 303 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 304 is electrically connected to the display 303, and is used for controlling the display 303 to display information.
The input unit 305 may be used to receive input numbers, character information or user characteristic information (e.g., a fingerprint), and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. The input unit 305 may include a fingerprint recognition module.
The sensor 306 is used to collect information of the electronic device itself or information of a user or external environment information. For example, the sensor 306 may include a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, a gesture sensor, a barometer, a heart rate sensor, and the like.
The power supply 307 is used to supply power to the various components of the electronic device 300. In some embodiments, the power supply 307 may be logically connected to the processor 301 through a power management system, so that functions such as charging, discharging, and power consumption management are performed through the power management system.
Although not shown in fig. 12, the electronic device 300 may further include a camera, a bluetooth module, etc., which will not be described herein.
As can be seen from the above, the embodiment of the present application provides an electronic device that performs the following steps: obtaining perception data of a current scene; acquiring a plurality of features of the current scene according to the perception data; performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature; and modeling the plurality of feature key value pairs according to the probability map model to obtain a scene model of the current scene. The electronic device can perform value filling on the plurality of features of the current scene and train the resulting feature key value pairs to obtain a scene model of the current scene. The electronic device can therefore query the state of the user according to the obtained scene model, which facilitates intelligent operation and improves the degree of intelligence of the electronic device.
The embodiment of the application also provides a storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer executes the scene modeling method according to any one of the embodiments.
It should be noted that, as those skilled in the art will appreciate, all or part of the steps in the various methods of the above embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and the storage medium may include, but is not limited to: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The scene modeling method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. The principles and implementations of the present application are explained herein with reference to specific examples, and the description of the above embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (7)

1. A method of modeling a scene, comprising:
obtaining perception data of a current scene;
selecting a corresponding feature extraction model according to the data type of the perception data; extracting, by the feature extraction model, a plurality of features of the current scene from the perception data, comprising: when the perception data comprises sensor data, extracting a plurality of first features from the sensor data by a recurrent neural network model; clustering the plurality of first features through a K nearest neighbor algorithm to obtain a plurality of feature classes, wherein each feature class comprises a plurality of first features; extracting a second feature from each of the plurality of feature classes to obtain a plurality of second features; when the perception data further comprises text data, image data, and audio data, extracting a third feature from the text data through a rapid automatic keyword extraction algorithm model; extracting a fourth feature from the image data by a convolutional neural network model; and extracting a fifth feature from the audio data through a recurrent neural network model or a long short-term memory network model;
performing value filling on the plurality of features to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature;
and modeling the plurality of feature key value pairs according to the probability map model to obtain a scene model of the current scene.
2. The scene modeling method of claim 1, wherein said value filling the plurality of features to obtain a plurality of feature key value pairs comprises:
acquiring a feature value corresponding to each feature according to each feature and a preset mapping relation, wherein the preset mapping relation comprises a corresponding relation between a preset feature and a preset feature value;
and filling the values of the plurality of features by the feature values corresponding to each feature to obtain a plurality of feature key value pairs.
3. The scene modeling method of claim 1, wherein the extracting the plurality of features of the current scene from the perception data by the feature extraction model further comprises:
aggregating the plurality of second features, the third feature, the fourth feature, and the fifth feature to obtain a feature vector of the current scene;
and determining a plurality of features in the feature vector as the plurality of features of the current scene.
4. The scene modeling method of claim 1, wherein, before the obtaining the perception data of the current scene, the method further comprises:
setting a corresponding preset feature value for each preset feature in a plurality of preset features;
and establishing a preset mapping relation between the preset features and preset feature values according to the preset features and the preset feature values corresponding to each preset feature.
5. A scene modeling apparatus, comprising:
the first acquisition module is used for acquiring the perception data of the current scene;
the second acquisition module is used for selecting a corresponding feature extraction model according to the data type of the perception data; extracting, by the feature extraction model, a plurality of features of the current scene from the perception data, comprising: when the perception data comprises sensor data, extracting a plurality of first features from the sensor data by a recurrent neural network model; clustering the plurality of first features through a K nearest neighbor algorithm to obtain a plurality of feature classes, wherein each feature class comprises a plurality of first features; extracting a second feature from each of the plurality of feature classes to obtain a plurality of second features; when the perception data further comprises text data, image data, and audio data, extracting a third feature from the text data through a rapid automatic keyword extraction algorithm model; extracting a fourth feature from the image data by a convolutional neural network model; and extracting a fifth feature from the audio data through a recurrent neural network model or a long short-term memory network model;
The value filling module is used for filling the plurality of features with values to obtain a plurality of feature key value pairs, wherein each feature key value pair comprises a feature and a feature value corresponding to the feature;
and the modeling module is used for modeling the plurality of characteristic key value pairs according to the probability map model so as to obtain a scene model of the current scene.
6. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform the scene modeling method of any of claims 1 to 4.
7. An electronic device comprising a processor and a memory, the memory having stored therein a computer program, the processor being operable to perform the scene modeling method of any of claims 1-4 by invoking the computer program stored in the memory.
CN201910282458.3A 2019-04-09 2019-04-09 Scene modeling method and device, storage medium and electronic equipment Active CN111797875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282458.3A CN111797875B (en) 2019-04-09 2019-04-09 Scene modeling method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111797875A CN111797875A (en) 2020-10-20
CN111797875B true CN111797875B (en) 2023-12-01

Family

ID=72805307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282458.3A Active CN111797875B (en) 2019-04-09 2019-04-09 Scene modeling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111797875B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644624B2 (en) * 2009-07-28 2014-02-04 Samsung Electronics Co., Ltd. System and method for indoor-outdoor scene classification
TW201227606A (en) * 2010-12-30 2012-07-01 Hon Hai Prec Ind Co Ltd Electronic device and method for designing a specified scene using the electronic device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102353379A (en) * 2011-07-06 2012-02-15 上海海事大学 Environment modeling method applicable to navigation of automatic piloting vehicles
CN103092875A (en) * 2011-11-04 2013-05-08 中国移动通信集团贵州有限公司 Searching method and searching device based on text
CN106407379A (en) * 2016-09-13 2017-02-15 天津大学 Hadoop platform based movie recommendation method
JP2018120362A (en) * 2017-01-24 2018-08-02 日本放送協会 Scene variation point model learning device, scene variation point detection device and programs thereof
CN109426832A (en) * 2017-08-30 2019-03-05 湖南拓视觉信息技术有限公司 Closed loop detection method, storage medium and electronic equipment in scene three-dimensional modeling
CN108764304A (en) * 2018-05-11 2018-11-06 Oppo广东移动通信有限公司 scene recognition method, device, storage medium and electronic equipment
CN108875596A (en) * 2018-05-30 2018-11-23 西南交通大学 A kind of railway scene image, semantic dividing method based on DSSNN neural network
CN109325434A (en) * 2018-09-15 2019-02-12 天津大学 A kind of image scene classification method of the probability topic model of multiple features
CN109344813A (en) * 2018-11-28 2019-02-15 北醒(北京)光子科技有限公司 A kind of target identification and scene modeling method and device based on RGBD

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Scene text detection based on convolutional deep belief networks; Wang Lin et al.; Computer Systems & Applications; full text *
Image caption generation model fusing prior knowledge of image scenes and objects; Tang Pengjie et al.; Journal of Image and Graphics; full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant