CN111797261A - Feature extraction method and device, storage medium and electronic equipment - Google Patents

Feature extraction method and device, storage medium and electronic equipment

Info

Publication number
CN111797261A
CN111797261A CN201910282476.1A
Authority
CN
China
Prior art keywords
time
data
panoramic data
feature extraction
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282476.1A
Other languages
Chinese (zh)
Inventor
何明
陈仲铭
杨统
刘耀勇
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282476.1A priority Critical patent/CN111797261A/en
Publication of CN111797261A publication Critical patent/CN111797261A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the application disclose a feature extraction method, a feature extraction device, a storage medium, and electronic equipment. Panoramic data are collected separately based on geographic location and on time information. Features are extracted from the first panoramic data, collected based on geographic location, to form a first feature set; features are likewise extracted from the second panoramic data, collected based on time information, to form a second feature set. The first and second feature sets are then cross-fused to generate a cross feature set based on both geographic location and time information. This increases the dimensionality and scale of the features obtained from the panoramic data and fuses geographic-location attributes with time attributes, so the features are more comprehensive, the user's behavior habits and preferences are better captured in scene category identification, and the accuracy of scene category identification can be improved.

Description

Feature extraction method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of feature extraction technologies, and in particular, to a feature extraction method and apparatus, a storage medium, and an electronic device.
Background
If the scene of the terminal is to be identified from the terminal's panoramic data, the collected panoramic data must be processed: panoramic features are extracted from the panoramic data, and the current scene category is characterized or identified through those features. However, most prior-art feature extraction schemes extract panoramic features from existing data in isolation, and suffer from problems such as few feature dimensions and insufficiently comprehensive information, so the accuracy of scene category identification is low.
Disclosure of Invention
The embodiment of the application provides a feature extraction method and device, a storage medium and electronic equipment, which can increase feature dimensions of scene recognition and improve recognition accuracy of scene categories.
In a first aspect, an embodiment of the present application provides a feature extraction method, including:
collecting first panoramic data based on a geographic location;
generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data;
acquiring second panoramic data based on the time information;
generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data;
and performing cross fusion on the first feature set and the second feature set to generate a cross feature set based on geographic position and time information.
In a second aspect, an embodiment of the present application provides a feature extraction apparatus, including:
the first data acquisition module is used for acquiring first panoramic data based on the geographic position;
the first feature extraction module is used for generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data;
the second data acquisition module is used for acquiring second panoramic data based on the time information;
the second feature extraction module is used for generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data;
and the feature fusion module is used for performing cross fusion on the first feature set and the second feature set to generate a cross feature set based on the geographic position and the time information.
In a third aspect, an embodiment of the present application provides a storage medium having a computer program stored thereon which, when run on a computer, causes the computer to execute the feature extraction method provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory has a computer program, and the processor is configured to execute the feature extraction method provided in any embodiment of the present application by calling the computer program.
According to the technical scheme provided by the embodiment of the application, panoramic data are acquired respectively based on the geographic position and the time information, and the first panoramic data acquired based on the geographic position are subjected to feature extraction to form a first feature set; meanwhile, for the second panoramic data acquired based on the time information, the features are extracted from the second panoramic data to form a second feature set, the first feature set and the second feature set are subjected to cross fusion processing, and a cross feature set based on the geographic position and the time information is generated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of a panoramic sensing architecture of a feature extraction method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a first feature extraction method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a second method for feature extraction according to an embodiment of the present disclosure.
Fig. 4 is a schematic flow chart of a third method for feature extraction according to an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a feature extraction device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 7 is a second structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic view of a panoramic sensing architecture of a feature extraction method provided in an embodiment of the present application. The feature extraction method is applied to electronic equipment. A panoramic perception framework is arranged in the electronic equipment. The panoramic sensing architecture is an integration of hardware and software used for implementing the feature extraction method in electronic equipment.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information about the electronic equipment itself or about its external environment. The information perception layer may include a plurality of sensors, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, a distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor can be used for detecting light information of the environment where the electronic equipment is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect, and can be used for realizing automatic control of electronic equipment. The location sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocity of an electronic device in various directions. Inertial sensors may be used to detect motion data of an electronic device. The gesture sensor may be used to sense gesture information of the electronic device. A barometer may be used to detect the barometric pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
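The cleaning and integration steps described above can be sketched in a few lines of Python. This is a minimal illustration only; the record layout, field names, and deduplication key are assumptions for demonstration and are not part of the embodiment.

```python
def clean(records):
    """Data cleaning: remove invalid (missing-value) and repeated records."""
    seen, out = set(), []
    for r in records:
        key = (r.get("timestamp"), r.get("sensor"))
        if r.get("value") is None or key in seen:
            continue
        seen.add(key)
        out.append(r)
    return out

def integrate(records):
    """Data integration: merge single-dimensional readings taken at the same
    timestamp into one multi-dimensional record."""
    merged = {}
    for r in records:
        merged.setdefault(r["timestamp"], {})[r["sensor"]] = r["value"]
    return merged

raw = [
    {"timestamp": 1, "sensor": "light", "value": 120.0},
    {"timestamp": 1, "sensor": "light", "value": 120.0},   # repeated record
    {"timestamp": 1, "sensor": "accel", "value": 0.98},
    {"timestamp": 2, "sensor": "light", "value": None},    # invalid record
]
cleaned = clean(raw)
fused = integrate(cleaned)
```

Data transformation and reduction would follow the same pattern, operating on the fused records.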
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features or process the extracted features by methods such as the filter method, the wrapper method, or the ensemble method.
The filter method filters the extracted features to remove redundant feature data. The wrapper method screens the extracted features. The ensemble method integrates multiple feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture can further comprise a plurality of algorithms, each algorithm can be used for analyzing and processing data, and the plurality of algorithms can form an algorithm library. For example, the algorithm library may include algorithms such as markov algorithm, hidden dirichlet distribution algorithm, bayesian classification algorithm, support vector machine, K-means clustering algorithm, K-nearest neighbor algorithm, conditional random field, residual network, long-short term memory network, convolutional neural network, cyclic neural network, and the like.
Based on the panoramic sensing architecture, the electronic device collects first panoramic data based on geographic location and second panoramic data based on time information through the information perception layer and/or other means; the data processing layer then processes the first and second panoramic data, for example by data cleaning and data integration. Next, the feature extraction layer performs feature extraction according to the scheme provided by the embodiments of the application: features are extracted from the first panoramic data, collected based on geographic location, to form a first feature set; likewise, features are extracted from the second panoramic data, collected based on time information, to form a second feature set; the first and second feature sets are cross-fused to generate a cross feature set based on geographic location and time information.
An execution main body of the feature extraction method may be the feature extraction device provided in the embodiment of the present application, or an electronic device integrated with the feature extraction device, where the feature extraction device may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Taking the example that the feature extraction device is integrated in the intelligent terminal, the electronic device may collect panoramic data based on the geographic location and the time information, respectively, and extract features from the first panoramic data collected based on the geographic location to form a first feature set; meanwhile, for the second panoramic data acquired based on the time information, the features are extracted from the second panoramic data to form a second feature set, the first feature set and the second feature set are subjected to cross fusion processing, and a cross feature set based on the geographic position and the time information is generated.
Referring to fig. 2, fig. 2 is a first flowchart of a feature extraction method according to an embodiment of the present disclosure. The specific process of the feature extraction method provided by the embodiment of the application can be as follows:
step 101, collecting first panoramic data based on the geographic position.
The first panoramic data, the second panoramic data, and the like in the present embodiment include sensor data. The sensor data includes signals collected by various sensors on the electronic device, for example, the electronic device is provided with the following sensors: distance sensors, magnetic field sensors, light sensors, acceleration sensors, fingerprint sensors, hall sensors, position sensors, gyroscopes, inertial sensors, attitude sensors, barometers, heart rate sensors, etc.
In some embodiments, the status data of some sensors may be acquired in a targeted manner. For example, data collected by a position sensor and a light sensor are obtained, wherein current position information of the electronic device can be determined according to the data collected by the position sensor, and the light sensor can collect light intensity of the environment where the electronic device is currently located.
Further, in some embodiments, the panoramic data may fall into several categories: environmental data, user behavior data, and terminal operation data. Each category comprises a plurality of data items, and the panoramic data is the collection of all data items. In addition, "first" and "second" merely distinguish panoramic data obtained based on different information; the first and second panoramic data contain data items of the same categories.
The environmental data includes data such as time, place, air quality, weather, temperature, humidity, sound, and illumination. Some of this data, such as time and place, may be collected by corresponding sensors on the electronic device, while other data, such as air quality and weather, may be obtained over the network once the time and place of the electronic device are determined.
The user behavior data comprises data such as the user's history of launched application programs, song-listening history, video-watching records, conversation behavior, and game-playing records; the usage records of the relevant application programs are collected by the electronic equipment.
The terminal operation data comprises the operation mode of the electronic equipment in each time interval; for example, the operation modes include a game mode, an entertainment mode, a video mode, and the like. The operation mode of the electronic equipment may be determined according to the type of the currently running application program, which can be obtained directly from the classification information of the application's installation package. Alternatively, the terminal operation data may further include the remaining power, display mode, network state, and screen-off/lock state of the electronic equipment.
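The mapping from the currently running application's category to an operation mode can be sketched as follows. The category names and the mapping itself are illustrative assumptions, not values specified by the embodiment.

```python
# Assumed mapping from application category (read from the installation
# package's classification information) to the terminal operation mode.
APP_CATEGORY_TO_MODE = {
    "game": "game mode",
    "music": "entertainment mode",
    "video": "video mode",
}

def operation_mode(app_category: str) -> str:
    """Derive the operation mode from the running app's category,
    falling back to a default mode for unmapped categories."""
    return APP_CATEGORY_TO_MODE.get(app_category, "standard mode")
```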
In the following, sensor data is taken as an example of the panoramic data. A user may appear at a number of different places while using the electronic device, and the application scenarios of the electronic device differ across geographic locations, so the sensor data used also differ. Likewise, the application scenarios differ across time periods, so the sensor data used carry a certain temporal characteristic.
As shown in fig. 3, fig. 3 is a schematic flow chart of a second feature extraction method provided in the embodiment of the present application. In some embodiments, step S101, collecting the first panoramic data based on the geographic location, includes:
step S1011, collecting first panoramic data according to a preset frequency, and recording a geographical position corresponding to each first panoramic data;
step S1012, taking the geographic location as a reference, performing statistics on the collected first panoramic data, and obtaining first panoramic data corresponding to each geographic location.
During the operation of the electronic equipment, sensor data are continuously collected at a preset frequency as the first panoramic data. Which sensors' data are to be acquired is determined in advance, and the acquired sensor data are recorded as one record; that is, one piece of first panoramic data includes data from a plurality of sensors. Each piece of sensor data corresponds to one piece of geographic location information: when a piece of sensor data is recorded, the geographic location obtained at collection time is recorded in association with it.
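One such record, with several sensors' readings stored in association with the geographic location obtained at collection time, can be sketched as below. The field names and the sensor set are illustrative assumptions.

```python
import time

def collect_record(read_sensors, read_location):
    """Build one piece of first panoramic data.

    read_sensors()  -> dict of readings from the preselected sensors
    read_location() -> (latitude, longitude) at collection time
    """
    return {
        "timestamp": time.time(),
        "location": read_location(),   # geographic location recorded in association
        "sensors": read_sensors(),     # one record holds several sensors' data
    }

# Stub sensor/location readers stand in for real device APIs.
record = collect_record(
    read_sensors=lambda: {"accel": 0.98, "light": 120.0, "pressure": 1013.2},
    read_location=lambda: (22.57, 113.87),
)
```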
For example, MySQL is used to construct a geographic-location-based panoramic database in advance. After sensor data are collected, the electronic device performs statistics periodically or at preset time intervals, for example once every 24 hours: the collected sensor data are counted with the geographic locations as reference, the sensor data corresponding to each geographic location are obtained, and the sensor data are stored in the MySQL database in the form <a_i, d(a_i)>, i ∈ (1, n),
where a_i represents a geographic location and d(a_i) represents the set of all sensor data collected at geographic location a_i. The size of n is not fixed; it is determined by the number of actual geographic locations after the collected sensor data are counted with the geographic locations as reference.
In addition, the position information obtained by positioning the electronic device through the position sensor is generally latitude and longitude data. After a period of acquisition, many different latitude/longitude values are generated, each representing a positioning point, and in practice some positioning points lie very close together. A clustering algorithm may therefore be adopted when counting the acquired sensor data with the geographic position as reference.
And 102, generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data.
In some embodiments, the step of taking the geographic locations as a reference, performing statistics on the collected first panoramic data, and obtaining the first panoramic data corresponding to each geographic location includes:
counting the geographic position of the first panoramic data to generate a geographic position list;
determining a density parameter of clustering analysis, clustering geographic positions in the geographic position list according to the density parameter and a preset clustering algorithm, and acquiring a plurality of geographic position sets;
and counting the collected first panoramic data by taking the geographic position sets as a reference to obtain first panoramic data corresponding to each geographic position set.
The preset clustering algorithm may be DBSCAN (Density-Based Spatial Clustering of Applications with Noise). Its density parameters are the aggregation radius and MinPts, the minimum number of points required to form a cluster. These two parameters may be preset as needed; for example, the aggregation radius is set to 1 km and MinPts to 10.
The geographic location corresponding to each recorded piece of first panoramic data is acquired and a geographic location list is generated; the geographic locations in the list are clustered by the DBSCAN algorithm, finally producing a plurality of geographic location sets. Each set comprises a plurality of longitude/latitude points, and a point near the center of a geographic location set can be selected to represent that set.
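The clustering step can be sketched with scikit-learn's DBSCAN using the parameters suggested above (aggregation radius 1 km, MinPts = 10); the use of scikit-learn and the synthetic positioning points are assumptions for illustration. The haversine metric expects coordinates in radians, so the radius is expressed as an angle, radius divided by the Earth's radius (about 6371 km).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_locations(latlon_deg, radius_km=1.0, min_pts=10):
    """Cluster (lat, lon) positioning points; returns one label per point,
    with -1 marking noise points that belong to no geographic location set."""
    coords = np.radians(np.asarray(latlon_deg))
    db = DBSCAN(eps=radius_km / 6371.0, min_samples=min_pts,
                metric="haversine").fit(coords)
    return db.labels_

# Two tight groups of positioning points (~100 m spread) plus one isolated point.
rng = np.random.default_rng(0)
site_a = [22.57, 113.87] + rng.normal(scale=1e-3, size=(12, 2))
site_b = [39.90, 116.40] + rng.normal(scale=1e-3, size=(12, 2))
points = np.vstack([site_a, site_b, [[0.0, 0.0]]])
labels = cluster_locations(points)
```

Each distinct non-negative label corresponds to one geographic location set.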
For example, in some embodiments, after multiple geographic location sets are obtained through the DBSCAN algorithm, the geographic location data in each set are used as new input, and the position of the centroid is found by iterative aggregation with the K-means algorithm, the K value being set to 1. After the centroid is obtained, it is used as the central point of the geographic location set, its longitude/latitude data are acquired, and its address on the map is looked up to represent the geographic location set, for example "No. XX, XX Street, XX District, XX City".
In this manner, the centroid of each geographic location set may be obtained, together with its specific address information. Assuming there are 10 geographic location sets, ten pieces of specific address information a_1, a_2, …, a_10 are finally obtained.
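The centroid step can be sketched as follows: the points of one geographic location set are fed to K-means with K = 1, and the single cluster center is the centroid that represents the set. The reverse-geocoding of the centroid to a street address is outside this sketch, and the use of scikit-learn is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def set_centroid(latlon_deg):
    """Iteratively aggregate one geographic location set with K-means (K = 1);
    the single cluster center is the set's centroid."""
    km = KMeans(n_clusters=1, n_init=10, random_state=0)
    km.fit(np.asarray(latlon_deg))
    return km.cluster_centers_[0]  # (lat, lon) of the centroid

centroid = set_centroid([[22.570, 113.870], [22.572, 113.872], [22.574, 113.874]])
```

With K = 1 the center converges to the mean of the points, so a plain mean would give the same result; K-means is shown here because it is the algorithm named in the embodiment.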
Then, taking the geographic location set as the unit, the first panoramic data corresponding to each geographic location in the set are obtained and associated with the address of the set's centroid, and the obtained first panoramic data are stored in the database in the form <a_i, d(a_i)>. For example, a_1 is the centroid of a clustered geographic location set and represents all geographic locations in that set, while d(a_1) represents the first panoramic data corresponding to all geographic locations of the set.
Here, d(a_i) can be expressed in the form of a third-order tensor with three dimensions: the geographic position corresponds to one dimension; since a plurality of pieces of first panoramic data are collected at one geographic position, the number of pieces corresponds to a second dimension; and since there are many categories of panoramic data, the data category constitutes the third dimension.
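The third-order tensor layout can be sketched with a NumPy array; the axis order and the sizes (10 location sets, 100 records, 10 data categories) are illustrative assumptions.

```python
import numpy as np

# Axis 0: geographic location sets; axis 1: records collected at each
# location; axis 2: panoramic data categories (here, sensor channels).
n_locations, n_records, n_channels = 10, 100, 10
panorama = np.zeros((n_locations, n_records, n_channels))

# Store one record (readings from all channels) for location set a_1 (index 0):
panorama[0, 0, :] = np.linspace(0.0, 0.9, n_channels)

d_a1 = panorama[0]   # d(a_1): all first panoramic data of location set a_1
```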
After panoramic data acquisition is finished, the stored first panoramic data <a_i, d(a_i)>, counted based on geographic location, are read from the MySQL database, and feature extraction is performed according to a preset first feature extraction algorithm to generate a first feature set.
If the DBSCAN algorithm is used to cluster the geographic locations, step 102, generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data, may include the following refinement steps: and according to a preset first feature extraction algorithm, acquiring first features corresponding to the geographic position sets from corresponding first panoramic data, and forming the first feature sets by the first features corresponding to the multiple geographic position sets.
The embodiment of the present application may provide the following two ways of extracting the first features based on geographic location. In the first way, according to a statistics-based feature extraction algorithm, statistical features are extracted from the first panoramic data corresponding to each geographic location to form the first feature set. A statistics-based feature extraction algorithm generally extracts data such as the maximum value, minimum value, and mean value of each category of the first panoramic data as features. For example, if the first panoramic data include sensor data, feature extraction treats the data of each sensor as one category of first panoramic data. For instance, for the acceleration sensor data in <a_1, d(a_1)>, all pieces of acceleration sensor data in d(a_1) are acquired; assuming 100 pieces of acceleration sensor data were recorded in total, the features are obtained by calculating three values (the maximum, minimum, and mean) of those 100 pieces.
In this way, all features corresponding to a_1 can be extracted, constituting a first feature, which may be denoted f(a_1). Similarly, the f(a_i) corresponding to each a_i can be obtained, where i ∈ (1, n). Assuming there are 10 sensors, each contributing three characteristic values (maximum, minimum, and mean), f(a_i) can be expressed as a long vector of length 30.
The first set of features based on geographic location may be represented as:
X = [f(a1), f(a2), ……, f(ai), ……, f(an)].
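The statistics-based extraction of the first mode can be sketched as follows; the dictionary layout of one geographic position set's panoramic data and the function name are assumptions for illustration:

```python
def statistical_features(panoramic_data):
    """panoramic_data maps a sensor name to its recorded readings; the
    returned list is the long vector f(a_i): (max, min, mean) per sensor."""
    features = []
    for name in sorted(panoramic_data):          # fixed sensor order
        values = panoramic_data[name]
        features += [max(values), min(values), sum(values) / len(values)]
    return features
```

With 10 sensors, each contributing three values, the concatenated vector has length 30, matching f(ai) above; computing one such vector per geographic position set yields X.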
In a second mode, the step of generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data includes: converting the first panoramic data into frequency domain panoramic data according to a discrete Fourier transform; and extracting frequency domain features from the frequency domain panoramic data corresponding to each geographic position to form the first feature set X = [f(a1), f(a2), ……, f(an)].
In the second method, after the discrete first panoramic data is converted into frequency domain panoramic data by discrete fourier transform, frequency domain features, such as amplitude, period, and phase, are extracted. Here, the feature extraction is also performed in units of sensor types, that is, the frequency domain features such as the amplitude, the period, and the phase may be acquired for each sensor in the above manner. After feature extraction is performed on all sensor data in one geographical position set, a large number of first features are generated to form a first feature set X.
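A minimal sketch of the second mode, assuming the raw samples of one sensor and a naive discrete Fourier transform; returning only the dominant non-DC component's amplitude, phase, and period is an illustrative simplification:

```python
import cmath
import math

def frequency_features(samples, sample_rate):
    """Naive DFT over one sensor's samples; returns the amplitude, phase and
    period (in seconds) of the strongest non-DC frequency component."""
    n = len(samples)
    spectrum = [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
                for k in range(n // 2 + 1)]
    # skip bin k = 0 (the DC offset) and pick the strongest remaining bin
    k = max(range(1, len(spectrum)), key=lambda i: abs(spectrum[i]))
    amplitude = 2 * abs(spectrum[k]) / n     # rescale to signal units
    phase = cmath.phase(spectrum[k])
    period = n / (k * sample_rate)           # seconds per cycle
    return amplitude, phase, period
```

In practice an FFT (e.g. `numpy.fft.rfft`) would replace the O(n²) loop; the per-sensor triples are then concatenated into f(ai) exactly as in the first mode.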
And 103, acquiring second panoramic data based on the time information.
And 104, generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data.
Next, the second panoramic data is acquired based on the time information, and since time is sequential, when the panoramic data acquired based on the time information is to be counted, the time period may be divided, and information counting may be performed according to the time period.
Referring to fig. 4, fig. 4 is a schematic flow chart of a third method for extracting features according to the embodiment of the present application. In some embodiments, the step 103 of acquiring the second panoramic data based on the time information includes:
step 1031, collecting second panoramic data in a time interval according to a preset frequency, and recording time information of each piece of second panoramic data;
step 1032, dividing the time interval into a plurality of time periods;
and 1033, counting the second panoramic data in the time interval by taking the time period as a reference, and acquiring a second panoramic data sequence corresponding to each time period.
For example, taking 24 hours as a time interval, sensors in the electronic device collect second panoramic data at a preset frequency within the time interval. Which sensors' data need to be collected is determined in advance; when the collected sensor data are recorded, the data collected at one time are taken as one piece of data, that is, one piece of second panoramic data includes data of a plurality of sensors. The acquisition time of each piece of sensor data is recorded as the corresponding time information.
Next, the time interval is divided into a plurality of time periods, and the panoramic data in the time interval is counted according to the time periods.
In some embodiments, the time periods may be divided manually, e.g., each hour as one time period, and then a day may be divided into 24 time periods.
Alternatively, in some other embodiments, the time segments may be partitioned based on the information entropy. Specifically, the step 1032 of dividing the time interval into a plurality of time periods includes:
calculating the total information entropy of the time interval according to the second panoramic data in the time interval;
dividing the time interval into two time periods according to a time division point in a preset value range, and calculating weighted average information entropy of the two divided time periods;
determining a time division point with the maximum difference value between the weighted average information entropy and the total information entropy;
dividing the time interval into a first time period and a second time period according to the time division point;
taking the first time period and the second time period as new time intervals, and repeatedly executing the step of calculating the total information entropy of the time intervals according to the second panoramic data in the time intervals to the step of dividing the time intervals into the first time period and the second time period according to the time division points until the number of segments of the time intervals is greater than a preset threshold value.
Any kind of second panoramic data can be used to calculate the information entropy. For example, the second panoramic data includes terminal running data, and the terminal running data includes APP start data. Taking the WeChat APP as an example, assuming a time unit of 10 minutes, a total of 144 time units are obtained in 24 hours, numbered from 1 to 144. The number of times the WeChat APP is opened in each ten minutes is counted and recorded as count_t, where t represents the t-th ten minutes.
Based on the number of times the WeChat APP is opened, the probability within the t-th ten minutes is calculated:

p_t = count_t / (count_1 + count_2 + …… + count_T)
the time interval is then divided into a plurality of time segments based on the information entropy of time:
a. Calculate the total information entropy of the whole time interval:

Ent([1, T]) = −(p_1·log(p_1) + p_2·log(p_2) + …… + p_T·log(p_T)), where T = 144.
b. Suppose the time interval is divided at the x-th point, where x takes values in the range [2, 143]. Taking division at the 10th point as an example, the time interval can be divided into two parts: 1 to 10 and 11 to 144. Then calculate the weighted average information entropy of the two time periods [1, 10] and [11, 144] after the division:

Ent([1, T]; x) = (x/T)·Ent([1, x]) + ((T − x)/T)·Ent([x + 1, T])
c. Calculate the difference between the total information entropy and the weighted average information entropy: ΔE(x) = Ent([1, T]) − Ent([1, T]; x); for the example above, ΔE(10) = Ent([1, T]) − Ent([1, T]; 10).
d. Repeat steps b and c; since the value range of x in step b is [2, 143], each x yields a ΔE(x). Select the largest ΔE(x) and take the corresponding x as the time division point. Assuming ΔE is maximum when x = 20, the division point is 20, and the time interval is divided into two time periods [1, 20] and [21, 144].
After step d, two time periods are obtained: [1, x] and [x + 1, 144]. Steps a to d are repeated for each of the two divided time periods until the number of segments obtained by dividing the time interval reaches a preset threshold; for example, if the preset threshold is 24, the iteration stops when the complete time interval has been divided into 24 time periods.
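The entropy-based division of steps a to d might be sketched as follows. The function names are illustrative, and always splitting the longest remaining segment first is a simplifying heuristic standing in for the patent's rule of re-applying the steps to every new segment:

```python
import math

def entropy(counts):
    """Shannon entropy of a segment, from per-time-unit open counts."""
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def best_split(counts):
    """Split point x maximising dE(x) = Ent(whole) - weighted Ent(parts)."""
    T = len(counts)
    total_ent = entropy(counts)

    def delta(x):
        weighted = ((x / T) * entropy(counts[:x])
                    + ((T - x) / T) * entropy(counts[x:]))
        return total_ent - weighted

    return max(range(1, T), key=delta)

def split_interval(counts, max_segments):
    """Recursively divide [0, T) until max_segments segments exist."""
    segments = [(0, len(counts))]
    while len(segments) < max_segments:
        start, end = max(segments, key=lambda s: s[1] - s[0])
        if end - start < 2:          # nothing left to split
            break
        x = best_split(counts[start:end])
        segments.remove((start, end))
        segments += [(start, start + x), (start + x, end)]
    return sorted(segments)
```

Feeding in the 144 per-ten-minute WeChat open counts with `max_segments=24` would then reproduce the example's 24 time periods.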
After the time periods are divided, the second panoramic data in the time interval are counted by taking the time periods as a reference, and the second panoramic data sequence corresponding to each time period is obtained and stored in the MySQL database in the form of <tj, d(tj)>. For example, if one of the divided time periods is [1, 3], the time period corresponds to the first half hour of the time interval. Assuming that the acquisition frequency is once per second, 1800 pieces of second panoramic data correspond to the second panoramic data sequence of that time period.
It should be noted that the above time intervals, time units and other parameters are all exemplified, and in practical application, the time intervals, time units and other parameters may be set as needed, and are not limited to the above numerical values.
Step 104, generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data, wherein the step includes: and extracting time domain features from the second panoramic data sequence corresponding to the time periods according to a time domain feature extraction algorithm, and forming the second feature set by the time domain features corresponding to the time periods.
Since the panoramic data sequence has the characteristic of time sequence, a time domain feature extraction algorithm can be adopted to extract time domain features from the panoramic data sequence as the second features. Taking the acceleration sensor data as an example, the acceleration sensor data sequence is counted, and time domain characteristics such as the peak value, the mean value, the root mean square value, the kurtosis index, and the waveform factor of the data sequence are acquired as the second features. The time domain feature f(tj) in each time period is extracted in this way, where j ∈ (1, m), and the f(tj) form the second feature set Y = [f(t1), f(t2), ……, f(tj), ……, f(tm)].
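The listed time domain characteristics can be computed for one sensor's sequence roughly as follows; the use of population statistics and the naming of the returned keys are illustrative assumptions:

```python
import math

def time_domain_features(seq):
    """Peak, mean, RMS, kurtosis index and waveform factor of one sensor's
    data sequence within a time period, i.e. ingredients of f(t_j)."""
    n = len(seq)
    mean = sum(seq) / n
    rms = math.sqrt(sum(x * x for x in seq) / n)
    peak = max(abs(x) for x in seq)
    var = sum((x - mean) ** 2 for x in seq) / n
    kurtosis = ((sum((x - mean) ** 4 for x in seq) / n) / (var ** 2)
                if var else 0.0)
    form_factor = rms / (sum(abs(x) for x in seq) / n)  # RMS over mean |x|
    return {"peak": peak, "mean": mean, "rms": rms,
            "kurtosis": kurtosis, "form_factor": form_factor}
```

Concatenating these values across all sensors in a time period gives f(tj), and the m time periods give Y.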
And 105, performing cross fusion on the first feature set and the second feature set to generate a cross feature set based on the geographic position and the time information.
After the first feature set X and the second feature set Y are obtained, features in the two feature sets are subjected to cross fusion.
Specifically, in another optional embodiment, the step 105 of cross-fusing the first feature set and the second feature set to generate a cross-feature set based on geographic location and time information includes:
and calculating Cartesian products of the first feature set and the second feature set, and taking the Cartesian products as the cross feature set.
In the embodiment of the application, the features are cross-fused by calculating the Cartesian product of the two sets. Specifically, the Cartesian products of the first feature set X and the second feature set Y may be expressed as X × Y and Y × X. The elements of the Cartesian product are pairs of an element in the first feature set X and an element in the second feature set Y, specifically as follows:
X×Y={{f(a1),f(t1)},{f(a1),f(t2)},……,{f(a1),f(tm)},……,
{f(a2),f(t1)},{f(a2),f(t2)},……,{f(a2),f(tm)},……,
{f(an),f(t1)},{f(an),f(t2)},……,{f(an),f(tm)}}
Y×X={{f(t1),f(a1)},{f(t1),f(a2)},……,{f(t1),f(an)},……,
{f(t2),f(a1)},{f(t2),f(a2)},……,{f(t2),f(an)},……,
{f(tm),f(a1)},{f(tm),f(a2)},……,{f(tm),f(an)}}
The cross fusion features obtained through the Cartesian product can embody both the geographic position and the time information. For example, {f(a1), f(t1)} represents the panoramic features of the user at the geographic position a1 during the time period t1, and these crossed features form the cross feature set. By constructing the features based on the geographic position (namely the first features), the movement attribute of the panoramic data can be better described; by constructing the panoramic features based on time (namely the second features), the panoramic preferences and habits of the user in the time dimension can be reflected more accurately (panoramic data reflects panoramic information). In addition, through the cross fusion of the features based on the geographic position and the features based on time, the scale and range of the panoramic features are remarkably enlarged on the one hand; on the other hand, by fusing the geographic track attribute and the time attribute simultaneously, the behavior habits and panoramic preferences of the user can be accurately depicted in the identification of the panoramic category, which greatly improves the recognition accuracy of the panoramic category and the user experience in practical applications.
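The cross fusion by Cartesian product can be sketched with `itertools.product`; the string placeholders stand in for real feature vectors, and Y × X is obtained symmetrically by swapping the arguments:

```python
from itertools import product

def cross_fuse(first_set, second_set):
    """X x Y: pair every location feature f(a_i) with every time feature
    f(t_j); each pair is one cross feature."""
    return [list(pair) for pair in product(first_set, second_set)]

# illustrative placeholders standing in for the real feature vectors
X = ["f(a1)", "f(a2)"]
Y = ["f(t1)", "f(t2)", "f(t3)"]
crossed = cross_fuse(X, Y)   # n * m pairs; Y x X is cross_fuse(Y, X)
```

With n location features and m time features, the fused set contains n·m cross features, which is the scale enlargement described above.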
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
As can be seen from the above, in the feature extraction method provided in the embodiment of the present application, panoramic data is acquired based on geographic position and time information, and features are extracted from the panoramic data acquired based on the geographic position to form a first feature set; meanwhile, the characteristics are extracted from the panoramic data acquired based on the time information to form a second characteristic set, the first characteristic set and the second characteristic set are subjected to cross fusion processing to generate a cross characteristic set based on the geographic position and the time information, so that the dimensionality and scale of the characteristics acquired from the panoramic data are increased, the geographic position attribute and the time attribute can be fused, the characteristics are more comprehensive, the behavior habits and the preferences of the user can be better drawn on the identification of the scene categories, and the accuracy of scene category identification can be improved.
In one embodiment, a feature extraction apparatus is also provided. Referring to fig. 5, fig. 5 is a schematic structural diagram of a feature extraction apparatus 400 according to an embodiment of the present disclosure. The feature extraction apparatus 400 is applied to an electronic device, and the feature extraction apparatus 400 includes a first data acquisition module 401, a first feature extraction module 402, a second data acquisition module 403, a second feature extraction module 404, and a feature fusion module 405, as follows:
a first data collecting module 401, configured to collect the first panoramic data based on the geographic location.
A first feature extraction module 402, configured to generate a first feature set according to a preset first feature extraction algorithm and the first panoramic data.
A second data collecting module 403, configured to collect second panoramic data based on the time information.
A second feature extraction module 404, configured to generate a second feature set according to a preset second feature extraction algorithm and the second panoramic data.
A feature fusion module 405, configured to perform cross fusion on the first feature set and the second feature set to generate a cross feature set based on geographic location and time information.
In some embodiments, the feature fusion module 405 is further configured to: and calculating Cartesian products of the first feature set and the second feature set, and taking the Cartesian products as the cross feature set.
In some embodiments, the first data acquisition module 401 is further configured to: acquiring first panoramic data according to a preset frequency, and recording a geographical position corresponding to each piece of first panoramic data; and counting the collected first panoramic data by taking the geographic positions as a reference to obtain first panoramic data corresponding to each geographic position.
In some embodiments, the first feature extraction module 402 is further configured to: and according to a feature extraction algorithm based on statistics, extracting statistical features from the first panoramic data corresponding to each geographic position to form the first feature set.
In some embodiments, the first feature extraction module 402 is further configured to: converting the first panoramic data into frequency domain panoramic data according to a discrete Fourier transform; and extracting frequency domain features from the frequency domain panoramic data corresponding to each geographic position to form the first feature set.
In some embodiments, the first feature extraction module 402 is further configured to: counting the geographic position of the first panoramic data to generate a geographic position list;
determining a density parameter of clustering analysis, clustering geographic positions in the geographic position list according to the density parameter and a preset clustering algorithm, and acquiring a plurality of geographic position sets;
counting the collected first panoramic data by taking the geographic position sets as a reference to obtain first panoramic data corresponding to each geographic position set;
and according to a preset first feature extraction algorithm, acquiring first features corresponding to the geographic position sets from corresponding first panoramic data, and forming the first feature sets by the first features corresponding to the multiple geographic position sets.
In some embodiments, the second data acquisition module 403 is further configured to: acquiring second panoramic data in a time interval according to a preset frequency, and recording time information of each piece of second panoramic data;
dividing the time interval into a plurality of time segments;
counting the second panoramic data in the time interval by taking the time intervals as a reference to obtain a second panoramic data sequence corresponding to each time interval;
and extracting time domain features from the second panoramic data sequence corresponding to the time periods according to a time domain feature extraction algorithm, and forming the second feature set by the time domain features corresponding to the time periods.
In some embodiments, the second data acquisition module 403 is further configured to:
in some embodiments, the total information entropy of the time interval is calculated from the second panorama data within the time interval;
dividing the time interval into two time periods according to a time division point in a preset value range, and calculating weighted average information entropy of the two divided time periods;
determining a time division point with the maximum difference value between the weighted average information entropy and the total information entropy;
dividing the time interval into a first time period and a second time period according to the time division point;
taking the first time period and the second time period as new time intervals, and repeatedly executing the step of calculating the total information entropy of the time intervals according to the second panoramic data in the time intervals to the step of dividing the time intervals into the first time period and the second time period according to the time division points until the number of segments of the time intervals is greater than a preset threshold value.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the feature extraction apparatus provided in this embodiment of the application, the first data acquisition module 401 acquires first panoramic data based on a geographic location, the second data acquisition module 403 acquires second panoramic data based on time information, and for the first panoramic data acquired based on the geographic location, the first feature extraction module 402 extracts features from the first panoramic data to form a first feature set; meanwhile, for second panoramic data acquired based on time information, the second feature extraction module 404 extracts features from the second panoramic data to form a second feature set, the feature fusion module 405 performs cross fusion on the first feature set and the second feature set to generate a cross feature set based on geographic position and time information, so that the dimensionality and scale of the features acquired from the panoramic data are increased, and the geographic position attribute and the time attribute can be fused, so that the features are more comprehensive, behavior habits and preferences of users can be better depicted in scene category identification, and the accuracy of scene category identification can be further improved.
The embodiment of the application also provides the electronic equipment. The electronic device can be a smart phone, a tablet computer and the like. As shown in fig. 6, fig. 6 is a first schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 300 comprises a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is a control center of the electronic device 300, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or calling a computer program stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to one or more processes of the computer program into the memory 302 according to the following steps, and the processor 301 runs the computer program stored in the memory 302, so as to implement various functions:
collecting first panoramic data based on a geographic location;
generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data;
acquiring second panoramic data based on the time information;
generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data;
and performing cross fusion on the first feature set and the second feature set to generate a cross feature set based on geographic position and time information.
In some embodiments, when the first feature set and the second feature set are cross-fused to generate a cross-feature set based on geographic location and time information, the processor 301 performs the following steps:
and calculating Cartesian products of the first feature set and the second feature set, and taking the Cartesian products as the cross feature set.
In some embodiments, when acquiring the first panoramic data based on the geographic location, the processor 301 performs the following steps:
collecting panoramic data according to a preset frequency, and recording a geographic position corresponding to each piece of panoramic data;
and counting the collected first panoramic data by taking the geographic positions as a reference to obtain first panoramic data corresponding to each geographic position.
In some embodiments, when generating the first feature set according to a preset first feature extraction algorithm and the first panoramic data, the processor 301 performs the following steps:
and according to a feature extraction algorithm based on statistics, extracting statistical features from the first panoramic data corresponding to each geographic position to form the first feature set.
In some embodiments, when generating the first feature set according to a preset first feature extraction algorithm and the first panoramic data, the processor 301 performs the following steps:
converting the first panoramic data into frequency domain panoramic data according to a discrete Fourier transform;
and extracting frequency domain features from the frequency domain panoramic data corresponding to each geographic position to form the first feature set.
In some embodiments, when the collected first panoramic data is counted by taking the geographic location as a reference and the first panoramic data corresponding to each geographic location is obtained, the processor 301 performs the following steps:
counting the geographic position of the panoramic data to generate a geographic position list;
determining a density parameter of clustering analysis, clustering geographic positions in the geographic position list according to the density parameter and a preset clustering algorithm, and acquiring a plurality of geographic position sets;
taking the geographic position sets as a reference, counting the collected panoramic data, and acquiring panoramic data corresponding to each geographic position set;
when generating the first feature set according to the preset first feature extraction algorithm and the first panoramic data, the processor 301 executes the following steps:
and according to a preset first feature extraction algorithm, acquiring first features corresponding to the geographic position sets from corresponding panoramic data, and forming the first feature sets by the first features corresponding to the multiple geographic position sets.
In some embodiments, when acquiring the second panoramic data based on the time information, the processor 301 performs the following steps:
acquiring panoramic data in a time interval according to a preset frequency, and recording time information of each piece of panoramic data;
dividing the time interval into a plurality of time segments;
taking the time periods as a reference, counting the panoramic data in the time intervals, and acquiring a panoramic data sequence corresponding to each time period;
when generating the second feature set according to a preset second feature extraction algorithm and the second panoramic data, the processor 301 executes the following steps: and extracting time domain features from the panoramic data sequence corresponding to the time periods according to a time domain feature extraction algorithm, and forming the second feature set by the time domain features corresponding to the time periods.
In some embodiments, when the time interval is divided into a plurality of time periods, the processor 301 performs the following steps:
calculating the total information entropy of the time interval according to the second panoramic data in the time interval;
dividing the time interval into two time periods according to a time division point in a preset value range, and calculating weighted average information entropy of the two divided time periods;
determining a time division point with the maximum difference value between the weighted average information entropy and the total information entropy;
dividing the time interval into a first time period and a second time period according to the time division point;
taking the first time period and the second time period as new time intervals, and repeatedly executing the step of calculating the total information entropy of the time intervals according to the second panoramic data in the time intervals to the step of dividing the time intervals into the first time period and the second time period according to the time division points until the number of segments of the time intervals is greater than a preset threshold value.
Memory 302 may be used to store computer programs and data. The memory 302 stores computer programs containing instructions executable in the processor. The computer program may constitute various functional modules. The processor 301 executes various functional applications and feature extraction by calling a computer program stored in the memory 302.
In some embodiments, as shown in fig. 7, fig. 7 is a second schematic structural diagram of an electronic device provided in the embodiments of the present application. The electronic device 300 further includes: radio frequency circuit 303, display screen 304, control circuit 305, input unit 306, audio circuit 307, sensor 308, and power supply 309. The processor 301 is electrically connected to the rf circuit 303, the display 304, the control circuit 305, the input unit 306, the audio circuit 307, the sensor 308, and the power source 309, respectively.
The radio frequency circuit 303 is used for transceiving radio frequency signals to communicate with a network device or other electronic devices through wireless communication.
The display screen 304 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 305 is electrically connected to the display screen 304, and is used for controlling the display screen 304 to display information.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 306 may include a fingerprint recognition module.
Audio circuitry 307 may provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuitry 307 includes a microphone, which is electrically connected to the processor 301 and is used for receiving voice information input by the user.
The sensor 308 is used to collect external environmental information. The sensor 308 may include one or more of an ambient light sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 309 is used to power the various components of the electronic device 300. In some embodiments, the power source 309 may be logically coupled to the processor 301 through a power management system, such that functions to manage charging, discharging, and power consumption management are performed through the power management system.
Although not shown in fig. 7, the electronic device 300 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, the electronic device provided in the embodiments of the present application may acquire panoramic data based on geographic location and time information, respectively, and extract features from first panoramic data acquired based on the geographic location to form a first feature set; meanwhile, for the second panoramic data acquired based on the time information, the features are extracted from the second panoramic data to form a second feature set, the first feature set and the second feature set are subjected to cross fusion processing, and a cross feature set based on the geographic position and the time information is generated.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the feature extraction method according to any of the above embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The term "module" as used herein may be considered a software object executing on the computing system. The different components, modules, engines, and services described herein may be considered as implementation objects on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present application.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The feature extraction method, the feature extraction device, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principle and implementation of the present application, and the above description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A method of feature extraction, comprising:
acquiring first panoramic data based on a geographic location;
generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data;
acquiring second panoramic data based on the time information;
generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data;
and performing cross fusion on the first feature set and the second feature set to generate a cross feature set based on geographic position and time information.
2. The feature extraction method of claim 1, further comprising:
and calculating Cartesian products of the first feature set and the second feature set, and taking the Cartesian products as the cross feature set.
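The Cartesian-product crossing of claims 2 and 10 can be sketched in a few lines of Python. This is an illustrative sketch only, not the patented implementation; the feature values below are hypothetical placeholders for location-based and time-based features.

```python
from itertools import product

def cross_features(first_set, second_set):
    # Cross feature set = Cartesian product of the location-based
    # feature set and the time-based feature set.
    return list(product(first_set, second_set))

# Hypothetical feature values, for illustration only.
geo_features = ["near_home", "near_office"]
time_features = ["morning", "evening"]
crossed = cross_features(geo_features, time_features)
# Every (geographic, temporal) pair appears exactly once.
```

Each element of the cross feature set jointly encodes where and when, which is what lets downstream models condition on location-time combinations rather than on each dimension separately.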
3. The feature extraction method of claim 1 or 2, wherein the step of acquiring the first panoramic data based on a geographic location comprises:
acquiring first panoramic data according to a preset frequency, and recording a geographical position corresponding to each piece of first panoramic data;
and counting the collected first panoramic data by taking the geographic positions as a reference to obtain first panoramic data corresponding to each geographic position.
4. The feature extraction method of claim 3, further comprising:
and according to a feature extraction algorithm based on statistics, extracting statistical features from the first panoramic data corresponding to each geographic position to form the first feature set.
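A statistics-based extraction step as in claim 4 can be sketched as follows, assuming the first panoramic data at one geographic position reduces to a sequence of numeric samples (the sample values are hypothetical):

```python
import statistics

def statistical_features(samples):
    # Summary statistics over the panoramic data samples collected
    # at a single geographic position.
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
    }

# Hypothetical sensor readings at one position.
feats = statistical_features([3.0, 5.0, 4.0, 8.0])
```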
5. The feature extraction method of claim 3, further comprising:
converting the first panoramic data into frequency domain panoramic data according to a discrete Fourier transform;
and extracting frequency domain features from the frequency domain panoramic data corresponding to each geographic position to form the first feature set.
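The discrete-Fourier-transform step of claim 5 can be sketched with a naive DFT; real systems would use an FFT library, and the choice of which spectral quantities to keep as features is an assumption here, not specified by the claim.

```python
import cmath

def dft(samples):
    # Naive O(n^2) discrete Fourier transform of a real-valued sequence.
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def frequency_features(samples):
    # Magnitude spectrum; the DC level and the dominant non-DC bin
    # serve as simple frequency-domain features. A real input has a
    # symmetric spectrum, so only the first half is searched.
    mags = [abs(c) for c in dft(samples)]
    dominant = max(range(1, len(mags) // 2 + 1), key=lambda k: mags[k])
    return {"dc": mags[0], "dominant_bin": dominant}
```

For example, a signal completing two cycles over eight samples concentrates its energy in frequency bin 2.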
6. The feature extraction method of claim 3, further comprising:
counting the geographic position of the first panoramic data to generate a geographic position list;
determining a density parameter of clustering analysis, clustering geographic positions in the geographic position list according to the density parameter and a preset clustering algorithm, and acquiring a plurality of geographic position sets;
counting the collected first panoramic data by taking the geographic position sets as a reference to obtain first panoramic data corresponding to each geographic position set;
and according to a preset first feature extraction algorithm, acquiring first features corresponding to each geographic position set from the corresponding first panoramic data, and forming the first feature set from the first features corresponding to the multiple geographic position sets.
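The density-based grouping of recorded positions in claim 6 resembles DBSCAN-style clustering. The claim only requires "a preset clustering algorithm" with a density parameter, so the sketch below is one illustrative choice, with `eps` and `min_pts` standing in for the claim's density parameter and the coordinates being hypothetical.

```python
def neighbors(points, i, eps):
    # Indices of all points within Euclidean distance eps of points[i].
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (qx - px) ** 2 + (qy - py) ** 2 <= eps ** 2]

def density_cluster(points, eps, min_pts):
    # Minimal DBSCAN-style clustering: returns one cluster label per
    # point, with -1 marking noise.
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbors(points, i, eps)
        if len(seed) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        cluster += 1                     # new geographic position set
        labels[i] = cluster
        queue = [j for j in seed if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:          # noise reached from a core point
                labels[j] = cluster      # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            reach = neighbors(points, j, eps)
            if len(reach) >= min_pts:    # core point: expand the cluster
                queue.extend(reach)
    return labels
```

Two tight groups of GPS-like points then yield two geographic position sets, over which the first panoramic data can be aggregated.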
7. The feature extraction method according to claim 1 or 2, wherein the step of acquiring the second panoramic data based on the time information comprises:
acquiring second panoramic data in a time interval according to a preset frequency, and recording time information of each piece of second panoramic data;
dividing the time interval into a plurality of time segments;
counting the second panoramic data in the time interval by taking the time segments as a reference to obtain a second panoramic data sequence corresponding to each time segment;
and extracting time domain features from the second panoramic data sequences corresponding to the time segments according to a time domain feature extraction algorithm, and forming the second feature set from the time domain features corresponding to the time segments.
8. The feature extraction method of claim 7, wherein the step of dividing the time interval into a plurality of time segments comprises:
calculating the total information entropy of the time interval according to the second panoramic data in the time interval;
dividing the time interval into two time segments at a candidate division point within a preset value range, and calculating the weighted average information entropy of the two divided time segments;
determining the time division point that maximizes the difference between the weighted average information entropy and the total information entropy;
dividing the time interval into a first time segment and a second time segment at the determined time division point;
and taking the first time segment and the second time segment as new time intervals, and repeating the steps from calculating the total information entropy of a time interval to dividing it at the determined time division point, until the number of time segments is greater than a preset threshold value.
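The entropy-guided division in claim 8 is essentially a recursive information-gain split, similar to decision-tree discretization. Below is a minimal sketch of one split step, under the assumption that the second panoramic data reduce to categorical labels ordered by time; the sample values are hypothetical.

```python
import math
from collections import Counter

def entropy(samples):
    # Shannon entropy (in bits) of a categorical sample sequence.
    n = len(samples)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(samples).values())

def best_split(samples):
    # Claim 8's criterion: pick the division point whose weighted
    # average entropy differs most from the total entropy, i.e. the
    # point of maximum information gain.
    n = len(samples)
    total = entropy(samples)
    best_t, best_gain = None, -1.0
    for t in range(1, n):
        left, right = samples[:t], samples[t:]
        weighted = (t * entropy(left) + (n - t) * entropy(right)) / n
        gain = total - weighted
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```

Recursing on the two resulting segments until the segment count exceeds the preset threshold completes the procedure described in the claim.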
9. A feature extraction device characterized by comprising:
the first data acquisition module is used for acquiring first panoramic data based on the geographic position;
the first feature extraction module is used for generating a first feature set according to a preset first feature extraction algorithm and the first panoramic data;
the second data acquisition module is used for acquiring second panoramic data based on the time information;
the second feature extraction module is used for generating a second feature set according to a preset second feature extraction algorithm and the second panoramic data;
and the feature fusion module is used for performing cross fusion on the first feature set and the second feature set to generate a cross feature set based on the geographic position and the time information.
10. The feature extraction apparatus of claim 9, wherein the feature fusion module is further configured to: and calculating Cartesian products of the first feature set and the second feature set, and taking the Cartesian products as the cross feature set.
11. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the feature extraction method according to any one of claims 1 to 8.
12. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the feature extraction method according to any one of claims 1 to 8 by calling the computer program.
CN201910282476.1A 2019-04-09 2019-04-09 Feature extraction method and device, storage medium and electronic equipment Pending CN111797261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282476.1A CN111797261A (en) 2019-04-09 2019-04-09 Feature extraction method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111797261A true CN111797261A (en) 2020-10-20

Family

ID=72805777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282476.1A Pending CN111797261A (en) 2019-04-09 2019-04-09 Feature extraction method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111797261A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214505A (en) * 2020-10-21 2021-01-12 北京金堤征信服务有限公司 Data synchronization method and device, computer readable storage medium and electronic equipment
CN112380215A (en) * 2020-11-17 2021-02-19 北京融七牛信息技术有限公司 Automatic feature generation method based on cross aggregation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366250A (en) * 2013-07-12 2013-10-23 中国科学院深圳先进技术研究院 City appearance environment detection method and system based on three-dimensional live-action data
CN104252498A * 2013-06-28 2014-12-31 SAP SE Context sensing recommendation
US20170286845A1 (en) * 2016-04-01 2017-10-05 International Business Machines Corporation Automatic extraction of user mobility behaviors and interaction preferences using spatio-temporal data
CN109040595A (en) * 2018-08-27 2018-12-18 百度在线网络技术(北京)有限公司 History panorama processing method, device, equipment and storage medium based on AR

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Shengnan: "Research on a Mobile Recommendation Model Based on Spatio-Temporal Context Awareness", CNKI Outstanding Master's Theses Full-text Database, 15 March 2017 (2017-03-15), pages 30-35 *

Similar Documents

Publication Publication Date Title
CN111797858A (en) Model training method, behavior prediction method, device, storage medium and equipment
CN111800445B (en) Message pushing method and device, storage medium and electronic equipment
CN111800331A (en) Notification message pushing method and device, storage medium and electronic equipment
CN111798260A (en) User behavior prediction model construction method and device, storage medium and electronic equipment
CN111797861A (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN111797288A (en) Data screening method and device, storage medium and electronic equipment
CN111814475A (en) User portrait construction method and device, storage medium and electronic equipment
CN111796925A (en) Method and device for screening algorithm model, storage medium and electronic equipment
CN111797849B (en) User activity recognition method and device, storage medium and electronic equipment
CN111797854A (en) Scene model establishing method and device, storage medium and electronic equipment
CN111797851A (en) Feature extraction method and device, storage medium and electronic equipment
CN111797261A (en) Feature extraction method and device, storage medium and electronic equipment
CN111797867A (en) System resource optimization method and device, storage medium and electronic equipment
CN111797079A (en) Data processing method, data processing device, storage medium and electronic equipment
WO2020207297A1 (en) Information processing method, storage medium, and electronic device
CN111797856B (en) Modeling method and device, storage medium and electronic equipment
CN111797874B (en) Behavior prediction method and device, storage medium and electronic equipment
CN111798019B (en) Intention prediction method, intention prediction device, storage medium and electronic equipment
CN111814812A (en) Modeling method, modeling device, storage medium, electronic device and scene recognition method
CN111797127B (en) Time sequence data segmentation method and device, storage medium and electronic equipment
CN111797860B (en) Feature extraction method and device, storage medium and electronic equipment
CN111797878B (en) Data processing method and device, storage medium and electronic equipment
CN111797877B (en) Data processing method and device, storage medium and electronic equipment
CN111797880A (en) Data processing method, data processing device, storage medium and electronic equipment
CN111796916A (en) Data distribution method, device, storage medium and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination