CN111401393B - Data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111401393B
Authority
CN
China
Prior art keywords
data
weight
sample data
sample
monitoring data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910000534.7A
Other languages
Chinese (zh)
Other versions
CN111401393A (en)
Inventor
Li Yang (李杨)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN201910000534.7A
Publication of CN111401393A
Application granted
Publication of CN111401393B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis

Abstract

The embodiment of the invention provides a data processing method and device, electronic equipment and a storage medium. The data processing method comprises the following steps: acquiring first monitoring data of a monitored object; obtaining a first weight based on the first monitoring data and the first sample data; determining a second weight based on a constraint condition satisfied by the first weight and the second weight; determining second monitoring data based on second weight and second sample data, wherein the second sample data is dimension reduction data of the first sample data; the second monitoring data is dimension reduction data of the first monitoring data.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of information technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
Data dimensionality reduction converts high-dimensional data into low-dimensional data through data processing, which reduces the volume of each piece of data, reduces the amount of computation required, and improves processing efficiency. On the one hand, data dimension reduction is accompanied by some information loss, and the question is how to reduce that loss so that the reduced data can still provide the required information. On the other hand, in the related art, deep learning models such as neural networks are used for data dimension reduction; training such models introduces a large amount of computation, so even though the data volume after dimension reduction is smaller, the computation introduced by the dimension reduction itself means that the reduction of the total amount of computation is not obvious, that is, the dimension reduction effect is not obvious.
Disclosure of Invention
The embodiment of the invention provides a data processing method and device, electronic equipment and a storage medium.
The technical scheme of the invention is realized as follows:
a method of data processing, comprising:
acquiring first monitoring data of a monitored object;
obtaining a first weight based on the first monitoring data and the first sample data;
determining a second weight based on a constraint condition satisfied by the first weight and the second weight;
determining second monitoring data based on second weight and second sample data, wherein the second sample data is dimension reduction data of the first sample data; the second monitoring data is dimension reduction data of the first monitoring data.
Based on the above scheme, obtaining the first weight based on the first monitoring data and the first sample data includes:
obtaining a first weight located in the value range according to the tuning parameter.
Based on the above scheme, the first weight located in the value range satisfies the following functional relationship:
$$-\sum_{j=1}^{n} p_{j|i}\,\log_2 p_{j|i} = H$$

wherein p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; n is the total number of the first sample data; H is a preset constraint parameter.
Based on the above scheme, the value of H is between 1 and 7.
Based on the above scheme, obtaining the first weight located in the value range according to the tuning parameter includes:
$$p_{j|i} = \frac{\exp\left(-\lVert A_i - X_j\rVert^2 / 2\sigma_i^2\right)}{\sum_{k \in N_i} \exp\left(-\lVert A_i - X_k\rVert^2 / 2\sigma_i^2\right)}$$

wherein p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; σ_i is the tuning parameter; A_i is the ith first monitoring data; N_i is the set of the K first sample data closest to A_i; X_j and X_k are the jth and kth first sample data.
Based on the above scheme, the determining the second weight based on the constraint condition satisfied by the first weight and the second weight includes:
determining the second weight by adopting the following functional relation;
$$\min_{B_i \in \mathbb{R}^{R_w}} \; \sum_{j=1}^{n} p_{j|i}\,\log\frac{p_{j|i}}{q_{j|i}}$$

where R_w is the data dimension of the second monitoring data and the second sample data; p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; q_{j|i} is the second weight; n is the total number of the first sample data; B_i is the ith second monitoring data.
Based on the above scheme, the determining second monitoring data based on the second weight and the second sample data includes:
determining the second monitoring data based on the second weight and the second sample data according to the following functional relationship;
$$q_{j|i} = \frac{\left(1+\lVert B_i - Y_j\rVert^2\right)^{-1}}{\sum_{k=1}^{n}\left(1+\lVert B_i - Y_k\rVert^2\right)^{-1}}$$

where q_{j|i} is the second weight from the ith first monitoring data to the jth second sample data; Y_j and Y_k are the jth and kth second sample data; n is the total number of the second sample data; B_i is the ith second monitoring data.
Based on the above scheme, the method further comprises:
and inputting the second monitoring data into a trained deep learning model to obtain the identification information of the second monitoring data, wherein the deep learning model is obtained by training with the second sample data as a sample.
A data processing apparatus comprising:
the first monitoring data module is used for acquiring first monitoring data of a monitored object;
the first weight module is used for obtaining a first weight based on the first monitoring data and the first sample data;
the second weight module is used for determining a second weight based on the constraint condition met by the first weight and the second weight;
the second monitoring data module is used for determining second monitoring data based on second weight and second sample data, wherein the second sample data is dimension reduction data of the first sample data; the second monitoring data is dimension reduction data of the first monitoring data.
An electronic device, comprising:
a memory for storing a plurality of data files to be transmitted,
and the processor is connected with the memory and used for realizing the data processing method provided by one or more of the technical schemes by executing the computer executable instructions on the memory.
A computer storage medium, where computer-executable instructions are stored, and the computer-executable instructions are executed by a processor, so as to implement a data processing method provided by one or more of the foregoing technical solutions.
According to the technical solutions provided by the embodiments of the invention, after the first monitoring data of the monitored object are acquired, a first weight is calculated from the first monitoring data and the first sample data in the sample data set, a second weight is then obtained based on the constraint condition satisfied by the first weight and the second weight, and finally the dimension-reduced second monitoring data are obtained by combining the second weight with the second sample data, i.e. the dimension-reduced first sample data. Because the dimension reduction refers to both the first sample data and the second sample data obtained by reducing their dimension, the loss of the information that needs to be retained is reduced and the dimension-reduction effect is improved.
Drawings
Fig. 1 is a schematic flow chart of a first data processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a second data processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an effect of data dimensionality reduction according to an embodiment of the present invention;
FIG. 5 is a block diagram of another data processing apparatus according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating a third data processing method according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
As shown in fig. 1, the present embodiment provides a data processing method, including:
step S110: acquiring first monitoring data of a monitored object;
step S120: obtaining a first weight based on the first monitoring data and the first sample data;
step S130: determining a second weight based on a constraint condition satisfied by the first weight and the second weight;
step S140: determining second monitoring data based on second weight and second sample data, wherein the second sample data is dimension reduction data of the first sample data; the second monitoring data is dimension reduction data of the first monitoring data.
In this embodiment, first monitoring data of a monitored object is obtained, where the first monitoring data may be raw data before dimensionality reduction, and may be from a monitoring device that monitors the monitored object, for example, a sensor worn by the monitored object. If the monitored object is a movable object, the first monitoring data can be motion data detected by a motion sensor worn by the monitored object. For example, the motion data includes: acceleration data detected by the acceleration sensor, and the like. If the monitored object is a living body, the first monitoring data can be a physiological signal detected by a vital sign sensor worn by the monitored object. The physiological signal may include: heartbeat signal or respiration signal, etc. In some embodiments, the first monitoring data may further include: monitoring network behavior data of a user, the network behavior data comprising: the data accompanying various operations when the user uses the network, such as social behavior data generated by using a social application, network shopping behavior data by using shopping software, and network reading behavior data generated by using a reading application. Of course, the above is only an example of the first monitoring data, and the specific implementation is not limited to any one of the above examples.
In this embodiment, the first monitored data has a higher data dimension than the second monitored data generated by dimension reduction. For example, the data dimension of the first monitored data is a first dimension, the data dimension of the second monitored data is a second dimension, and the first dimension is greater than the second dimension. If the first dimension is J, one piece of the first monitoring data comprises J elements.
In this embodiment, the first sample data and the first monitoring data have the same data dimension, and the second sample data and the second monitoring data have the same data dimension.
The second sample data is dimension reduction data of the first sample data, and thus, for example, dimension reduction operation is performed on the first sample data to obtain the second sample data. The dimension reduction operation includes, but is not limited to, at least one of:
a stochastic neighbor embedding (SNE) algorithm;
a t-distributed stochastic neighbor embedding (tSNE) algorithm;
a principal component analysis (PCA) algorithm.
And obtaining second sample data after the dimension reduction processing is carried out on the first sample data through a dimension reduction algorithm.
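Purely as an illustration (not part of the claimed method), the second sample data could be generated from the first sample data with an off-the-shelf tSNE implementation; the sketch below assumes scikit-learn and uses hypothetical array names and shapes.

```python
# Illustrative sketch only: producing second sample data (dimension-reduced
# sample data) from first sample data with an off-the-shelf tSNE implementation.
# Array names and shapes are hypothetical.
import numpy as np
from sklearn.manifold import TSNE

first_sample_data = np.random.rand(500, 60)   # n pieces of high-dimensional sample data

# Map to a 2-dimensional space (R_w = 2), as in the example embodiment.
tsne = TSNE(n_components=2, perplexity=30.0, init="pca", random_state=0)
second_sample_data = tsne.fit_transform(first_sample_data)   # shape (500, 2)
```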
In step S120 of this embodiment, a first weight is obtained based on the first monitoring data and the first sample data; this weight may be regarded as a reference variable for the dimension reduction. Specifically, the first weight is determined based on a distance between the first monitoring data and the first sample data, which may be referred to as the first distance. For example, the first weight is positively correlated with the first distance; that is, the greater the distance between the first monitoring data and the first sample data, the greater the difference between them and the greater the first weight. In some embodiments, the first distance may be obtained by treating the first monitoring data and the first sample data as vectors and computing the distance from dot products of the vectors. The first distance may be a Euclidean distance.
In this embodiment, the second weight may be another reference variable calculated based on the second monitoring data and the second sample data. Likewise, the second weight may be positively correlated with a distance between the second monitoring data and the second sample data, which may be referred to as the second distance. The second distance may be derived in a manner similar to the first distance.
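A minimal sketch, assuming NumPy and using hypothetical function and variable names, of computing such a distance as a Euclidean distance expanded through dot products of the two vectors:

```python
import numpy as np

def first_distance(monitoring_vec: np.ndarray, sample_vec: np.ndarray) -> float:
    """Euclidean (first) distance between one piece of first monitoring data
    and one piece of first sample data, expanded through dot products:
    ||a - x||^2 = a.a - 2*a.x + x.x"""
    sq = (monitoring_vec @ monitoring_vec
          - 2.0 * (monitoring_vec @ sample_vec)
          + sample_vec @ sample_vec)
    return float(np.sqrt(max(sq, 0.0)))
```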
In some embodiments, the first functional relationship for calculating the first weight based on the first monitored data and the first sample data may be the same as the second functional relationship for calculating the second weight based on the second monitored data and the second sample data. However, in some embodiments, the first functional relationship and the second functional relationship may be different in view of dimensional differences of the first monitored data and the second monitored data.
However, in the present embodiment, the first weight and the second weight satisfy a predetermined constraint condition, and when the first weight is found, the second weight can be found based on the known constraint condition.
After the second weight is obtained, the second weight and the second sample data are taken as known quantities and substituted into the second functional relationship to obtain the second monitoring data; in this way the second monitoring data, i.e. the dimension-reduced form of the first monitoring data, is derived in reverse. In this embodiment, obtaining the dimension reduction data (the second monitoring data) of the first monitoring data is no longer tied to deep learning models such as neural networks, which avoids the computation generated by model training; the intermediate variables involved are only the first weight and the second weight, so the computation is small and efficient. Because the dimension reduction process refers to the first sample data and to the second sample data obtained by reducing the dimension of the first sample data, the loss of the information that needs to be retained is reduced, and the information loss of the dimension-reduced second monitoring data is kept small.
In some embodiments, the method further comprises:
step S150: and inputting the second monitoring data into a trained deep learning model to obtain the identification information of the second monitoring data, wherein the deep learning model is trained by the second sample data.
In this embodiment, the second sample data is the sample data used for training the deep learning model. A deep learning model trained with the second sample data therefore has a high recognition accuracy for the second monitoring data, because the second monitoring data is generated by dimensionality reduction with reference to the second sample data, and the required output result, for example a classification result and/or a clustering result, can be obtained well from the second monitoring data.
For example, if the first monitoring data is motion data, the second monitoring data is also motion data. A deep learning model trained with the second sample data (which is also motion data) can then, by processing the second monitoring data, recognize results such as the motion type and/or motion amplitude represented by the first monitoring data.
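As an illustrative sketch only: the example embodiment described later trains a KNN classifier in the low-dimensional space, so a KNN model (scikit-learn assumed, hypothetical array names and values) stands in here for the trained deep learning model mentioned above.

```python
# Illustrative sketch: recognising the motion type of dimension-reduced
# monitoring data with a classifier trained on the second sample data.
# The embodiment above mentions a trained deep learning model; a KNN
# classifier (as in the example embodiment below) is used here for brevity.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

second_sample_data = np.random.rand(500, 2)          # reduced sample data (hypothetical)
sample_labels = np.random.randint(0, 10, size=500)   # motion types of the samples

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(second_sample_data, sample_labels)

b_i = np.random.rand(1, 2)            # one piece of second monitoring data (reduced new data)
predicted_motion = knn.predict(b_i)   # identification information for b_i
```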
In some embodiments, the step S120 may include: obtaining a first weight located in a value range according to the tuning parameter; for example, obtaining a first weight located in the value range according to the tuning parameter and a first distance between the first monitoring data and the first sample data.
The first distance may be used to characterize the degree of difference or similarity between the first monitoring data and the first sample data. The tuning parameter may be preset or calculated. By introducing the tuning parameter, the value of the first weight is neither too large nor too small but falls within a preset value range, which facilitates the subsequent processing of the second weight and the second monitoring data.
In some embodiments, the first weight located in the value range satisfies the following functional relationship:
$$-\sum_{j=1}^{n} p_{j|i}\,\log_2 p_{j|i} = H$$

wherein p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; n is the total number of the first sample data; H is a preset constraint parameter.
In this embodiment, H may be predetermined; for example, H takes a value between 1 and 7. In still other embodiments, H may take a value between 1 and 5, or between 1 and 4. In some embodiments, H may also be any positive number. By setting the value range of p_{j|i}, the value range of the tuning parameter can be deduced in reverse. For example, in some embodiments, one or more pieces of the first monitoring data may be taken to determine the tuning parameter; once the tuning parameter is determined, it may be used for calculating the first weights of all the first monitoring data.
In some embodiments, the obtaining a first weight located in a value range according to the tuning parameter and the first distance includes:
$$p_{j|i} = \frac{\exp\left(-\lVert A_i - X_j\rVert^2 / 2\sigma_i^2\right)}{\sum_{k \in N_i} \exp\left(-\lVert A_i - X_k\rVert^2 / 2\sigma_i^2\right)}$$

where σ_i is the tuning parameter; A_i is the ith first monitoring data; N_i is the set of the K first sample data closest to A_i; X_j and X_k are the jth and kth first sample data.
In the embodiments of the present invention, the mathematical symbol ‖·‖ in the functional relationships denotes the modulus (norm) of the difference between two vectors; its square can be taken as the distance between the two vectors.
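A minimal sketch, assuming NumPy and using hypothetical function and variable names, of how the first weights p_{j|i} might be computed over the K nearest first sample data while the tuning parameter σ_i is chosen by binary search so that the entropy of p_{·|i} matches the preset constraint parameter H:

```python
import numpy as np

def first_weights(a_i: np.ndarray, samples: np.ndarray, K: int = 30,
                  H: float = 3.0, tol: float = 1e-4, max_iter: int = 50):
    """Compute p_{j|i} for one piece of first monitoring data a_i against the
    first sample data `samples` (shape (n, d)). Only the K nearest samples
    (the set N_i) receive non-zero weight; sigma_i is tuned by binary search
    so that -sum_j p_{j|i} log2 p_{j|i} ~= H (the preset constraint parameter)."""
    d2 = np.sum((samples - a_i) ** 2, axis=1)   # squared first distances
    neighbours = np.argsort(d2)[:K]             # indices of N_i
    d2_nn = d2[neighbours]

    lo, hi = 1e-6, 1e6                          # search range for sigma_i
    p_nn = np.full(len(neighbours), 1.0 / len(neighbours))
    for _ in range(max_iter):
        sigma = 0.5 * (lo + hi)
        w = np.exp(-d2_nn / (2.0 * sigma ** 2))
        p_nn = w / np.sum(w)
        entropy = -np.sum(p_nn * np.log2(p_nn + 1e-12))
        if abs(entropy - H) < tol:
            break
        if entropy > H:     # weights too flat -> shrink sigma
            hi = sigma
        else:               # weights too peaked -> grow sigma
            lo = sigma

    p = np.zeros(samples.shape[0])
    p[neighbours] = p_nn
    return p, neighbours
```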
In some embodiments, the step S130 may include:
determining the second weight by adopting the following functional relation;
$$\min_{B_i \in \mathbb{R}^{R_w}} \; \sum_{j=1}^{n} p_{j|i}\,\log\frac{p_{j|i}}{q_{j|i}}$$

where R_w is the data dimension of the second monitoring data and the second sample data; p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; q_{j|i} is the second weight; n is the total number of the first sample data; B_i is the ith second monitoring data.
R_w may be 2 or 3, for example: if the second monitoring data needs to be mapped to a two-dimensional space after dimension reduction, R_w is 2; if the second monitoring data needs to be mapped to a three-dimensional space, R_w is 3.
The second weight can be found out through the constraint condition. In some embodiments, the second weight may also be obtained from the following functional relationship, in which the parameters have the same meanings as in the preceding functional relationships:

$$q_{j|i} = \frac{\left(1+\lVert B_i - Y_j\rVert^2\right)^{-1}}{\sum_{k=1}^{n}\left(1+\lVert B_i - Y_k\rVert^2\right)^{-1}}$$
The step S140 may include:
determining the second monitoring data based on the second weight and the second sample data according to the following functional relationship;
$$q_{j|i} = \frac{\left(1+\lVert B_i - Y_j\rVert^2\right)^{-1}}{\sum_{k=1}^{n}\left(1+\lVert B_i - Y_k\rVert^2\right)^{-1}}$$

where q_{j|i} is the second weight from the ith first monitoring data to the jth second sample data; Y_j and Y_k are the jth and kth second sample data; n is the total number of the second sample data; B_i is the ith second monitoring data.
With q_{j|i}, Y_j and Y_k known, B_i can be solved for directly, thereby obtaining B_i, the dimension reduction data of A_i.
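Continuing the sketch above (NumPy assumed; the analytic gradient and all names are this sketch's own assumptions, not taken from the patent), B_i could be obtained by gradient descent on the objective of the preceding functional relationships, with the second weights q_{j|i} recomputed at each step:

```python
import numpy as np

def embed_new_point(p: np.ndarray, Y: np.ndarray, lr: float = 0.1,
                    n_iter: int = 500) -> np.ndarray:
    """Find B_i, the dimension-reduced counterpart of one piece of first
    monitoring data, by gradient descent on sum_j p_{j|i} * log(p_{j|i}/q_{j|i}),
    with q_{j|i} = (1 + ||B_i - Y_j||^2)^-1 / sum_k (1 + ||B_i - Y_k||^2)^-1.
    `p` holds the first weights (length n); `Y` is the second sample data (n, R_w)."""
    b = np.average(Y, axis=0, weights=p)            # warm start near the weighted mean
    for _ in range(n_iter):
        diff = b - Y                                # (n, R_w)
        w = 1.0 / (1.0 + np.sum(diff ** 2, axis=1)) # Student-t kernel values
        q = w / np.sum(w)                           # second weights q_{j|i}
        grad = 2.0 * ((p - q) * w) @ diff           # gradient of the KL objective w.r.t. B_i
        b -= lr * grad
    return b

# Usage (hypothetical arrays): B_i = embed_new_point(p, second_sample_data)
```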
As shown in fig. 3, the present embodiment provides a data processing apparatus including:
a first monitoring data module 110, configured to obtain first monitoring data of a monitored object;
a first weighting module 120, configured to obtain a first weight based on the first monitoring data and the first sample data;
a second weight module 130, configured to determine a second weight based on a constraint condition that is satisfied by the first weight and the second weight;
a second monitoring data module 140, configured to determine second monitoring data based on a second weight and second sample data, where the second sample data is dimension reduction data of the first sample data and a preset functional relationship characterizes the association between the second weight and the second distance between the second monitoring data and the second sample data; the second monitoring data is dimension reduction data of the first monitoring data.
In some embodiments, the first monitoring data module 110, the first weighting module 120, the second weighting module 130, and the second monitoring data module 140 may be program modules, which can be executed by a processor to implement the functions of the above modules.
In other embodiments, the first monitoring data module 110, the first weighting module 120, the second weighting module 130, and the second monitoring data module 140 may be a combination of hardware and software modules, which may include various types of programmable arrays; the programmable array includes, but is not limited to, a field programmable array or a complex programmable array.
In still other embodiments, the first monitor data module 110, the first weight module 120, the second weight module 130, and the second monitor data module 140 may be pure hardware modules, such as an application specific integrated circuit.
In some embodiments, the first weight module 120 is specifically configured to obtain a first weight located in a value range according to the tuning parameter.
In some embodiments, the first weight located in the value range satisfies the following functional relationship:
$$-\sum_{j=1}^{n} p_{j|i}\,\log_2 p_{j|i} = H$$

wherein p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; n is the total number of the first sample data; H is a preset constraint parameter.
In some embodiments, H is between 1 and 7.
In some embodiments, the first weight module 120 is specifically configured to obtain the first weight located in the value range according to the following functional relationship and the tuning parameter:
$$p_{j|i} = \frac{\exp\left(-\lVert A_i - X_j\rVert^2 / 2\sigma_i^2\right)}{\sum_{k \in N_i} \exp\left(-\lVert A_i - X_k\rVert^2 / 2\sigma_i^2\right)}$$

where σ_i is the tuning parameter; A_i is the ith first monitoring data; N_i is the set of the K first sample data closest to A_i; X_j and X_k are the jth and kth first sample data.
In some embodiments, the second weight module 130 is specifically configured to determine the second weight by using the following functional relationship;
$$\min_{B_i \in \mathbb{R}^{R_w}} \; \sum_{j=1}^{n} p_{j|i}\,\log\frac{p_{j|i}}{q_{j|i}}$$

where R_w is the data dimension of the second monitoring data and the second sample data; p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; q_{j|i} is the second weight; n is the total number of the first sample data; B_i is the ith second monitoring data.
In some further embodiments, the second monitoring data module 140 is specifically configured to determine the second monitoring data based on the second weight and the second sample data according to the following functional relationship;
$$q_{j|i} = \frac{\left(1+\lVert B_i - Y_j\rVert^2\right)^{-1}}{\sum_{k=1}^{n}\left(1+\lVert B_i - Y_k\rVert^2\right)^{-1}}$$

where q_{j|i} is the second weight from the ith first monitoring data to the jth second sample data; Y_j and Y_k are the jth and kth second sample data; n is the total number of the second sample data; B_i is the ith second monitoring data.
In still other embodiments, the apparatus further comprises:
and the identification information module is used for inputting the second monitoring data into a trained deep learning model to obtain identification information of the second monitoring data, wherein the deep learning model is obtained by training by taking the second sample data as a sample.
Several specific examples are provided below in connection with any of the embodiments described above:
the invention realizes action recognition and motion data processing and analysis based on the motion sensors worn on each part of the monitored object. Firstly, wearing various motion sensors in advance, collecting data under specified motion, and correspondingly marking the motion types of the data; reducing the dimension of the data by utilizing a tSNE algorithm in manifold learning, and storing the obtained sample data after dimension reduction; when the action is identified, corresponding action sensors are worn at the same positions, collected action data are subjected to data dimension reduction processing in a time sequence mode, and the data subjected to dimension reduction are classified to realize the action identification. In addition, the obtained dimension reduction data can be collected and analyzed, and statistics of the motion rules can be further achieved.
The present example provides an action recognition algorithm based on stream data dimension reduction, which may include the following steps:
s1, sample data collection and dimension reduction data generation: collecting data collected by all wearing sensors under a specified action (such as standing, walking, sliding and sitting), and reducing the dimension of the sample data (the dimension of the data after dimension reduction is 2 dimensions) by selecting appropriate parameters by using a tSNE algorithm in manifold learning, so that the sample data after dimension reduction is separated as far as possible according to different action categories (or various clustering measurement indexes such as ARI and the like are adopted);
s2, reducing the dimension of the action data and identifying the action: after new motion data are collected, an algorithm for realizing data dimension reduction based on a neighbor manifold learning algorithm is provided, original motion data are mapped into 2-dimensional data, dimension reduction results of sample data are combined in a 2-dimensional plane, and action recognition is realized by using KNN;
s3, analyzing motion data: in general, due to the fact that the dimension of the collected motion data is very high, if the rule that the data cannot be observed visually is not processed, the data dimension reduction is adopted to present the high-dimensional motion data in a 2-dimensional plane, and visual analysis is facilitated.
In step S1, the collected original sample data set is denoted X = {X_1, ..., X_n}, with the corresponding motion labels L = {l_1, ..., l_n}. The data after tSNE dimension reduction are denoted Y = {Y_1, ..., Y_n}; the data dimension reduction in step S2 is performed on the basis of these data.
In step S2, the newly collected motion data are denoted A_i. The dimension-reduction algorithm in the invention is realized on the basis of the tSNE dimension-reduction result; other dimension-reduction results may be substituted if they perform better. The weights p_{j|i} from A_i to the points in the sample data set X are calculated as:
$$p_{j|i} = \frac{\exp\left(-\lVert A_i - X_j\rVert^2 / 2\sigma_i^2\right)}{\sum_{k \in N_i} \exp\left(-\lVert A_i - X_k\rVert^2 / 2\sigma_i^2\right)}$$

where p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data, and N_i is the set of the K elements of the sample data set closest to A_i. In addition, a positive number H (generally between 1 and 5) is determined in advance, and an appropriate σ_i is selected such that

$$-\sum_{j=1}^{n} p_{j|i}\,\log_2 p_{j|i} = H.$$
After p_{j|i} is obtained by calculation, the dimension-reduction result of the new data A_i in the low-dimensional space is denoted B_i, and the weights q_{j|i} from B_i to the points Y_j are defined as

$$q_{j|i} = \frac{\left(1+\lVert B_i - Y_j\rVert^2\right)^{-1}}{\sum_{k=1}^{n}\left(1+\lVert B_i - Y_k\rVert^2\right)^{-1}}$$
To obtain the dimension-reduced data B_i, the following objective function is constructed:

$$\min_{B_i \in \mathbb{R}^{R_w}} \; \sum_{j=1}^{n} p_{j|i}\,\log\frac{p_{j|i}}{q_{j|i}}$$
the newly collected motion data A can be obtained by solving the objective function i The result after dimensionality reduction. FIG. 4 is a schematic diagram of reduced data contained in a two-dimensional plane. Reference numerals 0 to 9 in fig. 4 denote data of 10 different motion types.
Because each piece of sample data corresponds to an action type, a KNN classifier is trained in the low-dimensional space and then used to infer on B_i, the dimension-reduced new data, which realizes action recognition while greatly reducing the computational cost. In addition, in the data-analysis step S3, a two-dimensional histogram can be drawn from the dimension-reduced data to analyse the distribution rules of the various actions.
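A short sketch of the analysis part of step S3, assuming matplotlib and hypothetical data: drawing a two-dimensional histogram of the dimension-reduced motion data to inspect how the actions are distributed.

```python
import numpy as np
import matplotlib.pyplot as plt

# B: dimension-reduced motion data (m, 2) accumulated over time (hypothetical values)
B = np.random.randn(1000, 2)

# Two-dimensional histogram of the reduced data, used to analyse the
# distribution rules of the various actions as described in step S3.
plt.hist2d(B[:, 0], B[:, 1], bins=40)
plt.colorbar(label="count")
plt.xlabel("reduced dimension 1")
plt.ylabel("reduced dimension 2")
plt.title("Distribution of dimension-reduced motion data")
plt.show()
```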
As shown in fig. 5, the present example provides another data processing apparatus including:
the storage module can be used for storing the first sample data and the second sample data;
a sensor module, configured to acquire the first monitoring data, for example, including motion data before dimension reduction;
and the data dimension reduction module is used for carrying out dimension reduction processing on the original high-dimensional data to obtain low-dimensional data.
The motion identification module may be configured to identify the data after dimensionality reduction, for example, identify a motion type of the motion data after dimensionality reduction;
and the action analysis module can be used for carrying out data analysis according to the data with the reduced dimensionality and/or analyzing the identification result output by the action identification module to obtain a required analysis result.
As shown in fig. 6, the present example provides a data processing method based on the data processing apparatus shown in fig. 5, and may include:
the sensor module collects motion data;
collecting a sample data set (a data set comprising the first sample data and the second sample data) for data dimension reduction;
performing inference and action recognition by using an algorithm in a low-dimensional space to obtain a recognition result;
and performing action analysis by combining the existing data. Here, the existing data includes: data in the sample data set and/or second monitoring data after dimensionality reduction.
The embodiment provides a method and a system for action recognition and analysis based on stream-data dimension reduction. The action-recognition algorithm greatly improves the accuracy of action recognition, and, while realizing action recognition, the dimension-reduction processing of the original data also makes data analysis more intuitive and effective. First, the invention provides a dimension-reduction algorithm for high-dimensional data that differs clearly from prior patents, improves the dimension-reduction effect, and is also suitable for stream-data scenarios; in addition, the action-recognition problem is solved on the basis of this dimension-reduction algorithm, and the motion patterns can be analysed from the dimension-reduced results. The above are the points to be protected and the key points of this method.
The embodiment of the invention also provides a computer storage medium, wherein the computer storage medium stores computer executable instructions; after being executed, the computer-executable instructions can implement the data processing method provided by one or more of the technical schemes; for example, as shown in fig. 1-2 and 6. The computer storage medium may be a non-transitory storage medium.
The embodiment further provides an electronic device, capable of performing the data processing method provided by any of the foregoing technical solutions, and including:
a memory for storing information;
a processor, connected with the memory, and configured to implement the data processing method provided by one or more of the foregoing technical solutions by executing computer-executable instructions located on the memory; for example, the methods shown in fig. 1-2 and 6.
The communication interface can be various types of network interfaces and can be used for receiving and sending information.
The memory can be various types of memories, such as random access memory, read only memory, flash memory, and the like. The memory may be used for information storage, e.g., storing computer-executable instructions, etc. The computer-executable instructions may be various program instructions, such as object program instructions and/or source program instructions, and the like.
The processor may be various types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
The processor may be connected to the memory via a bus. The bus may be an integrated circuit bus, or the like.
In some embodiments, the electronic device may further include: a communication interface, which may include: a network interface, e.g., a local area network interface, a transceiver antenna, etc. The communication interface is also connected with the processor and can be used for information transceiving.
In some embodiments, the electronic device may further include: a human-machine interaction interface, which may include: and a keyboard and/or a mouse and the like are convenient for a user to interact information with the electronic equipment.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a removable Memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A data processing method, comprising:
acquiring first monitoring data of a monitored object;
obtaining a first weight based on the first monitoring data and the first sample data;
determining a second weight based on a constraint condition satisfied by the first weight and the second weight;
determining second monitoring data based on second weight and second sample data, wherein the second sample data is dimension reduction data of the first sample data; the second monitoring data is dimension reduction data of the first monitoring data.
2. The method of claim 1,
obtaining a first weight based on the first monitoring data and the first sample data, including:
obtaining a first weight located in the value range according to the tuning parameter.
3. The method of claim 2,
the first weight in the value range satisfies the following functional relationship:
$$-\sum_{j=1}^{n} p_{j|i}\,\log_2 p_{j|i} = H$$

wherein p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; n is the total number of the first sample data; H is a preset constraint parameter.
4. The method of claim 3,
the value of H is between 1 and 7.
5. The method of claim 2,
the obtaining of the first weight located in the value range according to the tuning parameter includes:
$$p_{j|i} = \frac{\exp\left(-\lVert A_i - X_j\rVert^2 / 2\sigma_i^2\right)}{\sum_{k \in N_i} \exp\left(-\lVert A_i - X_k\rVert^2 / 2\sigma_i^2\right)}$$

wherein p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; σ_i is the tuning parameter; A_i is the ith first monitoring data; N_i is the set of the K first sample data closest to A_i; X_j and X_k are the jth and kth first sample data.
6. The method of claim 1, wherein determining the second weight based on a constraint satisfied by the first weight and the second weight comprises:
determining the second weight by adopting the following functional relation;
$$\min_{B_i \in \mathbb{R}^{R_w}} \; \sum_{j=1}^{n} p_{j|i}\,\log\frac{p_{j|i}}{q_{j|i}}$$

wherein R_w is the data dimension of the second monitoring data and the second sample data; p_{j|i} is the first weight from the ith first monitoring data to the jth first sample data; q_{j|i} is the second weight; n is the total number of the first sample data; B_i is the ith second monitoring data.
7. The method of claim 1,
the determining second monitoring data based on the second weight and the second sample data includes:
determining the second monitoring data based on the second weight and the second sample data according to the following functional relationship:
$$q_{j|i} = \frac{\left(1+\lVert B_i - Y_j\rVert^2\right)^{-1}}{\sum_{k=1}^{n}\left(1+\lVert B_i - Y_k\rVert^2\right)^{-1}}$$

wherein q_{j|i} is the second weight from the ith first monitoring data to the jth second sample data; Y_j and Y_k are the jth and kth second sample data; n is the total number of the second sample data; B_i is the ith second monitoring data.
8. The method of claim 1, further comprising:
and inputting the second monitoring data into a trained deep learning model to obtain the identification information of the second monitoring data, wherein the deep learning model is obtained by training with the second sample data as a sample.
9. A data processing apparatus, characterized by comprising:
the first monitoring data module is used for acquiring first monitoring data of a monitored object;
the first weight module is used for obtaining a first weight based on the first monitoring data and the first sample data;
the second weight module is used for determining a second weight based on the constraint condition met by the first weight and the second weight;
the second monitoring data module is used for determining second monitoring data based on second weight and second sample data, wherein the second sample data is dimension reduction data of the first sample data; the second monitoring data is dimension reduction data of the first monitoring data.
10. An electronic device, comprising:
a memory for storing a plurality of data files to be transmitted,
a processor coupled to the memory for implementing the method provided by any of claims 1 to 8 by executing computer-executable instructions located on the memory.
11. A computer storage medium storing computer-executable instructions capable, when executed by a processor, of implementing the method as claimed in any one of claims 1 to 8.
CN201910000534.7A 2019-01-02 2019-01-02 Data processing method and device, electronic equipment and storage medium Active CN111401393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910000534.7A CN111401393B (en) 2019-01-02 2019-01-02 Data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910000534.7A CN111401393B (en) 2019-01-02 2019-01-02 Data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111401393A CN111401393A (en) 2020-07-10
CN111401393B true CN111401393B (en) 2023-04-07

Family

ID=71413094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910000534.7A Active CN111401393B (en) 2019-01-02 2019-01-02 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111401393B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015158198A1 (en) * 2014-04-17 2015-10-22 北京泰乐德信息技术有限公司 Fault recognition method and system based on neural network self-learning
CN108596630A (en) * 2018-04-28 2018-09-28 招商银行股份有限公司 Fraudulent trading recognition methods, system and storage medium based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015158198A1 (en) * 2014-04-17 2015-10-22 北京泰乐德信息技术有限公司 Fault recognition method and system based on neural network self-learning
CN108596630A (en) * 2018-04-28 2018-09-28 招商银行股份有限公司 Fraudulent trading recognition methods, system and storage medium based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Semi-supervised classification based on sample category certainty; Gao Fei et al.; Journal of Beijing University of Aeronautics and Astronautics (Issue 09); full text *
Macro manifold learning and its application in supervised classification; Huang Hongbing; Remote Sensing Information (Issue 03); full text *

Also Published As

Publication number Publication date
CN111401393A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
Yger et al. Riemannian approaches in brain-computer interfaces: a review
CN111027487B (en) Behavior recognition system, method, medium and equipment based on multi-convolution kernel residual error network
CN110929622A (en) Video classification method, model training method, device, equipment and storage medium
CN103886341A (en) Gait behavior recognition method based on feature combination
CN104156715A (en) Terminal device and information acquisition method and device
Arablouei et al. Animal behavior classification via deep learning on embedded systems
CN115482498A (en) Intelligent old-age care monitoring system based on video and method thereof
CN111954250B (en) Lightweight Wi-Fi behavior sensing method and system
Savvaki et al. Matrix and tensor completion on a human activity recognition framework
CN116311539B (en) Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
Lederman et al. Alternating diffusion for common manifold learning with application to sleep stage assessment
CN111931616A (en) Emotion recognition method and system based on mobile intelligent terminal sensor equipment
CN112052816A (en) Human behavior prediction method and system based on adaptive graph convolution countermeasure network
CN113435432B (en) Video anomaly detection model training method, video anomaly detection method and device
WO2021120007A1 (en) Infrared image sequence-based sleep quality evaluation system and method
Santoyo-Ramón et al. A study on the impact of the users’ characteristics on the performance of wearable fall detection systems
Venkat et al. Recognizing occluded faces by exploiting psychophysically inspired similarity maps
CN110598599A (en) Method and device for detecting abnormal gait of human body based on Gabor atomic decomposition
CN114495241A (en) Image identification method and device, electronic equipment and storage medium
CA2773925A1 (en) Method and systems for computer-based selection of identifying input for class differentiation
CN111401393B (en) Data processing method and device, electronic equipment and storage medium
CN111063438B (en) Sleep quality evaluation system and method based on infrared image sequence
Saeed et al. Comparison of classifier architectures for online neural spike sorting
CN113160987B (en) Health state prediction method, apparatus, computer device and storage medium
Stager et al. Dealing with class skew in context recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant