CN117131022A - Heterogeneous data migration method of electric power information system - Google Patents
- Publication number: CN117131022A
- Application number: CN202311239407.5A
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06F16/214—Database migration support
- G06F16/215—Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
- G06Q50/06—Energy or water supply
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention belongs to the technical field of power data management, and particularly relates to a heterogeneous data migration method of a power information system. A heterogeneous data migration method of a power information system, the method comprising: step 1: acquiring power source data and power target data which are heterogeneous with each other; step 2: selecting a feature set from the power source data and the power target data; step 3: mapping the power target data by using the mean difference mapping model to obtain mapped power target data; step 4: training an adapter via a domain adaptation loss; step 5: performing dimensionality reduction and reconstruction on the power source data to extract a higher-level feature representation; step 6: training the domain adversarial network; step 7: mapping the power source data to the mapped power target data using a preset migration function. The invention realizes heterogeneous data migration of the electric power information system, reducing domain differences and improving the utilization rate of the data.
Description
Technical Field
The invention belongs to the technical field of power data management, and particularly relates to a heterogeneous data migration method of a power information system.
Background
The power system is one of the important infrastructures of modern society, and is responsible for the key tasks of energy supply and power distribution. To ensure reliability and efficiency of power systems, the power industry has been actively exploring various techniques and methods to improve the operation and management of power systems. With the rapid development of information technology, power system monitoring and management has become more intelligent, but has also faced a series of challenges and problems.
The data in power systems is diverse, including power source data, power target data, and other related data. These data are often heterogeneous, i.e., the data formats, units, and structures from different data sources may differ. This heterogeneity makes the acquisition, integration, and analysis of data complicated and difficult. Conventional data processing methods often fail to process such heterogeneous data effectively, resulting in low information utilization. Power system data is typically affected by various noise, missing values, and outliers. These problems may degrade the quality of the data, thereby affecting the reliability of power system monitoring and management. In order to obtain accurate information, preprocessing operations such as cleaning, removing noise, and filling missing values must be performed on the data. Conventional methods typically require extensive manual intervention and complex rule formulation, are inefficient, and are difficult to apply to large-scale data.
Power system management involves a number of fields, including power engineering, data science, machine learning, and the like. There are differences between the data and methods in different domains, so cross-domain data integration and knowledge fusion are required. Traditional data integration methods generally require professional domain knowledge and complex data conversion, and it is difficult for them to realize efficient cross-domain data integration. In power system monitoring and management, it is often necessary to migrate models from one domain (the source domain) to another (the target domain) to accommodate the data distributions in different domains. Conventional machine learning methods perform poorly in the face of domain changes because they often assume that the data distributions of the source domain and the target domain are the same.
Feature engineering is the critical task of extracting and selecting appropriate features for modeling. Power system data typically has high-dimensional characteristics that require dimensionality reduction to reduce computational complexity and improve the generalization of the model. Conventional feature engineering and dimensionality reduction methods typically require a great deal of expertise and experience, and their results are unstable. Domain adversarial training and transfer learning are important methods for dealing with domain adaptation problems. They reduce the domain difference between the source domain and the target domain through adversarial training, thereby improving the performance of the model on the target domain. However, existing domain adversarial and transfer learning methods still face challenges such as model stability, convergence speed, and parameter adjustment.
Disclosure of Invention
The invention mainly aims to provide a heterogeneous data migration method of an electric power information system, which realizes heterogeneous data migration of the electric power information system, reduces domain differences, and improves the utilization rate of data.
In order to solve the technical problems, the invention provides a heterogeneous data migration method of an electric power information system, which comprises the following steps:
step 1: acquiring power source data and power target data which are heterogeneous with each other;
step 2: selecting a feature set from the power source data and the power target data;
step 3: mapping the power target data by using a mean difference mapping model to obtain mapped power target data, so that the similarity between the power target data and the power source data in a feature space exceeds a set similarity threshold;
step 4: training an adapter via a domain adaptation loss to reduce the difference between the power source data and the mapped power target data;
step 5: performing dimensionality reduction and reconstruction on the power source data to extract a higher-level feature representation;
step 6: training a domain adversarial network to minimize domain differences between the power source data and the mapped power target data;
step 7: mapping the power source data to mapped power target data using a preset migration function.
Further, in the step 1, after the mutually heterogeneous power source data and power target data are obtained, data cleaning, noise removal, missing-value handling, and data normalization are performed on the power source data and the power target data, respectively.
Further, the step 2 specifically includes: calculating the information gain of each feature in the power source data and the power target data; ranking the features by information gain and selecting the top $N$ features with the highest information gain as the feature set of the power source data and the feature set of the power target data.
Further, the mean difference mapping model is expressed using the following formula:

$$\mathrm{MMD}^2 = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k\!\left(x_i^s, x_j^s\right) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} k\!\left(x_i^s, x_j^t\right) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k\!\left(x_i^t, x_j^t\right)$$

wherein $\mathrm{MMD}^2$ is the value of the maximum mean difference; $x_i^s$ is the $i$-th feature vector in the power source data, representing one sample of the power source data in the feature space; $x_j^t$ is the $j$-th feature vector in the power target data, representing one sample of the power target data in the feature space; $n_s$ is the number of power source data samples; $n_t$ is the number of power target data samples; and $k(\cdot,\cdot)$ is a kernel function used to calculate the similarity between the representations of the samples in the feature space.
Further, the method for training an adapter through the domain adaptation loss in step 4 includes:

Substep 4.1: extracting feature representations of the power source data and the mapped power target data using an adapter based on a deep neural network; the adapter comprises a domain classifier $D$, a shared feature extractor $G_f$, and two classifiers $C_1$ and $C_2$. The feature representation of the power source data is obtained as:

$$f_s = G_f(x_s),$$

wherein $x_s$ is the power source data and $f_s$ is the feature representation of the power source data;

the feature representation of the mapped power target data is obtained as:

$$f_t = G_f(x_t),$$

wherein $x_t$ is the mapped power target data and $f_t$ is the feature representation of the mapped power target data;
step 4.2: power source data classifierClassifying power source data into corresponding categories and domain classifierFor distinguishing power source data from mapping power target data; the classifier is to be classifiedIs a multi-layer sensor;
Substep 4.3: the domain adaptation loss is the loss function of the deep-neural-network-based adapter, which includes the power source data classification loss and the domain classification loss; the optimization target is set to minimize the domain adaptation loss. The optimization objective is expressed using the following formula:

$$\min_{G_f,\,C_1,\,D}\; L_{da},$$

wherein $L_{da}$ is the domain adaptation loss.
Further, the power source data classifier is expressed using the following formula:

$$C_1(f) = \sigma\!\left(W_1 f + b_1\right),$$

wherein $W_1$ and $b_1$ are the weight and bias parameters of the power source data classifier, and $\sigma$ represents an activation function;

the domain adaptation loss is expressed using the following formula:

$$L_{da} = \lambda_1 L_{cls} + \lambda_2 L_{dom},$$

wherein $L_{cls}$ is the power source data classification loss; $L_{dom}$ is the domain classification loss; and $\lambda_1$ and $\lambda_2$ are weight parameters, set to preset values, used to balance the importance of the two loss terms.
Further, the power source data classification loss is expressed using the following formula:

$$L_{cls} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log C_1\!\left(G_f(x_i^s)\right),$$

wherein $y_i^s$ is the true value of power source data sample $x_i^s$;

the domain classification loss is expressed using the following formula:

$$L_{dom} = -\frac{1}{n_s}\sum_{i=1}^{n_s} \log D\!\left(G_f(x_i^s)\right) - \frac{1}{n_t}\sum_{j=1}^{n_t} \log\!\left(1 - D\!\left(G_f(x_j^t)\right)\right),$$

wherein $n_s$ and $n_t$ are the numbers of samples of the power source data and the mapped power target data, respectively.
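As a non-authoritative sketch, the two loss terms above and their weighted combination can be evaluated numerically with NumPy; all numeric values below (classifier softmax outputs, discriminator outputs, and the weights λ1, λ2) are hypothetical illustrations, not values from the patent:

```python
import numpy as np

def classification_loss(probs, labels):
    """Cross-entropy L_cls: mean negative log-probability that the
    source classifier assigns to each sample's true class."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

def domain_loss(d_src, d_tgt):
    """Binary cross-entropy L_dom: the domain classifier D should
    output ~1 on source features and ~0 on mapped target features."""
    return float(-np.mean(np.log(d_src)) - np.mean(np.log(1.0 - d_tgt)))

# Hypothetical classifier/discriminator outputs for two samples.
probs  = np.array([[0.9, 0.1], [0.2, 0.8]])   # softmax outputs of C_1
labels = np.array([0, 1])                      # true classes y_i
d_src  = np.array([0.9, 0.8])                  # D on source features
d_tgt  = np.array([0.2, 0.1])                  # D on mapped target features

lam1, lam2 = 1.0, 0.5                          # preset balance weights
L_da = lam1 * classification_loss(probs, labels) + lam2 * domain_loss(d_src, d_tgt)
```

A perfectly confident, correct classifier drives the first term to zero; a discriminator that cannot separate the two domains drives the second term up, which is what the adapter training exploits.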
Further, in the step 5, the power source data is reduced and reconstructed by using an improved automatic encoder to extract a higher-level feature representation; the loss function of the improved automatic encoder is expressed using the following formula:

$$L_{AE} = \alpha \cdot \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\!\left(h(\tilde{x}_i^s)\right) \right\|^2 + \beta \left\| W \right\|_F^2,$$

wherein $L_{AE}$ is the loss function of the automatic encoder; $x_i^s$ is the true value of the $i$-th sample in the power source data; $g(\cdot)$ is the decoder function, used to reconstruct the input data from the encoded features; $h(\cdot)$ is the encoder function, which maps the power source data to a low-dimensional feature space; $\left\| x_i^s - g(h(\tilde{x}_i^s)) \right\|^2$ represents the reconstruction error; $\alpha$ and $\beta$ are weight adjustment parameters used to balance the importance of the reconstruction error and the regularization term; $W$ is the weight matrix of the automatic encoder, used to map the feature representation of the power source data to the low-dimensional feature space and back to the original feature space; $\left\| W \right\|_F^2$ is the regularization term, representing the square of the Frobenius norm of the weight matrix, used to control the size of the weights to prevent overfitting; and $\tilde{x}_i^s$ is the value of the $i$-th sample of the noisy power source data.
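The improved (denoising) autoencoder loss, reconstruction error of the clean samples from noisy inputs plus a Frobenius-norm penalty on the weight matrix, can be sketched as follows. The one-layer tied-weight architecture, the tanh encoder, and all parameter values are illustrative assumptions, not the patent's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def ae_loss(X, X_noisy, W, alpha=1.0, beta=0.01):
    """L_AE = alpha * mean reconstruction error of the clean data X
    from the noisy input X_noisy, plus beta * ||W||_F^2."""
    H = np.tanh(X_noisy @ W)          # encoder h(.): map to low-dim features
    X_hat = H @ W.T                   # decoder g(.): tied weights W^T
    recon = np.mean(np.sum((X - X_hat) ** 2, axis=1))
    reg = np.sum(W ** 2)              # squared Frobenius norm ||W||_F^2
    return alpha * recon + beta * reg

X = rng.normal(size=(8, 4))                    # clean power source samples
X_noisy = X + 0.1 * rng.normal(size=X.shape)   # corrupted inputs x~_i
W = 0.1 * rng.normal(size=(4, 2))              # 4-dim -> 2-dim encoder weights
loss = ae_loss(X, X_noisy, W)
```

In a full implementation, gradient descent on this loss would fit W; the sketch only evaluates the objective for fixed weights.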
Further, the loss function used when training the domain adversarial network in the step 6 is expressed using the following formula:

$$L_{adv} = -\frac{1}{n_s}\sum_{i=1}^{n_s} \log D_a\!\left(G_f(x_i^s)\right) - \frac{1}{n_t}\sum_{j=1}^{n_t} \log\!\left(1 - D_a\!\left(G_f(x_j^t)\right)\right),$$

wherein $D_a$ represents the domain adversarial network and $x_j^t$ is the true value of the $j$-th sample in the mapped power target data.
Further, the migration function preset in the step 7 is a nonlinear mapping model $T$ based on a deep neural network; the loss value of the migration function is expressed using the following formula:

$$L_T = \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| T(x_i^s) - x_i^t \right\|^2 + \lambda \sum_{l=1}^{L} \left\| W^{(l)} \right\|_F^2,$$

wherein $T$ represents the nonlinear mapping model, $L$ is the number of layers of the nonlinear mapping model $T$, $W^{(l)}$ represents the $l$-th layer weight matrix of the nonlinear mapping model $T$, and $\lambda$ is a weight regularization parameter.
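A minimal sketch of the migration-function loss: mean squared error between the mapped source samples and the target samples, plus a summed Frobenius-norm penalty over all layer weight matrices. The two-layer tanh network and the one-to-one pairing of source and target samples are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def migrate(x, weights):
    """Nonlinear mapping model T: a small MLP with tanh activations."""
    h = x
    for W in weights[:-1]:
        h = np.tanh(h @ W)
    return h @ weights[-1]

def migration_loss(X_src, X_tgt_mapped, weights, lam=1e-3):
    """L_T: mean squared error between T(source) and the mapped target
    samples, plus lam * sum of squared Frobenius norms of each W^(l)."""
    err = np.mean(np.sum((migrate(X_src, weights) - X_tgt_mapped) ** 2, axis=1))
    reg = sum(np.sum(W ** 2) for W in weights)
    return err + lam * reg

X_src = rng.normal(size=(10, 3))               # power source samples
X_tgt = rng.normal(size=(10, 3))               # mapped power target samples
weights = [rng.normal(size=(3, 5)) * 0.1,      # layer 1 weight matrix
           rng.normal(size=(5, 3)) * 0.1]      # layer 2 weight matrix
loss = migration_loss(X_src, X_tgt, weights)
```

When the target equals the model's own output, only the regularization term remains, which is how λ trades fit against weight magnitude.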
The heterogeneous data migration method of the electric power information system has the following beneficial effects: the method of the invention firstly solves the problem of heterogeneous data processing and integration in the power system. In conventional power systems, the data formats, units, and structures from different data sources may vary, resulting in data processing difficulties. According to the invention, the feature set is selected and the mean difference mapping model is used to map the power target data into a feature space similar to that of the power source data, so that effective integration of heterogeneous data is realized. This enables information from different data sources to be better utilized, helping to improve the accuracy of power system monitoring and management. Training of the adapter minimizes the difference between the source domain and the target domain through the domain adaptation loss. This helps the model better adapt to the data distribution in the target domain, thereby improving the performance of the model in the target domain. Training of the adapter also improves the generalization capability of the model, so that the model performs well on data from different domains. This means that the model can be applied effectively even in unseen target domains, reducing the need to retrain the model and improving the flexibility and maintainability of the system. During training of the adapter, the risk of overfitting can be reduced through the domain adaptation loss and the adversarial interplay between the source domain and the target domain. This helps to improve the stability and robustness of the model, especially in situations where the amount of data is limited. The domain adversarial network reduces domain differences between the source and target domains through adversarial training. This helps the model adapt better to the data in the target domain, improving the performance of the model.
The introduction of the domain adversarial network can improve the stability of the model in the face of domain adaptation problems. The method realizes domain invariance of the model by minimizing the domain adaptation loss, thereby reducing the influence of domain changes. The mean difference mapping model maps the power target data into a feature space similar to that of the power source data. This helps to achieve migration of the feature space, so that data of the target domain can be better aligned with data of the source domain. The mapped power target data is more similar in the feature space to the data of the source domain, which improves the performance of the model in the target domain: the model can more accurately capture features and patterns of the target domain. The application of the mean difference mapping model enables the target-domain data to be more fully utilized, thereby improving the data utilization rate. This helps to improve the efficiency and accuracy of power system management.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a heterogeneous data migration method of an electric power information system according to an embodiment of the present invention.
Detailed Description
The method of the present invention will be described in further detail with reference to the accompanying drawings.
Example 1: referring to fig. 1, a heterogeneous data migration method of a power information system, the method comprising:
step 1: acquiring power source data and power target data which are heterogeneous with each other; the power data may include a variety of parameters such as current, voltage, power, frequency, etc., which are typically present in time series. The power source data typically comes from different power stations, different sensors and equipment, which may have different sampling frequencies, data formats and dimensions. The power target data may be user demand data, market price data, etc., which may also have different data structures and characteristics. Thus, in this step, the primary task is to collect such data from multiple heterogeneous data sources, which may vary in dimension, format, and characteristics.
Step 2: selecting a feature set from the power source data and the power target data; a particular set of features is selected from the power source data and the power target data. The characteristics of the power data may include frequency components, harmonic components, peak power, etc. The proper feature set is selected in order to establish an efficient mapping relationship in a subsequent step. In power data, characteristics generally refer to various measurements and parameters representing power system state and performance, such as current, voltage, power factor, and the like. In this step, the purpose of selecting the feature set is to screen out the most representative and important features from the large amount of data, so as to reduce the complexity and calculation cost of subsequent processing. This choice may be based on the requirements of domain expertise and data analysis.
Step 3: mapping the power target data by using a mean difference mapping model to obtain mapped power target data, so that the similarity between the power target data and the power source data in a feature space exceeds a set similarity threshold; the mean difference refers to the difference between the mean values of the power source data and the power target data in the feature space. By mapping the power target data so that its similarity to the power source data in the feature space exceeds the set similarity threshold, the target data and the source data can be guaranteed a certain degree of similarity. This helps preserve the important characteristics of the data while reducing the impact of heterogeneity.
Step 4: training an adaptation by domain adaptation loss to reduce the difference between the power source data and the mapped power target data; the domain adaptation loss is a loss function that measures the difference between source data and target data in the feature space. By training an adapter to minimize this loss function, a better match between the source data and the mapped target data can be achieved.
Step 5: performing dimensionality reduction and reconstruction on the power source data to extract a higher-level feature representation; in power data analysis, dimensionality reduction techniques are typically used to extract higher-level feature representations. This can help reduce the dimensionality of the data while retaining important information. The reduced-dimension data can be more easily used for subsequent modeling and analysis.
Step 6: training a domain adversarial network to minimize domain differences between the power source data and the mapped power target data; the domain adversarial network is used to minimize domain differences between the power source data and the mapped power target data. The domain difference refers to the fact that data coming from different power sources may have different distributions. By training the adversarial network, the source data and the target data can be mapped into a shared feature space to reduce domain differences, thereby increasing the consistency and availability of the data.
Step 7: mapping the power source data to the mapped power target data using a preset migration function. This function may be derived based on the models and parameters trained in the previous steps. It ensures consistency and availability of the data and allows the data to be used directly in subsequent applications such as power system monitoring and fault detection.
In particular, one of the characteristics of power data is that it may come from different domains, such as different power systems, geographical locations, or operating conditions. These domain differences can lead to differences in data distribution, complicating the use of data across domains. The inventive aspect of the domain adaptation loss is that it maps data of different domains into a shared feature space by training an adapter to reduce domain differences. This helps to improve the consistency of the data, making it easier to compare and analyze data from different domains. Another characteristic of power data is that it may have different statistical distributions in different areas; for example, there may be differences between power systems in different geographical regions. The inventive aspect of the domain adversarial network is that it reduces domain differences through adversarial training, so that data from different domains become more consistent in the shared feature space. This helps to improve the generalization performance and transferability of the data, so that the model can work effectively in different domains.
Example 2: on the basis of the above embodiment, after the mutually heterogeneous power source data and power target data are obtained in step 1, data cleaning, noise removal, missing-value handling, and data normalization are performed on the power source data and the power target data, respectively.
Specifically, data cleansing is an important step after data acquisition. The power data may be subject to various disturbances, such as instrument errors, sensor failures, or communication problems, which may lead to the presence of outliers or noise in the data. The purpose of data cleansing is to detect and correct these problems to ensure the accuracy of the data. For example, statistical methods or domain knowledge may be used to identify and repair outliers so that they do not adversely affect subsequent analysis. The power data may contain missing values due to sensor failure or communication problems. In data analysis and modeling, missing values are often unacceptable because they may cause the model to become unstable or fail. Thus, handling missing values is a necessary step; interpolation methods or other techniques can be used to fill them in to preserve the integrity of the data. The power data may also contain noise that is unrelated to the power system itself, which may interfere with analysis and modeling of the data. The purpose of removing noise is to improve the signal-to-noise ratio of the data, so that the data carries more information value. This may be achieved by filtering techniques, smoothing methods, or signal-processing methods that remove high- or low-frequency noise. The power source data and the power target data may have different units and dimensions, which may cause problems for subsequent data analysis and modeling. The purpose of data normalization is to scale the data to the same scale so that they can be directly compared and analyzed. Normalization typically involves converting the data to a standard normal distribution with a mean of 0 and a standard deviation of 1, or scaling and transforming using other methods to ensure that the data are on the same scale.
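The preprocessing pipeline described above (missing-value filling, outlier handling, and normalization to zero mean and unit standard deviation) can be sketched in NumPy as follows; the 3-sigma clipping rule and the sample readings are illustrative assumptions, not values prescribed by the patent:

```python
import numpy as np

def preprocess(values):
    """Clean a 1-D series of power readings: fill missing values (NaN)
    by linear interpolation, clip outliers beyond 3 standard deviations,
    and z-score normalize the result."""
    x = np.asarray(values, dtype=float)
    # Fill missing values by interpolating over the valid entries.
    idx = np.arange(len(x))
    mask = np.isnan(x)
    x[mask] = np.interp(idx[mask], idx[~mask], x[~mask])
    # Clip outliers to the 3-sigma band around the mean.
    mu, sigma = x.mean(), x.std()
    x = np.clip(x, mu - 3 * sigma, mu + 3 * sigma)
    # Z-score normalization: zero mean, unit standard deviation.
    return (x - x.mean()) / x.std()

readings = [230.1, 231.0, np.nan, 229.8, 230.5]  # hypothetical voltage samples
z = preprocess(readings)
```

After this step the source and target series are on the same scale, which is a precondition for the feature selection and mapping steps that follow.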
Example 3: based on the above embodiment, the step 2 specifically includes: calculating the information gain of each feature in the power source data and the power target data; ranking the features by information gain and selecting the top $N$ features with the highest information gain as the feature set of the power source data and the feature set of the power target data.
Specifically, for the power source data and the power target data, the information entropy of the overall data is first calculated from the distribution of categories in the data. For each feature, the conditional entropy given that feature is then calculated; this means computing the conditional probability distribution under each value of the feature and then calculating the corresponding conditional entropy. The information gain of the feature is obtained by subtracting the conditional entropy from the information entropy of the overall data. After the information gain of each feature has been calculated, the features are ranked by information gain, and the top $N$ features with the highest information gain are taken as the feature sets of the power source data and the power target data.
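The entropy-based feature scoring described above can be sketched as follows; the helper names and the toy feature/label data are illustrative, not from the patent:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy H(D) of a discrete label sequence, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels):
    """IG(A) = H(D) - H(D|A) for one discrete feature column."""
    total = entropy(labels)
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)  # weighted conditional entropy
    return total - cond

# Toy example: a binary feature that perfectly predicts the label has
# gain equal to the full entropy H(D) = 1 bit.
feature = [0, 0, 1, 1]
labels  = ['a', 'a', 'b', 'b']
gain = information_gain(feature, labels)
```

Ranking features by this score and keeping the top N realizes the selection rule of step 2.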
Example 4: on the basis of the above embodiment, the mean difference mapping model is expressed using the following formula:
$$ \mathrm{MMD}^2 = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k\left(x_i^s, x_j^s\right) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} k\left(x_i^s, x_j^t\right) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k\left(x_i^t, x_j^t\right) $$

Wherein, MMD² is the maximum mean difference value; x_i^s is the i-th feature vector in the power source data, representing one sample of the power source data in the feature space; x_j^t is the j-th feature vector in the power target data, representing one sample of the power target data in the feature space; n_s is the number of power source data samples; n_t is the number of power target data samples; k(·,·) is a kernel function used to calculate the similarity between the representations of the samples in the feature space.
In particular, the mean-difference mapping model MMD is a method for measuring the similarity between two data sets, and the basic principle is to measure the distribution difference of the two data sets in the feature space through a kernel function. The kernel function calculates the similarity between the representations of each sample in the feature space in the dataset, and then combines these similarities to derive a final similarity measure. The core idea of MMD is that if the distribution of two data sets in the feature space is similar, the similarity between their samples should be higher, and vice versa.
Source internal mean difference (the first term): this term is used to measure the similarity between samples within the power source data. It calculates the similarity between each pair of samples in the power source data and takes the average value; if the distribution of the power source data in the feature space is uniform, the value of this term is low. Source-target mean difference (the second term): this term is used to measure the similarity between the power source data and the power target data. It calculates the sample similarity between the power source data and the power target data and takes the average value; if the two distributions in the feature space are similar, the value of this term is lower. Target internal mean difference (the third term): this term is used to measure the similarity between samples within the power target data. It calculates the similarity between each pair of samples in the power target data and takes the average value; if the distribution of the power target data in the feature space is uniform, the value of this term is low.
The main function of the mean difference mapping model MMD is to measure the distribution difference between two data sets, which is particularly useful for domain adaptation and data migration tasks. If the power source data and the power target data come from different domains, their distributions may vary greatly. By computing the MMD, this difference can be quantified, and the data can be adjusted to make them more similar in the feature space, thereby improving domain adaptability. The MMD may also be used to select the most relevant features for a task: by comparing the MMD values of different feature sets, it can be determined which features are most important for classification or target prediction.
Example 5: on the basis of the above embodiment, the method for training an adapter by the domain adaptation loss in step 4 includes:
substep 4.1: extracting feature representations of the power source data and the mapped power target data using an adapter based on a deep neural network; the adapter comprises a domain classifier D, a shared feature extractor G, and two different classifiers C_s and C_t. The resulting power source data features are expressed as:
$$ f_s = G(x_s) $$

wherein x_s is the power source data and f_s is the feature representation of the power source data;
the resulting mapped power target data features are represented as:
$$ f_t = G(x_t) $$

wherein x_t is the mapped power target data and f_t is the feature representation of the mapped power target data;
substep 4.2: the power source data classifier C_s classifies the power source data into its corresponding categories, and the domain classifier D is used for distinguishing the power source data from the mapped power target data; the classifier C_s is a multi-layer perceptron;
substep 4.3: the domain adaptation loss is the loss function of the adapter based on the deep neural network, and it includes the power source data classification loss and the domain classification loss; the optimization target is set to minimize the domain adaptation loss. The optimization objective is expressed using the following formula:

$$ \min_{G,\,C_s,\,D} \; L_{da} $$

wherein L_da is the domain adaptation loss.
Specifically, first, the shared feature extractor G maps the power source data x_s and the mapped power target data x_t to a feature representation space. The feature extractor is a deep neural network that can learn a high-level feature representation of the data; the goal of this step is to map data from different domains to similar feature spaces to reduce domain differences. Next, two important components are trained: the power source data classifier C_s and the domain classifier D. The task of C_s is to categorize the feature representation of the power source data into different categories, such as classifications of the power system state. The task of D is to distinguish whether the input data come from the power source data or the mapped power target data, i.e., to perform domain classification. The two classifiers work cooperatively to help the model learn to adapt to differences between domains. The domain adaptation loss L_da is the core of this approach. It comprises two parts: the power source data classification loss and the domain classification loss. By minimizing this loss, the model is forced to learn how to map the power source data to the correct category while reducing the domain differences between the power source data and the mapped power target data. This may be achieved by a back-propagation algorithm and an optimizer that update the parameters of the feature extractor G, the power source data classifier C_s, and the domain classifier D.
The main purpose of this approach is to achieve domain adaptation, i.e., mapping power data from different domains to a shared feature space for classification or other tasks in that space. Specifically: through the domain adaptation loss during training, the model is encouraged to map the power source data and the mapped power target data to similar feature spaces. This helps reduce the domain differences between the two data sets, making them more suitable for comparison and analysis in the shared feature representation space. The feature extractor G learns how to extract useful feature representations of the power data, which can be used for classification and similar tasks; this improves the quality of the characterization of the power data and thereby the performance of subsequent tasks. The domain classifier D learns how to distinguish data from different domains; this helps the model learn domain differences and adapt to them by adjusting the feature representation, enhancing the generalization ability of the model.
Example 6: on the basis of the above embodiment, the power source data classifier is expressed using the following formula:
$$ C_s(f_s) = \sigma\left(W_c f_s + b_c\right) $$

Wherein, W_c and b_c are the weight and bias parameters of the power source data classifier; σ represents an activation function; C_s(f_s) represents the output of the power source data classifier for the feature representation f_s, which is the output of the classification task. The activation function σ is typically used to introduce nonlinearity: its effect is to transform the result of the linear combination into a nonlinear probability distribution, increasing the expressive power of the model. The parameters W_c and b_c are learned during the training process; W_c performs a linear transformation of the feature representation f_s, while b_c is a bias term. The principle of the power source data classifier is to map the input feature representation f_s to the corresponding category to perform the classification task. This typically involves multiplying the feature representation by a weight matrix, adding the bias, and applying a nonlinear transformation through the activation function. Such a model learns how to extract and capture information related to the classification task from the features of the data. The function of the power source data classifier is to convert the feature representation f_s into a probability distribution over the corresponding categories, thereby realizing the classification task on the power source data. By training the classifier, the model learns how to map the feature representation to the correct class label, giving it the ability to make classification predictions.
The domain adaptation loss is expressed using the following formula:
$$ L_{da} = \alpha_1 L_{cls} - \alpha_2 L_{d} $$

wherein L_cls is the power source data classification loss; L_d is the domain classification loss; α_1 and α_2 are weight parameters, which are preset values used to balance the importance of the two loss terms.
L_da is the domain adaptation loss used for training the model to reduce the domain differences between the power source data and the mapped power target data. α_1 and α_2 are weight parameters used to balance the importance of the power source data classification loss and the domain classification loss. L_cls is the power source data classification loss, which measures performance on the power source data classification task; L_d is the domain classification loss, which measures the model's performance on domain classification. The principle of the domain adaptation loss is to train the model by minimizing L_cls while maximizing L_d. The goal is to make the model perform well on the power source data classification task while minimizing the domain differences between the power source data and the mapped power target data. By adjusting the values of α_1 and α_2, the model's trade-off between the two tasks can be controlled. The domain adaptation loss trains the model to adapt to the distribution differences of data across domains, thereby improving the generalization performance of the model on the mapped power target data. By minimizing L_da, the model is forced to learn how to map the power source data and the mapped power target data to similar feature representation spaces, reducing the domain differences and making the model more suitable for use on the mapped power target data.
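The combined objective can be sketched as below. The subtraction of the domain term follows the adversarial convention stated in the text ("minimizing L_cls and maximizing L_d"); the helper names and the use of binary cross-entropy for both components are assumptions for the illustration.

```python
import math

def bce(probs, labels):
    """Mean binary cross-entropy between predicted probabilities and 0/1 labels."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(probs)

def domain_adaptation_loss(cls_probs, cls_labels, dom_probs, dom_labels,
                           alpha1=1.0, alpha2=0.5):
    """L_da = alpha1 * L_cls - alpha2 * L_d: the classification term is
    driven down while the domain classifier's loss is driven up, so the
    shared features become hard to assign to a domain."""
    return alpha1 * bce(cls_probs, cls_labels) - alpha2 * bce(dom_probs, dom_labels)
```

With alpha1 = alpha2 and identical classification and domain errors, the two terms cancel; increasing alpha2 shifts the emphasis toward confusing the domain classifier.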
Example 7: on the basis of the above-mentioned embodiment,
the power source data classification loss is expressed using the following formula:
$$ L_{cls} = -\frac{1}{n_s}\sum_{i=1}^{n_s}\left[ y_i \log C_s\left(G(x_i^s)\right) + \left(1 - y_i\right)\log\left(1 - C_s\left(G(x_i^s)\right)\right)\right] $$

wherein y_i is the true label of the power source data sample x_i^s;
Specifically, the principle of the power source data classification loss is to use binary cross-entropy (Binary Cross-Entropy) loss to measure the classification performance of the model on the power source data. This loss function evaluates the differences between the model's classification predictions and the true labels, and is commonly used for binary classification tasks. Specifically, a loss term is calculated for each power source data sample x_i^s. The loss term includes two parts: for a positive sample (y_i = 1, indicating that the sample belongs to the class), the loss term includes y_i log C_s(G(x_i^s)); for a negative sample (y_i = 0, indicating that the sample does not belong to the class), the loss term includes (1 − y_i) log(1 − C_s(G(x_i^s))). These two parts measure the classification probability of the positive samples and of the negative samples, respectively. The whole loss function is calculated by averaging the loss terms over all power source data samples, i.e., summing the loss term of each sample and dividing by the number of power source data samples n_s. It measures the classification performance of the model on the power source data: the classification accuracy of the model can be assessed from the difference between its classification predictions and the real labels. The loss function is a key component of training the deep learning model. By minimizing the power source data classification loss, the model is forced to learn how to adjust its weights and biases to improve the accuracy of its classification predictions. The loss function encourages the model to generate classification probabilities closer to the real labels, which improves the performance of the classifier and enables it to better distinguish between different classes.
The domain classification loss is expressed using the following formula:
$$ L_{d} = -\frac{1}{n_s}\sum_{i=1}^{n_s} \log D\left(G(x_i^s)\right) - \frac{1}{n_t}\sum_{j=1}^{n_t} \log\left(1 - D\left(G(x_j^t)\right)\right) $$

wherein n_s and n_t are the numbers of samples of the power source data and the mapped power target data, respectively.
Specifically, the principle of the domain classification loss is to use binary cross-entropy (Binary Cross-Entropy) loss to measure the performance of the model on domain classification, i.e., determining whether the input data come from the power source data or the mapped power target data. For samples from the power source data, the loss term is −log D(G(x_i^s)), where D(G(x_i^s)) represents the probability that the model predicts the sample as power source data. For samples from the mapped power target data, the loss term is −log(1 − D(G(x_j^t))), where 1 − D(G(x_j^t)) represents the probability that the model predicts the sample as mapped power target data. The overall loss function is calculated by averaging the loss terms over all power source data and mapped power target data samples, i.e., summing the loss terms of all samples and dividing by the total number of samples. The main role of this loss function is to train the model to reduce the domain differences between the power source data and the mapped power target data. By letting the model learn how to correctly classify data as power source data or mapped power target data, domain adaptation can be achieved, making the model better adapt to the data distribution of the mapped power target data. The loss function also measures the performance of the model on the domain classification task: it measures the model's ability to distinguish data from different domains, i.e., whether the model can accurately identify which domain the data come from. Through the domain classification loss, the model is forced to learn how to map the power source data and the mapped power target data to similar feature representation spaces, thereby reducing domain differences. This helps the model achieve better generalization performance on the mapped power target data.
Example 8: on the basis of the above embodiment, in the step 5, the power source data is reduced and reconstructed using the improved automatic encoder to extract a higher-level feature representation; the loss function of the improved automatic encoder is expressed using the following formula:
$$ L_{AE} = \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\left(h(x_i^s)\right) \right\|^2 + \lambda_1 \left\| W \right\|_F^2 + \lambda_2 \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\left(h(\tilde{x}_i^s)\right) \right\|^2 $$

Wherein, L_AE is the loss function of the automatic encoder; x_i^s is the true value of the i-th sample in the power source data; g(·) is the decoder function, used for reconstructing input data from the encoded features; h(·) is the encoder function, which maps the power source data to a low-dimensional feature space; ||x_i^s − g(h(x_i^s))||² represents the reconstruction error; λ_1 and λ_2 are weight adjustment parameters used for balancing the importance of the reconstruction error and the regularization terms; W is the weight matrix of the automatic encoder, used for mapping the feature representation of the power source data to the low-dimensional feature space and back to the original feature space; ||W||_F² is the regularization term, the square of the Frobenius norm of the weight matrix, used for controlling the size of the weights to prevent overfitting; x̃_i^s is the value of the i-th sample of the noisy power source data.
In particular, the goal of the improved automatic encoder is to reduce and reconstruct the power source data to extract higher-level feature representations. It comprises an encoder h(·) and a decoder g(·). The encoder maps the power source data to a low-dimensional feature space, and the decoder reconstructs the low-dimensional features back into the original feature space.
Reconstruction error term:

$$ \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\left(h(x_i^s)\right) \right\|^2 $$

This term measures the difference between the original power source data and the decoded data. Its goal is to minimize the reconstruction error to ensure that the encoder and decoder can efficiently preserve the information of the data.
Regularization term 1:

$$ \lambda_1 \left\| W \right\|_F^2 $$

This term is the square of the Frobenius norm of the weight matrix W, used for controlling the magnitude of the weights to prevent model overfitting. It helps maintain the stability of the weight matrix.
Regularization term 2:

$$ \lambda_2 \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\left(h(\tilde{x}_i^s)\right) \right\|^2 $$

This term measures the reconstruction error of the source data after noise is added. It aims to give the model a certain robustness to noisy data, thereby improving the stability of feature extraction.
The function of the improved automatic encoder is as follows: the encoder maps the raw power source data to a low-dimensional feature space, from which a higher-level feature representation is extracted. This helps the model capture important information in the data and reduces the dimensionality of the features, improving the generalization ability of the model. The decoder reconstructs the low-dimensional features back into the original feature space, helping the model learn how to preserve the important information of the original data. By minimizing the reconstruction error, the automatic encoder encourages the model to produce data similar to the original data. Regularization term 1 helps control the size of the weight matrix, preventing the model from overfitting the source data. Regularization term 2, by adding noise and measuring the reconstruction error, improves the robustness of the model to noisy data and the stability of feature extraction.
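A toy one-dimensional version of this loss is sketched below; scalar weights stand in for the encoder/decoder weight matrices, and the Gaussian noise model, seed handling, and all names are assumptions introduced for the illustration.

```python
import random

def reconstruct(x, w_enc, w_dec):
    """One-dimensional linear autoencoder: h = w_enc * x, x_hat = w_dec * h."""
    return w_dec * (w_enc * x)

def improved_ae_loss(samples, w_enc, w_dec, lam1=0.01, lam2=0.1, noise=0.1, seed=0):
    """Loss of the improved autoencoder as described above: reconstruction
    error + lam1 * squared weight norm + lam2 * reconstruction error on
    noise-corrupted inputs (the denoising term)."""
    rng = random.Random(seed)
    n = len(samples)
    rec = sum((x - reconstruct(x, w_enc, w_dec)) ** 2 for x in samples) / n
    reg = lam1 * (w_enc ** 2 + w_dec ** 2)
    noisy = sum((x - reconstruct(x + rng.gauss(0, noise), w_enc, w_dec)) ** 2
                for x in samples) / n
    return rec + reg + lam2 * noisy
```

With perfect weights (w_enc = w_dec = 1) and the noise turned off, only the weight-norm penalty remains, which shows how the regularizers trade off against pure reconstruction.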
Example 9: on the basis of the above embodiment, the loss function of the domain countermeasure network when the domain countermeasure network is trained in the step 6 is expressed using the following formula:
$$ L_{adv} = -\frac{1}{n_s}\sum_{i=1}^{n_s} \log A\left(G(x_i^s)\right) - \frac{1}{n_t}\sum_{j=1}^{n_t} \log\left(1 - A\left(G(x_j^t)\right)\right) $$

wherein A represents the domain countermeasure network; x_j^t is the true value of the j-th sample in the mapped power target data.
Specifically, the counter-loss term of the power source data samples:

$$ -\frac{1}{n_s}\sum_{i=1}^{n_s} \log A\left(G(x_i^s)\right) $$
The goal of this term is to maximize the classification error of the domain countermeasure network on the power source data samples, so that the domain countermeasure network cannot accurately distinguish which samples come from the power source data.
The counter-loss term of the mapped power target data samples:

$$ -\frac{1}{n_t}\sum_{j=1}^{n_t} \log\left(1 - A\left(G(x_j^t)\right)\right) $$
the goal of this term is to maximize the classification error of the domain countermeasure network on the mapped power target data samples, i.e., to make the domain countermeasure network unable to accurately distinguish which samples are from the mapped power target data. The goal of the domain countermeasure network is to minimize domain countermeasure losses, and to make the model learn how to blur boundaries between power source data and mapped power target data, thereby achieving domain adaptation.
The domain countermeasure network loss function has the following effects: by introducing the domain counter-loss, the model is forced to learn how to minimize the domain differences between the power source data and the mapped power target data. The goal of the domain countermeasure network is to minimize domain differences, thereby improving the generalization performance of the model on the mapped power target data. The domain counter-loss is also used to measure the performance of the model on the domain classification task: it measures the model's ability to distinguish data from different domains, i.e., whether the model can accurately identify which domain the data come from. Adversarial training is a method of training the model by minimizing the domain counter-loss. By minimizing this loss, the model is forced to learn how to make the feature representations of the power source data and the mapped power target data as close as possible, thereby achieving domain adaptation.
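The two counter-loss terms above can be sketched as below. Here `src_probs` and `tgt_probs` stand for already-computed values of A(G(x)), interpreted as the probability that a sample comes from the mapped target domain; this convention and the function name are assumptions for the illustration.

```python
import math

def domain_adversarial_loss(src_probs, tgt_probs):
    """Sum of the two counter-loss terms: the loss falls as the domain
    network misclassifies both sets (source samples scored as target,
    target samples scored as source), blurring the domain boundary."""
    src_term = -sum(math.log(p) for p in src_probs) / len(src_probs)
    tgt_term = -sum(math.log(1 - p) for p in tgt_probs) / len(tgt_probs)
    return src_term + tgt_term
```

When the domain network is more confused (source scores pushed toward 1, target scores toward 0), the loss decreases, which is the direction the adversarial training drives the feature extractor.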
Example 10: on the basis of the above embodiment, the migration function preset in step 7 is a nonlinear mapping model M based on a deep neural network; the loss value of the migration function is expressed using the following formula:

$$ L_{M} = \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| M\left(f_i^s\right) - f_i^t \right\|^2 + \lambda \sum_{l=1}^{L} \left\| W_l \right\|_F^2 $$

wherein M represents the nonlinear mapping model; L is the number of layers of the nonlinear mapping model M; W_l represents the l-th layer weight matrix of the nonlinear mapping model M; λ is a weight regularization parameter.
Specifically, the goal of the nonlinear mapping model M is to map the feature representation of the power source data to the feature representation of the mapped power target data, thereby enabling migration of the feature space. The loss function L_M comprises two parts:
Feature mapping error term:

$$ \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| M\left(f_i^s\right) - f_i^t \right\|^2 $$
This term measures the difference between the mapped power source data features and the mapped power target data features. The goal is to minimize the feature mapping error to ensure that the feature representations of the power source data and the mapped power target data are as close as possible.
Weight regularization term:

$$ \lambda \sum_{l=1}^{L} \left\| W_l \right\|_F^2 $$
This term is used to control the complexity of the nonlinear mapping model M to prevent overfitting. The regularization term ensures the generalization ability of the model by penalizing the magnitude of the weights.
The nonlinear mapping model M and the corresponding loss function L_M work as follows: by minimizing the feature mapping error, the nonlinear mapping model M is trained to carry out the migration of the feature space. This helps bring the power source data and the mapped power target data closer together in the feature representation space, thereby improving the performance of the model on the mapped power target data. The regularization term helps control the complexity of the nonlinear mapping model M to prevent overfitting, and helps maintain model stability by penalizing the magnitude of the weights.
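The loss value above can be sketched as follows; the list-of-lists representation of weight matrices, the pairing of source and target feature rows, and the helper names are assumptions for the example.

```python
def frobenius_sq(matrix):
    """Squared Frobenius norm of a weight matrix given as a list of rows."""
    return sum(w * w for row in matrix for w in row)

def migration_loss(mapped_source_feats, target_feats, weight_matrices, lam=0.01):
    """L_M = mean squared feature-mapping error between M(f_s) and f_t,
    plus lam times the sum of squared Frobenius norms of the mapping
    network's layer weight matrices."""
    n = len(mapped_source_feats)
    # Feature mapping error term: average squared distance between paired rows.
    err = sum(sum((a - b) ** 2 for a, b in zip(fs, ft))
              for fs, ft in zip(mapped_source_feats, target_feats)) / n
    # Weight regularization term over all layers.
    reg = lam * sum(frobenius_sq(W) for W in weight_matrices)
    return err + reg
```

With perfectly aligned features the loss reduces to the regularization term alone, which illustrates how λ trades alignment accuracy against model complexity.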
While specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are by way of example only, and that various omissions, substitutions, and changes in the form and details of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the above-described method steps to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is limited only by the following claims.
Claims (10)
1. A method for heterogeneous data migration in a power information system, the method comprising:
Step 1: acquiring power source data and power target data which are heterogeneous with each other;
step 2: selecting a feature set from the power source data and the power target data;
step 3: mapping the power target data by using a mean difference mapping model to obtain mapped power target data, so that the similarity between the power target data and the power source data in a feature space exceeds a set similarity threshold;
step 4: training an adapter by domain adaptation loss to reduce the difference between the power source data and the mapped power target data;
step 5: performing reduction and reconstruction on the power source data to extract higher-level feature representation;
step 6: training a domain countermeasure network to minimize domain differences between the power source data and the mapped power target data;
step 7: mapping the power source data to mapped power target data using a preset migration function.
2. The heterogeneous data migration method of the power information system according to claim 1, wherein in the step 1, after the power source data and the power target data heterogeneous with each other are acquired, data cleaning, noise removal, missing-value removal, and data normalization are performed on the power source data and the power target data, respectively.
3. The heterogeneous data migration method of the power information system according to claim 1, wherein the step 2 specifically includes: calculating the information gain of each feature in the power source data and the power target data; sorting the information gains and selecting the top k features with the highest information gain as the feature set F_s of the power source data and the feature set F_t of the power target data.
4. The heterogeneous data migration method of a power information system of claim 3, wherein the mean difference mapping model is expressed using the following formula:
$$ \mathrm{MMD}^2 = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k\left(x_i^s, x_j^s\right) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} k\left(x_i^s, x_j^t\right) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k\left(x_i^t, x_j^t\right) $$

wherein MMD² is the maximum mean difference value; x_i^s is the i-th feature vector in the power source data, representing one sample of the power source data in the feature space; x_j^t is the j-th feature vector in the power target data, representing one sample of the power target data in the feature space; n_s is the number of power source data samples; n_t is the number of power target data samples; k(·,·) is a kernel function used to calculate the similarity between the representations of the samples in the feature space.
5. The method for heterogeneous data migration of a power information system of claim 4, wherein the training of an adapter by domain adaptation loss in step 4 comprises:
Substep 4.1: extracting feature representations of the power source data and the mapped power target data using an adapter based on a deep neural network; the adapter comprises a domain classifier D, a shared feature extractor G, and two different classifiers C_s and C_t; the resulting power source data features are expressed as:

$$ f_s = G(x_s) $$

wherein x_s is the power source data and f_s is the feature representation of the power source data;
the resulting mapped power target data features are represented as:
$$ f_t = G(x_t) $$

wherein x_t is the mapped power target data and f_t is the feature representation of the mapped power target data;
substep 4.2: the power source data classifier C_s classifies the power source data into its corresponding categories, and the domain classifier D is used for distinguishing the power source data from the mapped power target data; the classifier C_s is a multi-layer perceptron;
substep 4.3: the domain adaptation loss is the loss function of the adapter based on the deep neural network, and it includes the power source data classification loss and the domain classification loss; the optimization target is set to minimize the domain adaptation loss. The optimization objective is expressed using the following formula:

$$ \min_{G,\,C_s,\,D} \; L_{da} $$

wherein L_da is the domain adaptation loss.
6. The method of heterogeneous data migration in a power information system of claim 5, wherein the power source data classifier is expressed using the following formula:
$$ C_s(f_s) = \sigma\left(W_c f_s + b_c\right) $$

wherein W_c and b_c are the weight and bias parameters of the power source data classifier; σ represents an activation function;
the domain adaptation loss is expressed using the following formula:
$$ L_{da} = \alpha_1 L_{cls} - \alpha_2 L_{d} $$

wherein L_cls is the power source data classification loss; L_d is the domain classification loss; α_1 and α_2 are weight parameters, which are preset values used to balance the importance of the two loss terms.
7. The method for heterogeneous data migration of a power information system of claim 5,
the power source data classification loss is expressed using the following formula:
$$ L_{cls} = -\frac{1}{n_s}\sum_{i=1}^{n_s}\left[ y_i \log C_s\left(G(x_i^s)\right) + \left(1 - y_i\right)\log\left(1 - C_s\left(G(x_i^s)\right)\right)\right] $$

wherein y_i is the true label of the power source data sample x_i^s;
the domain classification loss is expressed using the following formula:
$$ L_{d} = -\frac{1}{n_s}\sum_{i=1}^{n_s} \log D\left(G(x_i^s)\right) - \frac{1}{n_t}\sum_{j=1}^{n_t} \log\left(1 - D\left(G(x_j^t)\right)\right) $$

wherein n_s and n_t are the numbers of samples of the power source data and the mapped power target data, respectively.
8. The heterogeneous data migration method of the power information system according to claim 7, wherein in the step 5, the power source data is reduced and reconstructed using an improved automatic encoder to extract a higher-level feature representation; the loss function of the improved automatic encoder is expressed using the following formula:

$$ L_{AE} = \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\left(h(x_i^s)\right) \right\|^2 + \lambda_1 \left\| W \right\|_F^2 + \lambda_2 \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| x_i^s - g\left(h(\tilde{x}_i^s)\right) \right\|^2 $$

wherein L_AE is the loss function of the automatic encoder; x_i^s is the true value of the i-th sample in the power source data; g(·) is the decoder function, used for reconstructing input data from the encoded features; h(·) is the encoder function, which maps the power source data to a low-dimensional feature space; ||x_i^s − g(h(x_i^s))||² represents the reconstruction error; λ_1 and λ_2 are weight adjustment parameters used for balancing the importance of the reconstruction error and the regularization terms; W is the weight matrix of the automatic encoder, used for mapping the feature representation of the power source data to the low-dimensional feature space and back to the original feature space; ||W||_F² is the regularization term, the square of the Frobenius norm of the weight matrix, used for controlling the size of the weights to prevent overfitting; x̃_i^s is the value of the i-th sample of the noisy power source data.
9. The heterogeneous data migration method of the power information system according to claim 8, wherein the loss function of the domain countermeasure network when the domain countermeasure network is trained in the step 6 is expressed using the following formula:
$$ L_{adv} = -\frac{1}{n_s}\sum_{i=1}^{n_s} \log A\left(G(x_i^s)\right) - \frac{1}{n_t}\sum_{j=1}^{n_t} \log\left(1 - A\left(G(x_j^t)\right)\right) $$

wherein A represents the domain countermeasure network; x_j^t is the true value of the j-th sample in the mapped power target data.
10. The heterogeneous data migration method of the power information system according to claim 9, wherein the migration function preset in step 7 is a nonlinear mapping model based on a deep neural network The method comprises the steps of carrying out a first treatment on the surface of the The loss value of the migration function is expressed using the following formula:
$$ L_{M} = \frac{1}{n_s}\sum_{i=1}^{n_s} \left\| M\left(f_i^s\right) - f_i^t \right\|^2 + \lambda \sum_{l=1}^{L} \left\| W_l \right\|_F^2 $$

wherein M represents the nonlinear mapping model; L is the number of layers of the nonlinear mapping model M; W_l represents the l-th layer weight matrix of the nonlinear mapping model M; λ is a weight regularization parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311239407.5A CN117131022B (en) | 2023-09-25 | 2023-09-25 | Heterogeneous data migration method of electric power information system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117131022A true CN117131022A (en) | 2023-11-28 |
CN117131022B CN117131022B (en) | 2024-03-29 |
Family
ID=88854609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311239407.5A Active CN117131022B (en) | 2023-09-25 | 2023-09-25 | Heterogeneous data migration method of electric power information system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117131022B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117435916A (en) * | 2023-12-18 | 2024-01-23 | 四川云实信息技术有限公司 | Self-adaptive migration learning method in aerial photo AI interpretation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111444432A (en) * | 2020-04-01 | 2020-07-24 | 中国科学技术大学 | Domain-adaptive deep knowledge tracking and personalized exercise recommendation method |
CN113344044A (en) * | 2021-05-21 | 2021-09-03 | 北京工业大学 | Cross-species medical image classification method based on domain self-adaptation |
CN115422994A (en) * | 2022-08-03 | 2022-12-02 | 北京交通大学 | Cross-city time sequence data migration prediction method and system |
CN116028876A (en) * | 2022-09-20 | 2023-04-28 | 北京工业大学 | Rolling bearing fault diagnosis method based on transfer learning |
Non-Patent Citations (2)
Title |
---|
王光军: "多源领域自适应方法研究与应用", 中国优秀硕士学位论文全文数据库信息科技辑, no. 1, 15 January 2022 (2022-01-15), pages 138 - 2550 * |
许鹏: "迁移和协同学习新方法研究", 中国优秀硕士学位论文全文数据库信息科技辑, no. 1, 15 January 2021 (2021-01-15), pages 140 - 287 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117435916A (en) * | 2023-12-18 | 2024-01-23 | 四川云实信息技术有限公司 | Self-adaptive migration learning method in aerial photo AI interpretation |
CN117435916B (en) * | 2023-12-18 | 2024-03-12 | 四川云实信息技术有限公司 | Self-adaptive migration learning method in aerial photo AI interpretation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106895975B (en) | Bearing fault diagnosis method based on Stacked SAE deep neural network | |
CN114297936A (en) | Data anomaly detection method and device | |
CN106778863A (en) | The warehouse kinds of goods recognition methods of dictionary learning is differentiated based on Fisher | |
CN117131022B (en) | Heterogeneous data migration method of electric power information system | |
Zhang et al. | A novel data-driven method based on sample reliability assessment and improved CNN for machinery fault diagnosis with non-ideal data | |
CN112784920A (en) | Cloud-side-end-coordinated dual-anti-domain self-adaptive fault diagnosis method for rotating part | |
CN111428201A (en) | Prediction method for time series data based on empirical mode decomposition and feedforward neural network | |
WO2023231374A1 (en) | Semi-supervised fault detection and analysis method and apparatus for mechanical device, terminal, and medium | |
CN112418476A (en) | Ultra-short-term power load prediction method | |
CN117131449A (en) | Data management-oriented anomaly identification method and system with propagation learning capability | |
CN117596191A (en) | Power Internet of things abnormality detection method, device and storage medium | |
CN116738297B (en) | Diabetes typing method and system based on depth self-coding | |
CN117909881A (en) | Fault diagnosis method and device for multi-source data fusion pumping unit | |
CN116776209A (en) | Method, system, equipment and medium for identifying operation state of gateway metering device | |
CN116994040A (en) | Image recognition-based deep sea wind power generation PQDs (pulse-height distribution system) classification method and system | |
CN115034314A (en) | System fault detection method and device, mobile terminal and storage medium | |
CN113835964B (en) | Cloud data center server energy consumption prediction method based on small sample learning | |
CN115358473A (en) | Power load prediction method and prediction system based on deep learning | |
CN116521863A (en) | Tag anti-noise text classification method based on semi-supervised learning | |
CN115545104A (en) | KPI (Key Performance indicator) anomaly detection method, system and medium based on functional data analysis | |
CN105654128A (en) | Kernel norm regularized low-rank coding-based fan blade image fault identification method | |
CN111696070A (en) | Multispectral image fusion power internet of things fault point detection method based on deep learning | |
CN117798654B (en) | Intelligent adjusting system for center of steam turbine shafting | |
CN117649387B (en) | Defect detection method suitable for object with complex surface texture | |
CN118509121B (en) | Big data transmission method and system based on hybrid distribution estimation algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||