CN116304721A - Data standard making method and system for big data management based on data category - Google Patents


Info

Publication number
CN116304721A
CN116304721A (application number CN202310588344.8A)
Authority
CN
China
Prior art keywords
data
processed
class
standardized
meta
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310588344.8A
Other languages
Chinese (zh)
Inventor
丁勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xijia Chuangzhi Data Technology Co., Ltd.
Original Assignee
Beijing Xijia Chuangzhi Data Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xijia Chuangzhi Data Technology Co., Ltd.
Priority to CN202310588344.8A
Publication of CN116304721A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and system for formulating data standards in big data governance based on data categories, relating to the technical field of data analysis. The method comprises: establishing a data class library; obtaining a standard meta-feature set for each data class; training a standardized model corresponding to each data class; acquiring data to be processed that requires data governance; acquiring all attribute features of the data to be processed; obtaining a distance coefficient between the data to be processed and each data class; determining the data class with the smallest distance coefficient to the data to be processed; judging whether the optimal distance coefficient is smaller than a preset distance value; and, for data to be processed that is judged to belong to the best-fit data class, performing data standardization by calling the standardized model of that class. The invention realizes intelligent classification and identification of data during data governance and performs standardized conversion of data according to data category, which effectively shortens the time consumed by data classification during governance and enables rapid, efficient digitization of enterprise data.

Description

Data standard making method and system for big data management based on data category
Technical Field
The invention relates to the technical field of data analysis, and in particular to a method and system for formulating data standards in big data governance based on data categories.
Background
With the development of global informatization and digitalization, industries of all kinds are carrying out, or are about to carry out, data governance. According to the mainstream understanding of data governance both in China and abroad, data standardization is a very important component of data governance.
In the data age, the rational use of data faces a number of problems. Enterprise decision-making and operations depend on data; however, during data governance there usually exist many categories of data, arising from changes of era, industry transitions, and the like. Each category of data has its own governance standard, and establishing the relationship between data and governance standards by manual identification costs a great deal of time and effort.
Disclosure of Invention
In order to solve the above technical problems, the present scheme addresses the following situation: during data governance there usually exist many categories of data, arising from changes of era, industry transitions, and the like; each category has its own governance standard, and establishing the relationship between data and governance standards by manual identification costs a great deal of time and effort.
To this end, the invention adopts the following technical scheme:
A data standard formulation method in big data governance based on data category includes:
establishing a data class library based on the data classes involved in the big data governance field, wherein the data class library comprises a plurality of data classes;
for the attributes of each data class, extracting the corresponding attribute features and establishing a data feature set to obtain a standard meta-feature set for each data class;
for each data class, training a standardized model corresponding to that class, wherein the standardized model takes metadata of the data class as input and standardized data of the data class as output;
acquiring data to be processed that requires data governance;
performing data feature extraction on the data to be processed to obtain all of its attribute features, and combining all attribute features of the data to be processed into a meta-feature set to be processed;
performing a distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain a distance coefficient between the data to be processed and each data class;
determining the data class with the smallest distance coefficient to the data to be processed, recording that data class as the best-fit data class, and recording its distance coefficient as the optimal distance coefficient;
judging whether the optimal distance coefficient is smaller than a preset distance value; if so, judging that the data to be processed belongs to the best-fit data class, and if not, judging that the data to be processed is new-class data;
uploading the data to be processed that is judged to be new-class data to the data background;
and, for the data to be processed that is judged to belong to the best-fit data class, calling the standardized model of the best-fit data class and inputting the metadata of the data to be processed into that model to obtain the standardized data of the data to be processed.
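The classify-then-dispatch decision described above can be sketched in code as follows. The distance measure (a simple symmetric-difference count), the preset threshold, and the example class library are illustrative assumptions rather than the patent's exact implementation.

```python
# Sketch of the classify-then-dispatch flow: find the nearest data class,
# compare against a preset distance value, and either match or queue as new.

DISTANCE_PRESET = 1  # hypothetical preset distance value

def distance(features, standard_features):
    """Count of attribute features not shared by the two meta-feature sets."""
    return len(features ^ standard_features)

def dispatch(features, class_library, backlog):
    """Match data to its best-fit class, or queue it as new-class data."""
    best_class, best_coeff = None, float("inf")
    for name, standard_set in class_library.items():
        coeff = distance(features, standard_set)
        if coeff < best_coeff:
            best_class, best_coeff = name, coeff
    if best_coeff < DISTANCE_PRESET:
        # best-fit class found: its standardized model would be called here
        return best_class, best_coeff
    backlog.append(features)  # new-class data: upload to the data background
    return None, best_coeff

library = {"order": {"id", "amount", "date"}, "user": {"id", "name", "email"}}
backlog = []
cls, coeff = dispatch({"id", "amount", "date"}, library, backlog)  # exact match
new_cls, new_coeff = dispatch({"foo", "bar"}, library, backlog)    # unknown class
```

The backlog plays the role of the "data background" to which new-class data is uploaded for manual handling.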
Preferably, training the standardized model corresponding to each data class specifically comprises training the standardized model for each data class with a neural network model.
Preferably, the standardized model includes:
an input layer, used for receiving an input value, the input value being metadata of the data to be processed;
a data layer, used for vectorizing and concatenating the metadata of the data to be processed to obtain vectorized data;
a reorganization layer, used for reorganizing the vectorized data to obtain reorganized data;
a transformation layer, used for performing a high-dimensional transformation on the vectorized data to obtain high-dimensional data;
an activation layer, used for performing a nonlinear mapping on the high-dimensional data to obtain activation data;
a splicing layer, used for splicing the reorganized data and the activation data to obtain spliced data;
a scaling layer, used for normalizing the spliced data to obtain normalized data;
a remapping layer, used for remapping the normalized data to obtain an output value;
and an output layer, used for outputting the output value, the output value being the standardized data of the data to be processed.
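The layer sequence above can be illustrated with a minimal forward pass in pure Python. Every concrete operation here — the deterministic toy field encoding, reversal as the reorganization step, random projection weights standing in for learned parameters, ReLU activation, z-score scaling, and a fixed affine remap — is an assumption for illustration; the patent does not specify the operations inside each layer.

```python
import math
import random

random.seed(0)  # fixed seed so the random projection is repeatable

def data_layer(metadata):
    """Vectorize each metadata field deterministically and concatenate."""
    return [sum(ord(c) for c in str(k) + str(v)) % 97 / 97.0
            for k, v in sorted(metadata.items())]

def reorganization_layer(vec):
    """Reorganize the vectorized data (simple reversal as a stand-in)."""
    return vec[::-1]

def transformation_layer(vec, out_dim=8):
    """Project into a higher dimension; random weights stand in for learned ones."""
    weights = [[random.uniform(-1, 1) for _ in vec] for _ in range(out_dim)]
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def activation_layer(vec):
    """Nonlinear mapping (ReLU)."""
    return [max(0.0, x) for x in vec]

def scaling_layer(vec):
    """Normalize the spliced data to zero mean and unit variance."""
    mean = sum(vec) / len(vec)
    var = sum((x - mean) ** 2 for x in vec) / len(vec)
    std = math.sqrt(var) or 1.0  # guard against all-equal input
    return [(x - mean) / std for x in vec]

def forward(metadata):
    vec = data_layer(metadata)                       # data layer
    reorganized = reorganization_layer(vec)          # reorganization layer
    activated = activation_layer(transformation_layer(vec))  # transform + activate
    spliced = reorganized + activated                # splicing layer
    normalized = scaling_layer(spliced)              # scaling layer
    return [0.5 * x + 0.5 for x in normalized]       # remapping layer (affine)

out = forward({"field": "value", "unit": "kg"})
```

With a 2-field input and an 8-dimensional transformation, the spliced vector (and hence the output) has 10 components.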
Preferably, the neural network model includes:
a loss layer, used for determining, based on the loss function, the neural network model with the minimum loss function value as the standardized model corresponding to the data class.
Preferably, training the standardized model corresponding to each data class with the neural network model specifically includes:
acquiring a plurality of training sample metadata corresponding to the data class;
adding standardized values to the training sample metadata based on the standardization logic corresponding to the data class to obtain training standardized values;
inputting the plurality of training sample metadata into the input layer and, taking the training standardized values as the preset output, training the node connection relationships among the data layer, reorganization layer, transformation layer, activation layer, splicing layer, scaling layer and remapping layer to obtain a plurality of preliminary training models;
acquiring a plurality of test sample metadata corresponding to the data class;
adding standardized values to the test sample metadata based on the standardization logic corresponding to the data class to obtain real test standardized values;
inputting the plurality of test sample metadata into each preliminary training model to obtain the predicted test standardized value output by each preliminary training model;
the loss layer determining a loss function value for each preliminary training model based on the loss function, and screening out the minimum loss function value;
judging whether the minimum loss function value is smaller than a first preset value; if so, taking the preliminary training model corresponding to the minimum loss function value as the standardized model for the data class; if not, judging that training of the standardized model for the data class has failed, acquiring a plurality of test sample metadata corresponding to the data class again, and retraining the standardized model for the data class.
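The select-or-retrain decision in the last step can be sketched as follows. The toy candidate models, the MSE loss, and the `first_preset` value of 0.1 are illustrative assumptions; in the patent the candidates would be the preliminary neural network models and the loss function is class-specific.

```python
# Sketch of the loss-layer selection step: evaluate every preliminary model on
# the test samples, keep the one with the smallest loss if it beats the preset.

def select_standardized_model(candidates, test_samples, loss_fn, first_preset=0.1):
    """Return (model, loss) for the best candidate, or (None, loss) to retrain."""
    losses = []
    for model in candidates:
        total = sum(loss_fn(model(x), y) for x, y in test_samples)
        losses.append(total / len(test_samples))
    best = min(range(len(losses)), key=losses.__getitem__)
    if losses[best] < first_preset:
        return candidates[best], losses[best]  # standardized model found
    return None, losses[best]                  # training failed: retrain

# Toy "preliminary models": scale the metadata value; the target logic doubles it.
candidates = [lambda x, k=k: k * x for k in (1.5, 2.0, 2.5)]
samples = [(1.0, 2.0), (2.0, 4.0)]  # (metadata, real standardized value)
mse = lambda pred, target: (pred - target) ** 2
model, loss = select_standardized_model(candidates, samples, mse)
```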
Preferably, performing the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class specifically includes:
obtaining the number of elements in the meta-feature set to be processed;
obtaining the number of elements in the standard meta-feature set of each data class;
determining the number of identical elements shared by the meta-feature set to be processed and the standard meta-feature set of each data class;
and calculating the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of each data class based on a distance coefficient calculation formula;
wherein the distance coefficient calculation formula is given in the original publication as an image; in the formula, D_i is the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of the i-th data class, n_0 is the number of elements in the meta-feature set to be processed, n_i is the number of elements in the standard meta-feature set of the i-th data class, m_i is the number of identical elements shared by the meta-feature set to be processed and the standard meta-feature set of the i-th data class, and N is the total number of data classes included in the data class library.
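Because the distance coefficient formula itself is reproduced only as an image, the sketch below substitutes a plausible Dice-style set-overlap distance built from the same quantities the text defines (the element counts of the two sets, the count of shared elements, and the class total). It is a stand-in, not the patent's published formula.

```python
# Assumed stand-in for the image-only distance coefficient formula: a
# Dice-style overlap distance over the same quantities the text names.

def distance_coefficient(pending, standard, num_classes):
    """Smaller values mean the two meta-feature sets are closer."""
    n0 = len(pending)             # elements in the meta-feature set to be processed
    ni = len(standard)            # elements in the class's standard meta-feature set
    mi = len(pending & standard)  # identical elements shared by the two sets
    # num_classes (N) figures in the published formula; unused in this stand-in.
    if n0 + ni == 0:
        return 1.0
    return 1.0 - 2.0 * mi / (n0 + ni)

d = distance_coefficient({"name", "date", "amount"}, {"name", "date", "unit"}, 5)
```

Identical sets give a coefficient of 0, and disjoint sets give 1, matching the "smaller is closer" reading of the text.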
Furthermore, a data standard formulation system in big data governance based on data category is provided, for implementing the above method, comprising:
a memory, used for storing the data class library and the standard meta-feature set of each data class;
a model training module, used for training, for each data class, the standardized model corresponding to that class;
a processor, electrically connected to the memory and the model training module, used for performing data feature extraction on the data to be processed to obtain all of its attribute features, combining those attribute features into a meta-feature set to be processed, performing the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain the distance coefficient between the data to be processed and each data class, judging whether the optimal distance coefficient is smaller than the preset distance value, and obtaining the standardized data of the data to be processed;
an input module, electrically connected to the processor, used for inputting the data to be processed;
and an output module, electrically connected to the processor, used for outputting the standardized data of the data to be processed.
Optionally, a matching module and a data standardization module are integrated in the processor.
Optionally, the matching module includes:
a data attribute analysis unit, used for analyzing the attribute features of the data to be processed, performing data feature extraction on the data to be processed to obtain all of its attribute features, and combining those attribute features into the meta-feature set to be processed;
a calculation unit, used for performing the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain the distance coefficient between the data to be processed and each data class;
and a screening unit, used for determining the data class with the smallest distance coefficient to the data to be processed.
Optionally, the data standardization module includes:
a judging unit, used for judging whether the optimal distance coefficient is smaller than the preset distance value;
and a standardization processing unit, used for calling the standardized model of the best-fit data class and inputting the metadata of the data to be processed into that model to obtain the standardized data of the data to be processed.
Compared with the prior art, the invention has the following beneficial effects:
The invention provides a data standard formulation scheme for big data governance based on data category. A data class library is established in advance for the data classes involved in the governance process, containing all data classes that exist during data governance, and for the attributes of each data class a standard meta-feature set and a corresponding standardized model are established. When data is governed, its data class is rapidly identified through data feature comparison, the data to be processed is then standardized based on the standardized model corresponding to that class, and the standardized data is output to a back-end analysis port for unified measurement and analysis.
Drawings
FIG. 1 is a block diagram of a data standard making system in big data management based on data category;
FIG. 2 is a flow chart of a method for formulating data standards in big data management based on data categories;
FIG. 3 is a flow chart of a method for training a standardized model using a neural network model in accordance with the present invention;
FIG. 4 is a flow chart of a method for calculating the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of each data class according to the present invention.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention. The preferred embodiments in the following description are by way of example only and other obvious variations will occur to those skilled in the art.
Referring to FIG. 1, a data standard formulation system in big data governance based on data category includes:
a memory, used for storing the data class library and the standard meta-feature set of each data class;
a model training module, used for training, for each data class, the standardized model corresponding to that class;
a processor, electrically connected to the memory and the model training module, used for performing data feature extraction on the data to be processed to obtain all of its attribute features, combining those attribute features into a meta-feature set to be processed, performing the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain the distance coefficient between the data to be processed and each data class, judging whether the optimal distance coefficient is smaller than the preset distance value, and obtaining the standardized data of the data to be processed;
an input module, electrically connected to the processor, used for inputting the data to be processed;
and an output module, electrically connected to the processor, used for outputting the standardized data of the data to be processed.
A matching module and a data standardization module are integrated in the processor.
The matching module comprises:
a data attribute analysis unit, used for analyzing the attribute features of the data to be processed, performing data feature extraction to obtain all of its attribute features, and combining those attribute features into the meta-feature set to be processed;
a calculation unit, used for performing the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain the distance coefficient between the data to be processed and each data class;
and a screening unit, used for determining the data class with the smallest distance coefficient to the data to be processed.
The data standardization module comprises:
a judging unit, used for judging whether the optimal distance coefficient is smaller than the preset distance value;
and a standardization processing unit, used for calling the standardized model of the best-fit data class and inputting the metadata of the data to be processed into that model to obtain the standardized data of the data to be processed.
The working process of the data standard formulation system in big data governance based on data category is as follows:
Step 1: establish the data class library based on the data classes involved in the big data governance field; for the attributes of each data class, extract the corresponding attribute features and establish the data feature set; and input the data class library and the standard meta-feature set of each data class into the memory for storage through the input module;
Step 2: the model training module trains, for each data class, the standardized model corresponding to that class;
Step 3: input the data to be processed that requires data governance through the input module;
Step 4: the data attribute analysis unit analyzes the attribute features of the data to be processed, performs data feature extraction to obtain all of its attribute features, and combines those attribute features into the meta-feature set to be processed;
Step 5: the calculation unit performs the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain the distance coefficient between the data to be processed and each data class;
Step 6: the screening unit determines the data class with the smallest distance coefficient to the data to be processed, and records that distance coefficient as the optimal distance coefficient;
Step 7: the judging unit judges whether the optimal distance coefficient is smaller than the preset distance value;
Step 8: according to the judgment result, if the optimal distance coefficient is smaller than the preset distance value, the standardization processing unit calls the standardized model of the data class corresponding to the optimal distance coefficient and inputs the metadata of the data to be processed into that model to obtain the standardized data of the data to be processed, which the output module then outputs; otherwise, the output module outputs the data to be processed to the data background, where background staff handle it.
Still further, referring to FIG. 2, a data standard formulation method in big data governance based on data category is provided, comprising:
establishing a data class library based on the data classes involved in the big data governance field, wherein the data class library comprises a plurality of data classes;
for the attributes of each data class, extracting the corresponding attribute features and establishing a data feature set to obtain a standard meta-feature set for each data class;
for each data class, training a standardized model corresponding to that class, wherein the standardized model takes metadata of the data class as input and standardized data of the data class as output;
acquiring data to be processed that requires data governance;
performing data feature extraction on the data to be processed to obtain all of its attribute features, and combining all attribute features of the data to be processed into a meta-feature set to be processed;
performing a distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class to obtain a distance coefficient between the data to be processed and each data class;
determining the data class with the smallest distance coefficient to the data to be processed, recording that data class as the best-fit data class, and recording its distance coefficient as the optimal distance coefficient;
judging whether the optimal distance coefficient is smaller than a preset distance value; if so, judging that the data to be processed belongs to the best-fit data class, and if not, judging that the data to be processed is new-class data;
uploading the data to be processed that is judged to be new-class data to the data background;
and, for the data to be processed that is judged to belong to the best-fit data class, calling the standardized model of the best-fit data class and inputting the metadata of the data to be processed into that model to obtain the standardized data of the data to be processed.
In the method, a data class library is established in advance for the data classes involved in the governance process, containing all data classes that exist during data governance; for the attributes of each data class, a standard meta-feature set and a corresponding standardized model are established. When data is governed, its data class is rapidly identified through data feature comparison, the data to be processed is standardized based on the standardized model corresponding to that class, and the standardized data is output to a back-end analysis port for unified measurement and analysis, thereby realizing a rapid data standardization process during data governance.
For each data class, training the standardized model corresponding to that class specifically comprises training it with a neural network model.
The standardized model includes:
an input layer, used for receiving an input value, the input value being metadata of the data to be processed;
a data layer, used for vectorizing and concatenating the metadata of the data to be processed to obtain vectorized data;
a reorganization layer, used for reorganizing the vectorized data to obtain reorganized data;
a transformation layer, used for performing a high-dimensional transformation on the vectorized data to obtain high-dimensional data;
an activation layer, used for performing a nonlinear mapping on the high-dimensional data to obtain activation data;
a splicing layer, used for splicing the reorganized data and the activation data to obtain spliced data;
a scaling layer, used for normalizing the spliced data to obtain normalized data;
a remapping layer, used for remapping the normalized data to obtain an output value;
and an output layer, used for outputting the output value, the output value being the standardized data of the data to be processed.
A neural network is an algorithmic mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed parallel information processing. Such a network relies on the complexity of the system and achieves information processing by adjusting the interconnection relationships among a large number of internal nodes.
In this scheme, the data standardization processing model is trained on the basis of a neural network model, so that the data standardization scheme can be determined rapidly, the standardization logic for the data can be converted into digital form quickly, and an accurate calculation model is provided for the subsequent standardization process of data governance.
The neural network model includes:
a loss layer, used for determining, based on the loss function, the neural network model with the minimum loss function value as the standardized model corresponding to the data class.
Referring to FIG. 3, training the standardized model corresponding to each data class with the neural network model specifically includes:
acquiring a plurality of training sample metadata corresponding to the data class;
adding standardized values to the training sample metadata based on the standardization logic corresponding to the data class to obtain training standardized values;
inputting the plurality of training sample metadata into the input layer and, taking the training standardized values as the preset output, training the node connection relationships among the data layer, reorganization layer, transformation layer, activation layer, splicing layer, scaling layer and remapping layer to obtain a plurality of preliminary training models;
acquiring a plurality of test sample metadata corresponding to the data class;
adding standardized values to the test sample metadata based on the standardization logic corresponding to the data class to obtain real test standardized values;
inputting the plurality of test sample metadata into each preliminary training model to obtain the predicted test standardized value output by each preliminary training model;
the loss layer determining a loss function value for each preliminary training model based on the loss function, and screening out the minimum loss function value;
judging whether the minimum loss function value is smaller than a first preset value; if so, taking the preliminary training model corresponding to the minimum loss function value as the standardized model for the data class; if not, judging that training of the standardized model for the data class has failed, acquiring a plurality of test sample metadata corresponding to the data class again, and retraining the standardized model for the data class.
In a neural network, the loss function serves as the objective function used to evaluate the difference between the model's predictions and the target values. The smaller the value of the loss function, the closer the model's predictions are to the target values and the higher the model's accuracy; the larger the value of the loss function, the further the predictions are from the target values and the lower the accuracy.
It will be appreciated that the standardized models for different data classes have different loss functions. For example, for binary-class data the loss function, given in the original publication as an image, is evaluated over the standardized value predicted by the model and the real standardized value of the data. The preliminary training model whose loss function value is minimal is taken as the standardized model corresponding to the data class, so that the trained standardized model can standardize the data effectively and accurately.
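As a concrete example of such a loss, the sketch below implements binary cross-entropy, a common choice for binary-class targets; whether it matches the patent's image-only formula is an assumption.

```python
import math

# Binary cross-entropy over the model-predicted standardized value and the
# real standardized value; smaller values mean a more accurate model.

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Loss shrinks as the predicted standardized value nears the real one."""
    y_pred = min(max(y_pred, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y_true * math.log(y_pred) + (1.0 - y_true) * math.log(1.0 - y_pred))

good = binary_cross_entropy(1.0, 0.9)  # prediction close to the real value
bad = binary_cross_entropy(1.0, 0.1)   # prediction far from the real value
```

The two evaluations illustrate the property the text relies on: a model whose predictions are near the target produces a much smaller loss value.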
Referring to FIG. 4, performing the distance calculation between the meta-feature set to be processed and the standard meta-feature set of each data class specifically includes:
obtaining the number of elements in the meta-feature set to be processed;
obtaining the number of elements in the standard meta-feature set of each data class;
determining the number of identical elements shared by the meta-feature set to be processed and the standard meta-feature set of each data class;
and calculating the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of each data class based on a distance coefficient calculation formula;
wherein the distance coefficient calculation formula is given in the original publication as an image; in the formula, D_i is the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of the i-th data class, n_0 is the number of elements in the meta-feature set to be processed, n_i is the number of elements in the standard meta-feature set of the i-th data class, m_i is the number of identical elements shared by the meta-feature set to be processed and the standard meta-feature set of the i-th data class, and N is the total number of data classes included in the data class library.
By calculating the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of each data class, the smaller the distance coefficient, the closer the meta-feature set to be processed is to the standard meta-feature set of that data class. When the distance coefficient is smaller than a certain value, the data to be processed can be determined to belong to the corresponding data class and is then processed according to the standardized processing model of that class, thereby realizing an intelligent classification and standardization process for the data to be processed.
In summary, the invention has the following advantages: it realizes intelligent classification and identification of data during data governance and performs standardized data conversion according to data class, which effectively shortens the time consumed by data classification in data governance, improves governance efficiency, and enables enterprises to digitize their data quickly and efficiently.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions are merely illustrative of its principles, and various changes and modifications may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. The method for formulating the data standard in big data management based on the data category is characterized by comprising the following steps:
establishing a data class library based on data classes related to the big data management field, wherein the data class library comprises a plurality of data classes;
extracting attribute characteristics corresponding to the data classes according to the attributes of each data class, and establishing a data characteristic set to obtain a standard meta characteristic set of each data class;
for each data class, training a standardized model corresponding to the data class, wherein the standardized model takes metadata of the data class as input and standardized processing data of the data class as output;
acquiring data to be processed which needs to be subjected to data management;
extracting data characteristics of the data to be processed to obtain all attribute characteristics of the data to be processed, and combining all attribute characteristics of the data to be processed into a meta-characteristic set to be processed;
performing distance calculation on the meta-feature set to be processed and the standard meta-feature set of each data class to obtain a distance coefficient between the data to be processed and each data class;
determining the data class with the smallest distance coefficient with the data to be processed, recording the corresponding data class as the optimal fit data class, and recording the distance coefficient corresponding to the optimal fit data class as the optimal distance coefficient;
judging whether the optimal distance coefficient is smaller than a distance preset value; if so, judging that the data to be processed belongs to the optimal-fit data class, and if not, judging that the data to be processed is new-class data;
uploading the data to be processed, which is judged to be the new class data, to a data background;
and for the data to be processed, which is judged to be the optimal data class, a standardized model of the optimal data class is called, and metadata of the data to be processed is input into the standardized model of the optimal data class to obtain standardized data of the data to be processed.
2. The method for formulating the data standard in big data management based on the data category of claim 1, wherein training the standardized model corresponding to each data category is specifically training the standardized model corresponding to each data category by using a neural network model.
3. The method for formulating the data standard in big data governance based on the data category of claim 2, wherein the standardized model comprises:
the input layer is used for inputting an input value, and the input value is metadata of data to be processed;
the data layer is used for vectorizing and splicing metadata of the data to be processed to obtain vectorized data;
the recombination layer is used for carrying out recombination design on the vectorized data to obtain recombination data;
the transformation layer is used for carrying out high-dimensional transformation on the vectorized data to obtain high-dimensional data;
the activation layer is used for carrying out nonlinear mapping on the high-dimensional data to obtain activation data;
the splicing layer is used for carrying out splicing processing on the reconfiguration data and the activation data to obtain splicing data;
the scaling layer is used for normalizing the spliced data to obtain normalized data;
the remapping layer is used for remapping the normalized data to obtain an output value;
the output layer is used for outputting an output value, and the output value is standardized processing data of the data to be processed.
4. A method for formulating data standards in big data governance based on data categories according to claim 3, wherein said neural network model comprises:
and the loss layer is used for determining a neural network model with the minimum loss function based on the loss function as a standardized model corresponding to the data class.
5. The method for formulating the data standard in big data governance based on data class as claimed in claim 4, wherein training the standardized model corresponding to each data class by using the neural network model specifically comprises:
acquiring a plurality of training sample metadata corresponding to the data class;
adding a standardized value to the metadata of the training sample based on standardized logic corresponding to the data class to obtain a training standardized value;
inputting the plurality of training sample metadata into the input layer, taking the training standardized values as the preset output, and training the node connection relations among the data layer, reorganization layer, transformation layer, activation layer, splicing layer, scaling layer and remapping layer to obtain a plurality of preliminary training models;
acquiring a plurality of test sample metadata corresponding to the data class;
adding a standardized value to the test sample metadata based on the standardized logic corresponding to the data class to obtain a test real standardized value;
inputting a plurality of test sample metadata into each preliminary training model to obtain a test prediction standardization value output by each preliminary training model;
the loss layer determines a loss function value of each preliminary training model based on the loss function, and screens out a minimum loss function value;
judging whether the minimum loss function value is smaller than a first preset value; if so, taking the preliminary training model corresponding to the minimum loss function value as the standardized model of the data class; if not, judging that training of the standardized model of the data class has failed, acquiring a plurality of test sample metadata corresponding to the data class again, and retraining the standardized model of the data class.
6. The method for formulating the data standard in big data governance based on data category as set forth in claim 5, wherein the calculating the distance between the to-be-processed meta-feature set and the standard meta-feature set of each data category specifically includes:
obtaining the number of elements in the meta-feature set to be processed;
obtaining the number of elements in the standard meta-feature set of each data class;
determining the number of identical elements in the meta-feature set to be processed and the standard meta-feature set of each data class;
calculating a distance coefficient between the to-be-processed meta-feature set and the standard meta-feature set of each data class based on a distance coefficient calculation formula;
wherein, the distance coefficient calculation formula is:

D_i = 1 − c_i / (a + b_i − c_i), i = 1, 2, …, n

where D_i is the distance coefficient between the meta-feature set to be processed and the standard meta-feature set of the i-th data class, a is the number of elements in the meta-feature set to be processed, b_i is the number of elements in the standard meta-feature set of the i-th data class, c_i is the number of identical elements in the meta-feature set to be processed and the standard meta-feature set of the i-th data class, and n is the total number of data classes included in the data class library.
7. A data standard formulation system for big data governance based on data class, for implementing the method for formulating the data standard for big data governance based on data class according to any one of claims 1 to 6, comprising:
a memory for storing a database of data categories and a set of standard meta-features for each data category;
the model training module is used for training a standardized model corresponding to each data class aiming at each data class;
the processor is electrically connected with the memory and the model training module and is used for carrying out data feature extraction on data to be processed to obtain all attribute features of the data to be processed, combining all attribute features of the data to be processed into a meta-feature set to be processed, carrying out distance calculation on the meta-feature set to be processed and a standard meta-feature set of each data class to obtain a distance coefficient between the data to be processed and each data class, judging whether the optimal distance coefficient is smaller than a distance preset value, and obtaining standardized data of the data to be processed;
the input module is electrically connected with the processor and is used for inputting data to be processed;
the output module is electrically connected with the processor and is used for outputting standardized data of the data to be processed.
8. The system for formulating data standards in big data governance based on data class of claim 7, wherein said processor has a matching module and a data normalization module integrated therein.
9. The system for formulating data standards in big data governance based on data class of claim 8, wherein said matching module comprises:
the data attribute analysis unit is used for analyzing attribute characteristics of the data to be processed, extracting the data characteristics of the data to be processed, obtaining all attribute characteristics of the data to be processed, and combining all attribute characteristics of the data to be processed into a meta-characteristic set to be processed;
the computing unit is used for carrying out distance computation on the meta-feature set to be processed and the standard meta-feature set of each data class to obtain a distance coefficient between the data to be processed and each data class;
and the screening unit is used for determining the data class with the smallest distance coefficient with the data to be processed.
10. The system for formulating data standards in big data governance based on data categories of claim 9, wherein said data normalization module comprises:
the judging unit is used for judging whether the optimal distance coefficient is smaller than a distance preset value or not;
the standardized processing unit is used for calling a standardized model of the optimal data class, inputting metadata of the data to be processed into the standardized model of the optimal data class, and obtaining standardized data of the data to be processed.
CN202310588344.8A 2023-05-24 2023-05-24 Data standard making method and system for big data management based on data category Pending CN116304721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310588344.8A CN116304721A (en) 2023-05-24 2023-05-24 Data standard making method and system for big data management based on data category

Publications (1)

Publication Number Publication Date
CN116304721A true CN116304721A (en) 2023-06-23

Family

ID=86820742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310588344.8A Pending CN116304721A (en) 2023-05-24 2023-05-24 Data standard making method and system for big data management based on data category

Country Status (1)

Country Link
CN (1) CN116304721A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117251745A (en) * 2023-11-17 2023-12-19 山东顺国电子科技有限公司 Deep learning big data intelligent standard management method, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090031882A1 (en) * 2004-07-09 2009-02-05 Sony Deutschland Gmbh Method for Classifying Music
CN109031954A (en) * 2018-08-03 2018-12-18 北京深度奇点科技有限公司 Method, welding method and equipment are determined based on the welding parameter of intensified learning
CN111078639A (en) * 2019-12-03 2020-04-28 望海康信(北京)科技股份公司 Data standardization method and device and electronic equipment
CN111190881A (en) * 2019-11-13 2020-05-22 深圳市华傲数据技术有限公司 Data management method and system


Similar Documents

Publication Publication Date Title
CN108549954B (en) Risk model training method, risk identification device, risk identification equipment and risk identification medium
US11650968B2 (en) Systems and methods for predictive early stopping in neural network training
CN109492230B (en) Method for extracting insurance contract key information based on interested text field convolutional neural network
WO2023226423A1 (en) Auxiliary chip design method and apparatus, device and nonvolatile storage medium
CN111931809A (en) Data processing method and device, storage medium and electronic equipment
CN116132104A (en) Intrusion detection method, system, equipment and medium based on improved CNN-LSTM
CN113554175B (en) Knowledge graph construction method and device, readable storage medium and terminal equipment
CN116304721A (en) Data standard making method and system for big data management based on data category
CN113608916A (en) Fault diagnosis method and device, electronic equipment and storage medium
CN112651296A (en) Method and system for automatically detecting data quality problem without prior knowledge
CN114897085A (en) Clustering method based on closed subgraph link prediction and computer equipment
CN113066528B (en) Protein classification method based on active semi-supervised graph neural network
CN117131449A (en) Data management-oriented anomaly identification method and system with propagation learning capability
CN117131348A (en) Data quality analysis method and system based on differential convolution characteristics
CN117392406A (en) Low-bit-width mixed precision quantization method for single-stage real-time target detection model
CN117009518A (en) Similar event judging method integrating basic attribute and text content and application thereof
CN113469237B (en) User intention recognition method, device, electronic equipment and storage medium
CN112906824B (en) Vehicle clustering method, system, device and storage medium
CN114077663A (en) Application log analysis method and device
CN110728615B (en) Steganalysis method based on sequential hypothesis testing, terminal device and storage medium
CN117194963B (en) Industrial FDC quality root cause analysis method, device and storage medium
CN112035338B (en) Coverage rate calculation method of stateful deep neural network
CN113723835B (en) Water consumption evaluation method and terminal equipment for thermal power plant
CN116432835A (en) Customer loss early warning and attributing method, device, computer equipment and storage medium
CN115964953A (en) Power grid digital resource modeling management method based on meta-learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230623