CN117893326A - Post-casting management informatization system and post-casting management method - Google Patents


Info

Publication number
CN117893326A
Authority
CN
China
Prior art keywords
data
model
enterprise
training
management
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410076977.5A
Other languages
Chinese (zh)
Inventor
郑兴林
赵捷毅
赵越杨
林振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Industrial Investment Development Co ltd
Original Assignee
Guizhou Industrial Investment Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Industrial Investment Development Co ltd
Priority to CN202410076977.5A
Publication of CN117893326A
Legal status: Pending


Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to artificial intelligence technology, and provides a post-casting (i.e., post-investment) management informatization system and a post-casting management method, comprising the following steps: constructing a post-casting management informatization system; collecting enterprise operation data with the post-casting management informatization system; analyzing the collected enterprise operation data by using a first deep learning model; mining potential operation problems of the enterprise according to the analysis result; and assessing the potential of the enterprise at different stages of operation.

Description

Post-casting management informatization system and post-casting management method
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a post-casting management informatization system and a post-casting management method.
Background
In the investment process, many enterprises face the problems that their operation data are not standardized, are numerous and complex, and lack unified management. Furthermore, due to the lack of long-term data accumulation and management capability, the data required for post-investment management are often fragmented; for example, different enterprises typically produce image-class, text-class and plain numerical data, which are difficult to standardize and easily confused, resulting in inefficient data management and inaccurate post-investment management. In addition, post-investment management lacks intelligent management methods, so potential operation risks of an enterprise cannot be perceived in advance through deep analysis of the enterprise operation data, and intelligent evaluation of subsequent investment potential cannot be performed.
Disclosure of Invention
The application provides a post-casting management informatization system and a post-casting management method, which aim to solve the prior-art problems of inefficient data management and the low degree of intelligence in post-casting management.
In view of the above, the present application provides a post-casting management informatization system and a post-casting management method.
The embodiment of the application provides a post-casting management method, which comprises the following steps:
Constructing a post-casting management informatization system;
The post-casting management informatization system collects enterprise operation data;
Analyzing the collected enterprise operation data by using a first deep learning model;
mining potential operation problems of the enterprise according to the analysis result;
The potential of the enterprise at different stages of operation is assessed.
Optionally, analyzing the collected enterprise business data using a first deep learning model, including:
Preprocessing the enterprise operation data;
Dividing the preprocessed enterprise business data into first image class data and first sequence class data;
Constructing a convolutional neural network CNN model and a cyclic neural network RNN model;
Training the CNN model by using historical image class data and training the RNN model by using historical sequence class data;
analyzing the first image class data by using the trained CNN model, and outputting a first enterprise operating condition and a first decision suggestion;
analyzing the first sequence type data by using the trained RNN model, and outputting a second enterprise operating condition and a second decision suggestion;
And fusing the first enterprise operating condition, the first decision proposal, the second enterprise operating condition and the second decision proposal to generate an enterprise operating analysis report.
Optionally, training the CNN model with historical image class data includes:
setting initial values for weights of the models;
setting a cross entropy loss function;
setting an Adam optimizer and setting a learning rate;
Setting the batch size;
dividing historical image data into set batch sizes for forward propagation to obtain prediction output;
comparing the predicted output of the CNN model with the real label by using a loss function, and calculating the loss;
Back propagation is performed, comprising: calculating the gradient of each layer by using a chain rule, and updating the weight and bias of the CNN model by using an optimizer according to the gradient;
Evaluating performance of the CNN model using a validation set at the end of each training period;
training the RNN model with historical sequence-like data, comprising:
initializing parameters of the RNN model;
inputting sequence data of a batch into the RNN model;
calculating each time step of the sequence data, wherein for each time step, the hidden state of the current input and the previous time step is used to generate the hidden state of the current time step;
Obtaining a predicted output through a full connection layer in the last time step;
Comparing the predicted output of the RNN model with the real label by using a loss function, and calculating loss;
performing back propagation;
The performance of the RNN model is evaluated using a validation set.
Optionally, mining potential business problems of the enterprise according to the analysis result, including:
Acquiring a historical enterprise operation report;
Preprocessing the historical enterprise operation report, and dividing the preprocessed historical enterprise operation report into a second training set and a second testing set;
Constructing a second deep learning model;
training the second deep learning model by using the second training set;
And excavating potential business problems of the enterprise by using the trained second deep learning model.
Optionally, mining potential business problems of the enterprise using the trained second deep learning model, including:
inputting the business data of the enterprise into the trained second deep learning model, and outputting a prediction result;
comparing the prediction result with data in a normal operation mode, and identifying abnormal data;
Based on the anomaly data, determining potential business problems of the enterprise, the potential business problems being a combination of one or more of inventory backlog, customer churn, financial risk, supply chain interruption, and market competition.
Optionally, the second deep learning model includes a teacher model and a student model; constructing the second deep learning model and training the second deep learning model by using the second training set then includes:
Constructing a teacher model and a student model;
Training the teacher model using the second training set;
training the student model using an output of the teacher model as a soft target;
In the process of training the student model, a temperature scaling technology is used for adjusting probability output of the teacher model, and the student model is trained iteratively after the probability output of the teacher model is adjusted each time, so that potential management problems of the enterprise are mined through the student model.
Optionally, the second deep learning model includes a parallel model of a CNN and a long short-term memory (LSTM) model, and mining the potential business problems of the enterprise by using the trained second deep learning model includes:
analyzing the image management data by using the CNN model, and identifying the data characteristics of the image management data;
analyzing the sequence management data by using the LSTM model, and identifying the data characteristics of the sequence management data;
and fusing the recognition results of the CNN model and the LSTM model, and analyzing the potential management problem of the enterprise.
Optionally, evaluating the potential of the enterprise at different business stages includes:
Collecting historical operating data;
preprocessing the historical operation data;
Constructing a machine learning model, and training the machine learning model by utilizing the preprocessed historical operation data;
And evaluating the potential of the enterprise in different operation stages by using the machine learning model.
Optionally, after the post-casting management informatization system collects the enterprise operation data, the method further includes:
Constructing a large-scale storage system, and establishing a data redundancy backup mechanism in the storage system;
Monitoring the health status of the storage node;
after any storage node is down, determining a data block to be migrated;
and selecting an optimal data migration path, and starting data migration operation so as to migrate the data blocks needing to be migrated.
The embodiment of the application also provides a post-casting management informatization system, wherein a computer program is stored in the system, and the computer program realizes the steps of the method when being executed by a processor.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
the embodiment of the application provides an informatization system for post-investment management and a management method thereof. By the system and the method, the problems of irregular data, lack of unified management and long-term data accumulation in the investment process of enterprises are solved. Meanwhile, a cloud-edge computing-terminal architecture, a multi-source heterogeneous data standardization and large-capacity data storage technology and an AI technology are introduced, so that intelligent management of a post-casting enterprise is realized.
Drawings
FIG. 1 is a schematic flow chart of a post-casting management method provided by the application;
FIG. 2 is a detailed flowchart of S103 provided by the present application;
FIG. 3 is a flow chart providing mining of potential business problems for the enterprise in accordance with the present application;
FIG. 4 is a flowchart of the method for mining potential business problems of an enterprise using a trained second deep learning model;
FIG. 5 is a detailed flowchart of S105 provided by the present application;
Fig. 6 is a schematic structural diagram of a post-casting management informatization system provided by the application.
Detailed Description
The application provides a post-casting management informatization system and a post-casting management method, which solve the problems of irregular data, lack of unified management and long-term data accumulation in the investment process of enterprises. Meanwhile, a cloud-edge computing-terminal architecture, a multi-source heterogeneous data standardization and large-capacity data storage technology and an AI technology are introduced, so that intelligent management of a post-casting enterprise is realized.
Example 1
As shown in fig. 1, the present application provides a post-casting management method, which includes:
s101, constructing a post-casting management informatization system;
The post-investment management informatization system is a system for managing and controlling an enterprise after investment. By time, an investment is divided into pre-investment, mid-investment and post-investment stages; after investment, the operation data of the enterprise needs to be acquired and effectively managed and monitored by utilizing informatization tools.
The construction of the post-casting management informatization system requires communication with people in different departments to define effective requirements, and adopts a cloud-edge computing-terminal architecture to ensure real-time and efficient processing of data.
The system selects an appropriate cloud service provider and sets a cloud computing policy. And designing edge computing nodes to ensure the rapid processing of data at edges. Finally, the type and configuration of the terminal equipment need to be determined, and stable collection of data is ensured.
In addition, the system can realize multi-source heterogeneous data standardization and mass data storage technology. For example, the system requires designing a data normalization process, ensuring data consistency, and selecting and implementing appropriate mass data storage techniques.
Data normalization is the conversion of data from different sources or collection modes into a unified format or structure for subsequent processing and analysis. In post-casting management, data standardization is particularly important because investment-related data comes from a number of different enterprises or departments, which differ in data format and structure.
The following is a simple data normalization scheme A1-A5:
step A1: data source identification and classification
All data sources are identified, such as financial systems, production systems, sales systems, etc., of the enterprise.
These data sources are classified, for example, by data type (numerical, text, etc.), data frequency (day, month, year, etc.).
Step A2: defining a standard data format
According to the requirements of post-casting management, a unified data format or structure is defined, which can be a structure of a database table, an Excel template and the like.
This format is ensured to cover all the data fields required for post-casting management and has a certain extensibility to accommodate future demand changes.
Step A3: data cleansing and conversion
The original data is cleaned to remove extraneous, duplicate, erroneous data.
The cleaned data is converted into the standard format defined in the previous step. This requires some data conversion tools or scripts.
Step A4: data verification
And verifying whether the converted data is consistent with the original data or not, and ensuring the accuracy of data conversion.
Simple statistical analysis is performed on the normalized data to check whether abnormal values or non-logical data exist, such as negative sales, etc.
Step A5: data storage and backup
And storing the standardized data into a database of the post-casting informatization system.
And the data are backed up regularly, so that the safety of the data is ensured.
Therefore, through the data standardization flow, consistency and accuracy of data used for post-casting management can be ensured, and a solid foundation is provided for subsequent data analysis and decision.
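As an illustrative, non-limiting example, the following Python sketch (using the pandas library) shows how steps A3 and A4 might be implemented; the standard column names, the column mapping and the sample records are hypothetical and chosen only for illustration.

# A minimal data-standardization sketch for steps A3-A4; field names are hypothetical.
import pandas as pd

STANDARD_COLUMNS = ["enterprise_id", "report_date", "metric_name", "metric_value"]

def standardize(raw: pd.DataFrame, column_map: dict) -> pd.DataFrame:
    """Clean a raw export and convert it to the unified standard format."""
    df = raw.rename(columns=column_map)                      # map source fields to standard names
    df = df.drop_duplicates()                                # remove duplicate rows (step A3)
    df["report_date"] = pd.to_datetime(df["report_date"], errors="coerce")
    df["metric_value"] = pd.to_numeric(df["metric_value"], errors="coerce")
    df = df.dropna(subset=["report_date", "metric_value"])   # drop rows that failed conversion
    # simple validation (step A4): flag non-logical values such as negative sales
    assert (df.loc[df["metric_name"] == "sales", "metric_value"] >= 0).all(), "negative sales detected"
    return df[STANDARD_COLUMNS]

# usage with a hypothetical financial-system export
raw = pd.DataFrame({"corp": ["A", "A"], "date": ["2023-04-01", "2023-04-02"],
                    "item": ["sales", "sales"], "val": ["120.5", "98.0"]})
clean = standardize(raw, {"corp": "enterprise_id", "date": "report_date",
                          "item": "metric_name", "val": "metric_value"})
print(clean)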
Further, with the rapid growth of enterprise data volumes, conventional relational databases have difficulty meeting big-data storage and processing requirements. Therefore, selecting an appropriate mass data storage technology is critical to the stable operation of the post-casting management informatization system.
The following are specific implementation steps B1-B5 for data storage and backup using mass storage technology:
Step B1: assessing data volume and growth rate
The demand for data storage is assessed based on the current system data volume and the predicted future growth rate.
The required storage capacity is determined by considering factors such as data backup, disaster tolerance, historical data storage and the like.
Step B2: selecting data storage technology
Depending on the data type and access frequency, a suitable data storage technique is selected. For example, for structured data, a relational database may be selected; for large amounts of log or text data, a NoSQL database or distributed file system may be selected.
In view of data security, high availability and expandability, the advantages and disadvantages of various data storage technologies are evaluated.
Step B3: deploying and optimizing a data storage system
And performing system deployment according to the selected data storage technology. For example, a distributed file system HDFS, a columnar storage database HBase, etc. are deployed.
And (3) performing performance optimization on the data storage system to ensure quick reading and writing and stable access of the data.
Step B4: data migration and synchronization
Data in the existing system is migrated to the new data storage system.
And setting a data synchronization mechanism to ensure the data consistency in the new and old systems.
Step B5: monitoring and maintaining data storage system
And the data storage system is monitored in real time, so that the stable operation of the system is ensured.
And data backup is carried out regularly, so that data loss is prevented.
And (3) periodically maintaining the data storage system, such as cleaning out expired data, expanding storage space and the like.
Through the steps, the data in the post-casting management informatization system can be ensured to be stored safely, efficiently and stably, and solid data support is provided for post-casting management of enterprises.
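For step B1, the required capacity can be projected with a simple calculation. The following Python sketch is illustrative only; the current data volume, growth rate, replication factor and headroom are assumed values.

# Illustrative capacity estimate for step B1; the figures are hypothetical placeholders.
def required_capacity_tb(current_tb, annual_growth, years, replication_factor=3, headroom=1.2):
    """Project raw capacity needed, accounting for HDFS-style replication and extra headroom."""
    projected = current_tb * ((1 + annual_growth) ** years)
    return projected * replication_factor * headroom

print(round(required_capacity_tb(current_tb=10, annual_growth=0.5, years=3), 1), "TB")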
S102, the post-casting management informatization system collects enterprise operation data;
The post-casting management informatization system deploys a data acquisition terminal to acquire enterprise operation data, monitors the acquisition state of the data in real time and ensures the data integrity.
In the embodiment of the application, the intelligent data acquisition technology can be adopted to acquire the enterprise internal data with high efficiency.
For an enterprise, there are a large number of data resources within the enterprise, such as ERP systems, CRM systems, production management systems, and the like. In order to ensure the effectiveness of post-casting management, efficient and accurate acquisition of these data is required.
Step C1: determining the range and frequency of data acquisition
And determining the type, the source and the frequency of the data to be acquired according to the requirements of post-casting management. For example, sales data needs to be collected daily, inventory data needs to be collected monthly, and so on.
The key data are collected preferentially in consideration of timeliness and importance of the data.
Step C2: selecting an appropriate data acquisition tool or platform
And selecting a proper data acquisition tool or platform according to the characteristics of the data source and the interface type. For example, for a relational database, an ETL tool may be selected; for Web pages, a Web crawler tool may be selected.
In view of data security and acquisition efficiency, the advantages and disadvantages of various tools or platforms are evaluated.
Step C3: design data acquisition process
The data acquisition flow and steps are designed according to the characteristics of the data source. For example, first log in to the system, then query the data, and finally save the data.
For data needing to be collected regularly, an automatic data collection task is designed, and timely updating of the data is ensured.
Step C4: implementing data acquisition
Data is collected using a selected tool or platform.
And monitoring the progress and result of data acquisition, and ensuring the integrity and accuracy of the data.
In addition, it should be noted that the data has different types, including picture, text, and sequence data, and different data storage formats.
Step C5: data validation and quality control
And verifying the acquired data, and ensuring the consistency and accuracy of the data.
Quality checks are performed on the data using data quality control tools, such as checking the integrity, uniqueness, consistency, etc. of the data.
Step C6: data integration and storage
And integrating the acquired data with the existing data to ensure the consistency of the data.
And storing the integrated data into a database of the post-casting informatization system, and providing data support for subsequent data analysis and application.
Through the steps, the data in the enterprise can be ensured to be efficiently and accurately collected, and powerful data support is provided for post-casting management.
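The following Python sketch illustrates one possible way to implement steps C2-C5 for a relational data source using SQLAlchemy and pandas; the connection string, table name and column names are hypothetical and would be replaced by the enterprise's actual systems.

# A minimal acquisition sketch for steps C2-C5; source details are hypothetical.
import sqlalchemy
import pandas as pd

def collect_sales(conn_str: str, since: str) -> pd.DataFrame:
    engine = sqlalchemy.create_engine(conn_str)
    query = sqlalchemy.text(
        "SELECT order_id, order_date, amount FROM sales_orders WHERE order_date >= :since")
    df = pd.read_sql(query, engine, params={"since": since})
    # basic quality control (step C5): completeness and uniqueness checks
    assert df["order_id"].is_unique, "duplicate order ids collected"
    assert df["amount"].notna().all(), "missing sales amounts"
    return df

# e.g. a daily scheduled task:
# collect_sales("mysql+pymysql://user:pwd@erp-host/erp", since="2023-04-01")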
Optionally, node downtime is a common problem in large-scale data storage. To ensure data durability and high availability, when a node goes down, a data migration mechanism needs to be started quickly to migrate the affected data to other healthy nodes. Therefore, after the data are collected, a data migration mechanism for node downtime in big-data storage is needed to keep the data stored reliably. This is a very important step, specifically as follows:
the method comprises the steps of M1, constructing a large-scale storage system, and establishing a data redundancy backup mechanism in the storage system;
The use of techniques such as the data replication mechanism in Hadoop's HDFS ensures that each piece of data is backed up on multiple nodes.
An appropriate replica factor, typically 3, is set to ensure high availability and fault tolerance of the data.
M2, monitoring the health state of the storage node;
The health status of each node is monitored in real time using a tool such as Hadoop's YARN or another cluster management tool.
M3, after any storage node is down, determining a data block to be migrated;
And when the downtime of the node is detected, immediately triggering a data migration mechanism.
And analyzing the data stored on the downtime node, and determining the data blocks needing to be migrated.
Using the metadata of the HDFS, the affected data blocks and their duplicate locations on other nodes are quickly determined.
When a node is down, the primary task is to quickly determine which data is affected for subsequent data recovery and migration operations. The following are specific steps of how to determine the affected data:
Step M31: querying metadata
Large data storage systems, such as HDFS, maintain file-to-data block mappings and data block-to-node mappings in a NameNode or other metadata service.
And inquiring the metadata, and listing all data blocks on the downtime node.
Step M32: checking copy condition of data block
For each affected data block, the number of its copies on other nodes is queried.
And marking data blocks with copies only on the downtime node or with the number of the copies being lower than a preset threshold value.
Step M33: evaluating data recovery priority
A recovery priority is set for each affected data block based on the importance of the data, the access frequency and the traffic requirements.
For example, frequently accessed critical traffic data should have a higher restoration priority.
Step M34: considering data relevance
Consider the association of affected data with other data. For example, if certain partitions of a database table are affected, the restoration of the entire table may need to be considered.
Ensure the integrity and consistency of the data during the recovery process.
Step M35: generating data recovery and migration plans
In combination with the above information, a recovery and migration plan is generated for each affected data block.
The plan should include the target storage locations of the data, migration paths, and expected recovery times.
Through the steps, the embodiment of the invention can rapidly and accurately determine the affected data on the down node, and a specific plan is made for the subsequent data recovery and migration operation.
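As a rough illustration of steps M31 and M32, the following Python sketch parses the output of the standard 'hdfs fsck' command to list blocks whose replica count has fallen below the target; the path, the parsing pattern and the target replica count are simplifying assumptions about a typical fsck text report.

# A hedged sketch of steps M31/M32: find under-replicated blocks from 'hdfs fsck' output.
import subprocess, re

def under_replicated_blocks(path="/", target_replicas=3):
    out = subprocess.run(["hdfs", "fsck", path, "-files", "-blocks", "-locations"],
                         capture_output=True, text=True, check=True).stdout
    affected = []
    for line in out.splitlines():
        m = re.search(r"(blk_[-\d]+).*repl=(\d+)", line)   # simplified pattern, format may vary
        if m and int(m.group(2)) < target_replicas:
            affected.append((m.group(1), int(m.group(2))))  # (block id, current replica count)
    return affected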
M4. selecting an optimal data migration path, and starting a data migration operation so as to migrate the data block to be migrated.
And selecting an optimal data migration path according to the size of the data block, the network bandwidth and the load condition of the target node.
The data migration process is initiated to ensure that the data has sufficient copies on other healthy nodes.
Data migration is a critical step in ensuring data availability and integrity. When one node is down, the affected data needs to be quickly migrated to other healthy nodes. The following is a specific step of data migration:
step M41: selecting a target node
And selecting the optimal target node according to the load balance of the cluster and the storage capacity of the node.
It is ensured that the target node has sufficient resources to accept the migrated data.
Step M42: initializing data transmission
Data transfer is initiated using a built-in tool or other data migration tool of a large data storage system (e.g., HDFS).
The optimal data transmission strategy is selected according to the size of the data and the network bandwidth, such as parallel transmission or compression of the data.
Step M42: data transfer using built-in tools or other data migration tools of a large data storage system (e.g., HDFS)
In big data ecosystems, and in particular HDFS, a series of built-in tools are provided to aid in data migration. The following are the detailed steps for data transmission using these tools:
step M42.1: determining data sources and targets
And listing the data files or the catalogs which need to be migrated on the downtime node.
One or more healthy nodes are selected as targets for data migration.
Step M42.2: using the command distcp
"Distcp" is a tool provided by Hadoop, specifically for large-scale data replication.
A 'distcp' command is constructed, specifying the source path and the target path.
For example: hadoop distcp hdfs://source-cluster/data hdfs://target-cluster/data
Step M42.3: consider data compression
Compression of data prior to transmission may be considered if the amount of data is large or the network bandwidth is limited.
The size of the data transmission is reduced using a compression format such as 'gzip' or 'snappy'.
Step M42.4: parallel transmission
"Distcp" supports parallel transmission, i.e., copying multiple files or data blocks simultaneously.
And setting a proper parallel level according to the network bandwidth and cluster resource conditions, for example, specifying the number of parallel map tasks with the '-m' option.
Step M42.5: handling failed transmissions
'distcp' records all failed data transfers.
If a failure occurs, the 'distcp' command can be re-run, and failures of individual files can be ignored using the '-i' option.
Step M42.6: other data migration tools
In addition to 'distcp', there are other third party tools such as 'rsync' or migration tools provided by a particular cloud service.
And selecting a tool which is most suitable for the current environment and the data volume to perform data migration.
Through the steps, the embodiment of the invention can utilize the HDFS or other built-in tools of the big data storage system to carry out data migration, and ensure that the data is quickly and accurately recovered after the node is down.
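The following Python sketch shows one way to invoke 'distcp' programmatically with the parallelism ('-m') and ignore-failures ('-i') options mentioned above; the cluster URIs and the number of map tasks are placeholders.

# A sketch of steps M42.2/M42.4/M42.5: run a DistCp copy with parallel maps and '-i'.
import subprocess

def run_distcp(src, dst, parallel_maps=8, ignore_failures=True):
    cmd = ["hadoop", "distcp", "-m", str(parallel_maps)]   # '-m': number of parallel map tasks
    if ignore_failures:
        cmd.append("-i")                                   # '-i': ignore single-file failures
    cmd += [src, dst]
    return subprocess.run(cmd, check=True)

# run_distcp("hdfs://source-cluster/data", "hdfs://target-cluster/data")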
Step M43: monitoring data migration progress
And the progress of data migration is monitored in real time, and the integrity and the speed of data transmission are ensured.
Countermeasures are rapidly taken if network failures or other problems are encountered.
Step M44: verifying integrity of migrated data
And after the data migration is completed, carrying out integrity check on the data on the target node.
Methods such as checksum are used to ensure that the data is not corrupted during migration.
Step M45: updating data copy policies
And according to the new data layout, the copy strategy of the data is adjusted, and the durability and high availability of the data are ensured.
The number of copies of the data may be increased or the distribution of the copies may be changed, if necessary.
Through the steps, the embodiment of the invention can ensure that the affected data is quickly and accurately migrated to other healthy nodes under the condition of downtime of the nodes, thereby ensuring the availability and the integrity of the data.
S103, analyzing the collected enterprise operation data by using a first deep learning model;
As shown in fig. 2, in S103, the collected enterprise business data is analyzed by using a first deep learning model, which includes the following steps:
D1. preprocessing the enterprise operation data;
and cleaning, converting and standardizing the data according to the requirements of the deep learning model.
Unstructured data such as images, sounds, text, etc. are converted into a format acceptable to the model using appropriate methods.
D2. Dividing the preprocessed enterprise business data into first image class data and first sequence class data;
During preprocessing, structured data is defined as the first sequence class data, and unstructured data such as images is defined as the first image class data. Text data can be converted into either sequence data or image data by a data format conversion method; that is, according to business requirements, text data may be converted into structured data (the first sequence class data) or defined as first image class data.
D3. Constructing a convolutional neural network CNN model and a cyclic neural network RNN model;
deep learning is a sub-area of machine learning that automatically learns features from data and performs efficient analysis. In post-casting management, deep learning can help the embodiment of the invention to deeply analyze business data of enterprises and mine the value behind the data.
And selecting a proper deep learning model according to the characteristics and analysis tasks of the data. The deep learning model may include CNN and RNN models, for example, for image data, a convolutional neural network may be selected; for sequence data, a recurrent neural network may be selected.
For CNN models, a CNN architecture needs to be designed, including convolutional layers, pooling layers, fully-connected layers, and the like. Appropriate activation functions, loss functions and optimizers are selected.
For RNN models, the RNN architecture needs to be designed, and conventional RNN, LSTM, GRU or other variants may be selected. An appropriate number of hidden layers, neurons, and output layers are added. Appropriate activation functions, loss functions and optimizers are selected.
D4. Training the CNN model by using historical image class data and training the RNN model by using historical sequence class data;
wherein training the CNN model using historical image class data comprises the following steps E1-E8:
E1. Setting initial values for weights of the models;
Initial values are set for the weights of the model using, for example, xavier or He initialization methods. This helps to ensure that the weights are within the proper range, thereby speeding training and improving model convergence.
E2. Setting a cross entropy loss function;
for multi-classification tasks, a cross entropy loss function is typically selected.
For a bi-classification task, a binary cross entropy loss function may be selected.
Other loss functions, such as mean square error, hinge loss, etc., may also be selected depending on the task requirements.
In embodiments of the present invention, a cross entropy loss function is preferably chosen as the loss function of the model.
E3. Setting an Adam optimizer and setting a learning rate;
Common optimizers include SGD, Adam, RMSprop, etc. Each optimizer has its own advantages and characteristics, and the choice depends on the specific needs of the task. The embodiment of the application preferentially selects the most commonly used Adam optimizer.
A learning rate is set. Too large a learning rate causes the model not to converge, while too small a learning rate makes training slow.
E4. Setting the batch size;
The training data is divided into batches for forward and backward propagation. The batch size affects the training speed and memory usage of the model.
E5. dividing historical image data into set batch sizes for forward propagation to obtain prediction output;
The embodiment of the application can also process the image data by using an image enhancement technology, such as rotation, translation, scaling, overturning and the like, so that the generalization capability of the model can be improved, and the risk of overfitting is reduced.
In addition, embodiments of the present application divide data into training, validation and test sets. A common segmentation ratio is 80% for training, 10% for verification, 10% for testing.
In the forward propagation process, inputting data of a batch into a model, and obtaining prediction output through calculation of each layer.
E6. comparing the predicted output of the CNN model with the real label by using a loss function, and calculating the loss;
And comparing the predicted output of the model with the real label by using the cross entropy loss function, and calculating the loss.
E7. back propagation is performed, comprising: calculating the gradient of each layer by using a chain rule, and updating the weight and bias of the CNN model by using an optimizer according to the gradient;
E8. At the end of each training period, the performance of the CNN model is evaluated using a validation set.
At the end of each training period, the performance of the model is evaluated using the validation set. This can help monitor the overfit and decide whether to stop training or adjust the learning rate in advance.
If the validation loss does not improve significantly over consecutive periods, it may be considered to stop training in advance to prevent overfitting.
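The following minimal PyTorch sketch illustrates steps E1-E8 end to end; the network architecture, image size, number of classes and the randomly generated tensors are illustrative placeholders rather than the actual enterprise image data.

# A minimal PyTorch sketch of steps E1-E8; data and architecture are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SmallCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()                                         # E1: weights initialized by PyTorch defaults
criterion = nn.CrossEntropyLoss()                          # E2: cross entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # E3: Adam optimizer with learning rate
images = torch.randn(256, 3, 64, 64)                       # placeholder historical image-class data
labels = torch.randint(0, 5, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)  # E4: batch size

for epoch in range(5):                                     # E5-E8: training loop
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)                    # E6: compare prediction with real labels
        loss.backward()                                    # E7: back propagation via the chain rule
        optimizer.step()                                   # E7: update weights and biases
    # E8: evaluate on a validation set here to monitor overfitting and support early stopping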
In addition, in D4, the RNN model is trained using historical sequence-class data. For an RNN (recurrent neural network) model, training with historical sequence-class data is similar to training other types of neural networks, but there are some considerations specific to sequence data and RNNs. The process specifically comprises steps F1-F7:
F1. initializing parameters of the RNN model;
similar to CNN, parameters such as a loss function, an optimizer, a learning rate, etc. need to be set. It is also necessary to provide an appropriate number of hidden layers, neurons and output layers.
F2. inputting sequence data of a batch into the RNN model;
In the embodiment of the present invention, the input sequence data may be time-series data, text data, audio data, etc.; these data need to be converted into numerical form, for example by text encoding or feature extraction.
In addition, the data needs to be normalized or standardized to be within a proper range. If desired, the long sequences may be truncated or padded to meet the input requirements of the model.
Finally, the sequence data is converted into training samples using a sliding window or other strategy. Data is divided into training, validation and test sets.
After the partitioning, one batch of sequence data (one batch of sequence data in the training set) is input into the RNN model, and the forward propagation process is started.
F3. Calculating each time step of the sequence data, wherein for each time step, the hidden state of the current input and the previous time step is used to generate the hidden state of the current time step;
During forward propagation of the RNN unit, each time step of the sequence is computed. For each time step, the hidden state of the current input and the previous time step are used to generate the hidden state of the current time step.
F4. obtaining a predicted output through a full connection layer in the last time step;
after the last time step, if it is a classification task, the prediction output can be obtained through a fully connected layer.
F5. comparing the predicted output of the RNN model with the real label by using a loss function, and calculating loss;
The loss is calculated by comparing the model's predicted output with the real labels using the loss function. For sequence labeling tasks, the loss is calculated at each time step and then averaged.
F6. Performing back propagation;
Back propagation, in contrast to forward propagation, begins at the last time step of the sequence and works backwards to the first time step. It calculates the gradient for each time step and each layer using the chain rule. Based on these gradients and the selected optimizer, such as SGD or Adam, the weights and biases of the model are updated.
After back propagation, a learning rate adjustment is required, which is a critical hyper-parameter. A larger learning rate may be initially selected and may be gradually decreased as training progresses.
A learning rate decay strategy may be used or an adaptive learning rate optimizer such as Adam may be used.
F7. the performance of the RNN model is evaluated using a validation set.
At the end of each training period, the performance of the model is evaluated using the validation set. This can not only monitor the training progress of the model, but can also help detect overfitting.
According to the verification result, strategies such as learning rate, dropout addition or early stop can be adjusted.
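The following minimal PyTorch sketch illustrates steps F2-F6 for sequence-class data using an LSTM; for brevity it uses a sliding-window regression objective on a synthetic series, and the window length, hidden size and learning rate are assumed values.

# A minimal PyTorch sketch of steps F2-F6; the series and hyperparameters are placeholders.
import torch
import torch.nn as nn

class SeqRegressor(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)  # F3: hidden state per time step
        self.head = nn.Linear(hidden, 1)                           # F4: output from last time step
    def forward(self, x):
        out, _ = self.rnn(x)                 # out shape: (batch, time, hidden)
        return self.head(out[:, -1, :])      # prediction from the final time step

def sliding_windows(series, window=12):
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)

series = torch.sin(torch.linspace(0, 20, 300))   # placeholder normalized sequence data
x, y = sliding_windows(series)                   # F2: convert the series into training samples
model, criterion = SeqRegressor(), nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):                          # F5-F6: compute loss, back-propagate, update
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
# F7: evaluate on a held-out validation split to monitor overfitting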
D5. analyzing the first image class data by using the trained CNN model, and outputting a first enterprise operating condition and a first decision suggestion;
and applying the trained deep learning model to actual business data for deep analysis.
Decision suggestions, such as market trend predictions, customer behavior analysis, etc., are provided to the enterprise based on the output of the model.
For example, a first business operation condition:
enterprise a is a home electronics product manufacturer.
Recent sales have declined somewhat, mainly because its latest products have not been well received by consumers.
Cost management is good, but research and development investment is low.
First decision advice:
increasing research and development investment, improving existing products, and developing new products to meet market demands.
-Enhancing marketing strategies, increasing brand awareness.
Consider co-developing new products in collaboration with other enterprises.
D6. analyzing the first sequence type data by using the trained RNN model, and outputting a second enterprise operating condition and a second decision suggestion;
similarly, RNNs also output business conditions and decision advice, such as:
second business operation status:
Enterprise B is a company that provides online educational services.
The number of users steadily increases, but the user retention is lower.
Although the lesson content is rich, the user interface is not friendly enough, resulting in a reduced user experience.
Second decision advice:
optimizing the user interface and experience of the online platform.
Enhancing interactions with users, for example through community forums or online questioning and answering.
-Providing more customized learning paths and recommendations to meet the needs of different users.
D7. And fusing the first enterprise operating condition, the first decision proposal, the second enterprise operating condition and the second decision proposal to generate an enterprise operating analysis report.
The CNN model outputs a first enterprise operating condition and a first decision proposal, and the RNN outputs a second enterprise operating condition and a second decision proposal;
and fusing the first enterprise operating condition, the first decision suggestion, the second enterprise operating condition and the second decision suggestion to generate an enterprise operating analysis report.
If the CNN model and the RNN model output results for different enterprises, the two outputs are directly concatenated to obtain the fused enterprise operation analysis report. If the CNN model and the RNN model output the operating condition and decision advice of the same enterprise, the overlapping parts are deduplicated and the differing parts are merged and concatenated in the report to form an overall enterprise operation analysis report.
For example:
enterprise business analysis report 1:
enterprise a business analysis:
Enterprise A, a manufacturer of household appliances, has recently faced the challenge of a market downturn. Although its cost management is sound, product innovation and market feedback show that further improvement is needed. To reverse this trend, it is suggested that Enterprise A increase research and development investment, not only improving existing products but also developing new products to meet changing market demands. Moreover, by enhancing marketing and cooperating with other enterprises, Enterprise A is expected to increase its market share and competitiveness.
Enterprise business analysis report 2:
Enterprise B business analysis:
the online education service provider, enterprise B, has succeeded in attracting new users, but faces the problem of low user retention. After extensive analysis, it was found that while it provided rich curriculum content, the user interface of the online platform was not fully appreciated. In order to improve user retention and satisfaction, enterprise B needs to fully optimize its online platform and enhance interaction with users. By providing a more personalized learning experience, enterprise B is expected to further consolidate its status in the online educational market.
Enterprise business analysis report 3:
Reporting time: second quarter of 2023
1. Sales conditions
Total sales: 50 million RMB
Year-on-year growth: 10%
Quarter-on-quarter growth: 5%
2. Customer analysis
New customers: 1,000
Lost customers: 200
Major customer industries: manufacturing, services, and retail
3. Product case
Best-selling products: Product A, Product B, Product C
Overstocked products: Product X, Product Y
4. Financial condition
Total assets: 100 million RMB
Debt-to-asset ratio: 50%
Net profit: 5 million RMB
5. Market trend
Market share: 15%
Competitor case: competitor 1 increased by 5% and competitor 2 decreased by 2%
Market prediction: market demand is expected to increase by 10% for the next quarter
6. Business advice
Strengthen the production and sales of products A, B and C to increase market share.
Run sales promotions or adjust production strategies for the overstocked products X and Y.
Strengthen cooperation with the manufacturing, service and retail industries to expand the customer base.
The above report is generated by analyzing the operation data of an industrial manufacturing company with deep learning methods and provides decision support for the enterprise. The embodiment of the invention can provide the report to the relevant departments for decision support. In addition, the accuracy of the reported information can be verified by comparison with field data, and the system can be adjusted according to the field verification results.
S104, mining potential operation problems of the enterprise according to the analysis result;
There are many potential problems in the business process, such as low production efficiency, inventory backlog, and poor cash flow. By utilizing AI technology, these potential problems can be mined from large amounts of operation data, providing decision support for the enterprise.
As shown in fig. 3, according to the analysis result, the potential business problems of the enterprise are mined, including steps G1-G5:
G1. acquiring a historical enterprise operation report;
For an enterprise, the operation reports of different historical periods differ, so operation reports need to be acquired from different historical periods. These historical operation reports from different times help the AI model better learn and mine potential business problems.
G2. preprocessing the historical enterprise operation report, and dividing the preprocessed historical enterprise operation report into a second training set and a second testing set;
The data were cleaned, converted and normalized according to AI model requirements.
The data set is partitioned into a second training set and a second test set to facilitate training and validation of the model.
G3. Constructing a second deep learning model;
according to the characteristics of the management problem, a proper AI model is selected. For example, for inventory forecasting problems, a time series forecasting model may be selected; for customer churn prediction, a classification model may be selected.
G4. Training the second deep learning model by using the second training set;
The AI model is trained using the second training set data.
Parameters of the model, such as learning rate, regularization parameters and the like, are adjusted to optimize the performance of the model.
The following takes neural networks as examples:
Neural networks are a core model in the AI field, and are particularly excellent in complex data analysis and prediction tasks. The following is a training procedure using a neural network as an example:
step G41: neural network structural design
And determining the structure of the neural network, such as the number of layers, the number of neurons of each layer and the like according to the characteristics of the data and the task requirements.
An appropriate activation function is selected, such as ReLU, sigmoid, tanh, etc.
If the task is classification, the number of neurons of the output layer is determined and softmax is used as an activation function.
Step G42: initializing neural network parameters
The weights of the network are initialized using small random numbers or specific initialization techniques (e.g., xavier initialization, he initialization).
The initialization bias term is 0 or a small positive value.
Step G43: selecting a loss function and an optimizer
A loss function is selected according to the task type, such as mean square error (regression task) or cross entropy loss (classification task).
Optimizers such as gradient descent, adam, RMSprop, etc. are selected and set with learning rates and other parameters.
Step G44: neural network training
Training data is used to feed the neural network.
In each epoch, a forward propagation calculation is performed to calculate a predicted value, and then an error is calculated by a loss function.
The network weights are updated using a back propagation algorithm.
Repeating the steps until the preset epoch number or the loss function value is lower than a certain threshold value.
Step G45: verification and adjustment
The performance of the neural network is evaluated using the validation data set.
And according to the verification result, adjusting the network structure, parameters or optimizer settings, and then retraining.
To prevent overfitting, techniques such as regularization, dropout, etc. can be used.
Step G46: model preservation and deployment
After training, the structure and weight parameters of the neural network are saved.
According to actual requirements, the model is deployed into a corresponding application environment, and support is provided for subsequent data analysis and decision making.
By adopting the neural network to carry out model training, nonlinear relations and complex modes in the data can be effectively captured, and a powerful tool is provided for enterprise management data analysis.
In addition, the AI model needs to be validated using the test set data.
And evaluating the accuracy, the robustness and the generalization capability of the model according to the verification result.
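The following Python (PyTorch) sketch illustrates the validation described in G45/G46 and the test-set evaluation above: a simple accuracy computation on held-out data plus an early-stopping check against the validation loss; the model is assumed to be any trained PyTorch classifier and the patience value is an assumption.

# Illustrative evaluation helpers; the model and tensors are assumed placeholders.
import torch

def evaluate(model, x_test, y_test):
    """Accuracy of a trained classifier on a held-out test set."""
    model.eval()
    with torch.no_grad():
        preds = model(x_test).argmax(dim=1)
    return (preds == y_test).float().mean().item()

class EarlyStopper:
    """Stop training when the validation loss has not improved for 'patience' epochs."""
    def __init__(self, patience=3):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0
    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience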
G5. And excavating potential business problems of the enterprise by using the trained second deep learning model.
As shown in fig. 4, in G4, mining the potential business problem of the enterprise by using the trained second deep learning model includes:
G41. Inputting the business data of the enterprise into the trained second deep learning model, and outputting a prediction result;
the predicted outcome depends on the goals of the model, such as the specific goals of "sales," "number of orders," "liability," etc.
G42. Comparing the prediction result with data in a normal operation mode, and identifying abnormal data;
The output results of the model are analyzed to find data points that differ significantly from the normal operation mode. A difference comparison can be adopted: if the difference between the predicted value and the normal-mode value exceeds a preset threshold, the data point is defined as an abnormal value.
G43. Based on the anomaly data, determining potential business problems of the enterprise, the potential business problems being a combination of one or more of inventory backlog, customer churn, financial risk, supply chain interruption, and market competition.
According to the prediction result and data analysis of the model, the following potential management problems are mined:
1. inventory backlog problem: the inventory of certain products is far above normal due to over-production or reduced market demand.
2. Customer churn problem: customer loss accelerates over time due to product quality issues or competitors' policy changes.
3. Financial risk: the liability rate of the enterprise suddenly rises, facing financial risks.
4. Supply chain interruption: the inventory of certain critical raw materials keeps falling, indicating a problem with the supplier or a disruption in the supply chain.
5. Market competition is exacerbated: the market share of enterprises suddenly drops, and competitors push out more competitive products.
According to the mined management problems, a detailed report is written, and clear and accurate management information is provided for a decision maker.
Suggestions and improvements are made in the report to help the enterprise solve the problem.
The AI model is utilized to analyze the actual operation data, so that potential operation problems can be mined, and powerful decision support can be provided for enterprises.
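As a simple illustration of step G42, the following Python sketch flags business indicators whose deviation from the model's prediction exceeds a preset threshold; the indicator names, values and thresholds are hypothetical.

# A minimal sketch of G42: flag indicators whose prediction-vs-actual gap exceeds a threshold.
def find_anomalies(predicted: dict, actual: dict, thresholds: dict) -> dict:
    anomalies = {}
    for name, pred in predicted.items():
        diff = abs(actual[name] - pred)
        if diff > thresholds[name]:
            anomalies[name] = {"predicted": pred, "actual": actual[name], "diff": diff}
    return anomalies

flags = find_anomalies(
    predicted={"sales": 52.0, "inventory": 10.0, "debt_ratio": 0.45},
    actual={"sales": 38.0, "inventory": 19.5, "debt_ratio": 0.58},
    thresholds={"sales": 5.0, "inventory": 3.0, "debt_ratio": 0.05})
# e.g. an inventory value far above the prediction would suggest an inventory-backlog problem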
In another embodiment, the second deep learning model includes a teacher model and a student model; constructing the second deep learning model and training it by using the second training set then includes:
H1. Constructing a teacher model and a student model;
In many applications, large models (teacher models) are too bulky and computationally intensive to deploy in resource-constrained environments. By using a teacher-student architecture, embodiments of the present invention can transfer knowledge of a large model to a smaller, lighter weight model (student model) without sacrificing too much accuracy.
H2. Training the teacher model using the second training set;
A deep and complex model is trained on a large amount of data to achieve high accuracy.
This model will serve as a source of knowledge, providing guidance to the student model.
Ensuring that the training data set is representative can cover various scenarios of mining potential business problems of the enterprise.
The teacher model may be a deep convolutional neural network or a deep recurrent neural network.
The network is trained until satisfactory accuracy is achieved. Specific training procedures are described in the above examples.
H3. training the student model using an output of the teacher model as a soft target;
a lightweight network architecture is selected as the student model, and the output of the teacher model is used as the "soft target" to train the student model. For each student model, not only the original label, but also the teacher model's predictions are used as additional soft targets in the training process.
Lightweight network architectures typically have fewer parameters and simpler architecture, which makes them more computationally and storage efficient, suitable for deployment in resource-constrained environments. The following are examples of several common lightweight network architectures:
1.MobileNet
MobileNet uses depthwise separable convolutions to reduce computation and model size.
It is particularly suitable for mobile and embedded vision applications.
2.SqueezeNet
SqueezeNet uses 1x1 convolutions (also called pointwise convolutions) to reduce the number of parameters.
By using smaller convolution kernels and deeper network structures, it greatly reduces model size while maintaining the same accuracy.
3.ShuffleNet
ShuffleNet introduces pointwise group convolution and channel shuffle operations, optimizing the computational efficiency of the model.
Channel shuffling can increase the diversity of features, thereby increasing the representational capacity of the model.
4.EfficientNet
EfficientNet achieves better performance by uniformly scaling the depth, width and resolution of the network.
The method uses a compound scaling method, and can obtain efficient models under different resource constraints.
5.TinyML
TinyML refers to lightweight deep learning frameworks and tools designed specifically for microcontrollers and edge devices.
It provides a series of optimization tools that can further reduce the size and computational effort of the model.
When selecting a suitable lightweight network architecture as the student model, the complexity of the model, its computation and storage requirements, and the characteristics of the target task need to be considered. For example, MobileNet or SqueezeNet are good choices if the goal is image classification on an embedded device.
H4. in the process of training the student model, a temperature scaling technology is used for adjusting probability output of the teacher model, and the student model is trained iteratively after the probability output of the teacher model is adjusted each time, so that potential management problems of the enterprise are mined through the student model.
The probability distribution output by the teacher model is adjusted to be softer, so that training of the student model is facilitated.
In the course of teacher-student knowledge distillation, temperature scaling techniques are a common method used to make the output probability distribution of the teacher model more "soft". Such a "soft" probability distribution contains more information, which aids in the training of the student model.
The following are specific implementation steps of the temperature scaling technique:
step H41: computing raw probability distribution
The training data is propagated forward using the teacher model to yield the original output of the model, typically un-normalized logits.
Step H42: using temperature values
The original logits are divided by a temperature value T. The temperature value T is typically greater than 1, which makes the probability distribution "softer".
Step H43: Calculating a "soft" probability distribution
The adjusted logits are converted into a probability distribution using a softmax function.
Step H44: training with "soft" probabilities as targets
The "soft" probability distribution described above is used as a target in training the student model, rather than the original hard tag.
Typically, the loss function is the cross entropy between the probability distribution of the student model and the "soft" probability distribution of the teacher model.
By using the temperature scaling technology, the student model not only learns the original tag information of the data, but also learns the knowledge of the teacher model, which is helpful for improving the generalization capability and accuracy of the student model.
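The following PyTorch sketch illustrates steps H41-H44: the teacher's logits are softened with a temperature T, and the student is trained against the resulting "soft" distribution blended with the hard labels; the temperature, loss weighting and the model objects themselves are illustrative assumptions rather than fixed choices of the system.

# A minimal PyTorch sketch of temperature-scaled knowledge distillation (steps H41-H44).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # H42/H43: soften both distributions with the temperature T
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # H44: KL divergence against the soft targets, scaled by T^2,
    # blended with the ordinary hard-label cross entropy
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

def train_student(student, teacher, loader, epochs=5, lr=1e-3):
    teacher.eval()
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in loader:
            with torch.no_grad():
                t_logits = teacher(xb)          # H41: teacher forward pass (raw logits)
            loss = distillation_loss(student(xb), t_logits, yb)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student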
The lightweight student model is deployed in the target environment, such as a mobile device or embedded system, and used to mine potential business problems of the enterprise.
Because of its lightweight characteristics, the student model is particularly suitable for real-time analysis and prediction in resource-limited environments. The following are specific steps for using the student model to mine potential business problems of an enterprise:
step J1: data preprocessing
The business data to be analyzed, such as historical business analysis reports, is exported from the enterprise management system.
The data is cleaned and preprocessed, for example by handling missing values and outliers and normalizing the data, so that it meets the input requirements of the student model (a minimal sketch follows).
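A minimal sketch of this preprocessing step, assuming the exported business data is a pandas DataFrame; the column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess_business_data(df: pd.DataFrame, feature_cols: list) -> pd.DataFrame:
    """Handle missing values, clip outliers and standardize features (step J1)."""
    data = df[feature_cols].copy()
    # Fill missing values with the column median
    data = data.fillna(data.median(numeric_only=True))
    # Clip extreme outliers to the 1st/99th percentiles of each column
    low, high = data.quantile(0.01), data.quantile(0.99)
    data = data.clip(lower=low, upper=high, axis=1)
    # Standardize so each feature has zero mean and unit variance
    scaled = StandardScaler().fit_transform(data)
    return pd.DataFrame(scaled, columns=feature_cols, index=df.index)

# Hypothetical usage with exported ERP data:
# df = pd.read_csv("business_report.csv")
# features = preprocess_business_data(df, ["sales", "inventory", "customer_count"])
```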
Step J2: model prediction
And inputting the preprocessed business data into the student model.
The output results of the model are recorded, such as predicted values of business indicators or anomaly detection scores.
Step J3: analyzing the predicted results
And analyzing the prediction result of the student model to find out data points or areas which are significantly different from the normal operation mode.
Based on these differences, a business problem is determined.
Step J3: analyzing the predicted results
Analyzing the prediction results of the student model is a key step in mining potential business problems of enterprises. In this step, embodiments of the present invention may parse the output of the model in detail to determine the business problem.
Step J3.1: collecting predictive data
And comparing the prediction result of the student model with actual business data.
A difference value is generated for each predicted result, such as a difference between the predicted value and the actual value, a degree of deviation of the predicted trend from the actual trend, and the like.
Step J3.2: setting a threshold value
A threshold value is set for each business indicator according to historical data and business experience. When the difference value exceeds these thresholds, it indicates that there may be a business problem.
Step J3.3: identifying outliers
Statistical methods or machine learning methods are used to identify outliers in the predicted data.
Outliers may be due to inaccurate model predictions, or to problems in the actual operations.
Step J3.4: analysis of cause of abnormality
For each data point identified as anomalous, the cause behind it is analyzed in depth.
For example, a sudden drop in sales may be caused by an unsuccessful new product launch, intensified market competition, a flawed marketing strategy, and so on.
Step J3.5: Summarizing business problems
From the above analysis, business problems faced by the enterprise are summarized.
Such as improper inventory management, product quality issues, poor marketing strategies, etc.
Through the above steps, the embodiment of the invention can analyze the prediction results of the student model in detail, uncover business problems, and provide a valuable reference for subsequent decision-making.
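A minimal sketch of steps J3.1-J3.3, assuming the predictions and actual values are aligned pandas Series; the threshold rule (a multiple of the historical standard deviation) is an illustrative assumption rather than a prescribed setting.

```python
import pandas as pd

def flag_business_anomalies(actual: pd.Series, predicted: pd.Series,
                            history_std: float, k: float = 2.0) -> pd.DataFrame:
    """Steps J3.1-J3.3: compute differences, apply a threshold, flag anomalous points."""
    diff = actual - predicted                      # J3.1: difference values
    threshold = k * history_std                    # J3.2: threshold from historical data
    return pd.DataFrame({
        "actual": actual,
        "predicted": predicted,
        "difference": diff,
        "is_anomaly": diff.abs() > threshold,      # J3.3: identify outliers
    })

# Hypothetical usage with monthly sales figures:
# report = flag_business_anomalies(actual_sales, model_predictions, history_std=1500.0)
# print(report[report["is_anomaly"]])   # points to examine further in step J3.4
```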
In another embodiment, where the second deep learning model includes a parallel model combining a CNN and a long short-term memory (LSTM) model, mining the potential business problems of the enterprise with the trained second deep learning model includes steps K1-K3:
K1. analyzing the image management data by using the CNN model, and identifying the data characteristics of the image management data;
The specific process is described with reference to steps E1-E8: the trained CNN model performs forward propagation on the business data, and the output of an intermediate layer is extracted as the data features of the image-type business data, such as financial data, human resource data and the like.
K2. analyzing the sequence management data by using the LSTM model, and identifying the data characteristics of the sequence management data;
LSTM (Long Short-Term Memory) is a variant of the RNN specifically designed to address long-term dependency problems. Through its unique gating mechanism, LSTM can better capture long-term patterns in time series data.
The following are specific steps for processing business data using the LSTM model:
step K21: data integration and preprocessing
The sequence business data is acquired, and all the collected data is integrated.
Normalization and standardization are performed so that the inputs to the LSTM model fall within a suitable range of values.
Step K22: construction of LSTM model
A basic LSTM architecture is designed, comprising an input layer, a plurality of LSTM layers, and an output layer.
The appropriate number of LSTM cells, activation functions, and other parameters are selected.
Step K23: training LSTM model
The LSTM model is trained using the integrated business data.
Model parameters are optimized using time-series cross-validation or other methods to ensure that the model does not overfit.
Step K24: model evaluation
The performance of the LSTM model is evaluated on a test set or validation set.
Mean squared error (MSE), mean absolute error (MAE), or other suitable evaluation criteria are used.
And adjusting the model structure or parameters according to the evaluation result.
Step K25: extracting time series patterns
And predicting the integrated business data by using the trained LSTM model.
And analyzing the output of the model to identify long-term trends and patterns in the business data.
These patterns may help businesses understand the historical performance and future trends of their business activities.
Through the steps, the LSTM model not only can help enterprises capture long-term patterns in business data, but also can provide valuable insights for decision makers about future business trends.
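A minimal sketch of steps K22-K24, assuming TensorFlow/Keras and that the sequence data has already been windowed into arrays of shape (samples, timesteps, features); the layer sizes and training settings are illustrative.

```python
import numpy as np
import tensorflow as tf

def build_lstm_model(timesteps: int, n_features: int) -> tf.keras.Model:
    """Step K22: input layer, two LSTM layers and a regression output layer."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM layer keeps the sequence
        tf.keras.layers.LSTM(32),                          # second LSTM layer summarizes it
        tf.keras.layers.Dense(1),                          # predicted business indicator
    ])
    # Step K24 evaluates with MSE/MAE, so compile with those metrics
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Hypothetical usage with 12-month windows over 5 business indicators (placeholder data):
X = np.random.rand(200, 12, 5).astype("float32")
y = np.random.rand(200).astype("float32")
model = build_lstm_model(timesteps=12, n_features=5)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)  # step K23
print(model.evaluate(X, y, verbose=0))   # step K24: [mse, mae]
```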
K3. and fusing the recognition results of the CNN model and the LSTM model, and analyzing the potential management problem of the enterprise.
The fusion may be a weighted fusion or another fusion method. Illustratively, the prediction results of the CNN and the LSTM are integrated to obtain a comprehensive business data prediction. Considering that the CNN mainly captures short-term variations while the LSTM captures long-term trends, the combined result should take both aspects of information into account. Therefore, by combining the prediction results of the CNN and LSTM models, the embodiment of the invention can obtain a deep understanding of enterprise business data and identify possible business problems from both short-term and long-term trends.
The following are specific steps for analyzing management problems:
Step K31: merging model outputs
The prediction results of the CNN and the LSTM are integrated to obtain a comprehensive business data prediction.
Considering that the CNN mainly captures short-term fluctuations while the LSTM captures long-term trends, the combined result should take both aspects of information into account.
Step K32: setting an early warning threshold
And setting an early warning threshold value for each operation index according to the historical operation data and the business experience.
When the predicted outcome exceeds these thresholds, this may be an indication of an operational problem.
Step K33: identifying anomalies and deviations
Comparing the predicted result with the actual business data, identifying any significant deviation.
The cause of these deviations is analyzed to determine whether it is due to model errors or operational problems that are actually present.
Step K34: general management problems
From the above analysis, business problems that enterprises may face are summarized, such as:
1. Sales below expectations: predicted sales are far higher than actual sales, which may be due to a failed new product launch, intensified market competition, or an unsuitable marketing strategy.
2. Inventory backlog: the actual inventory level is persistently higher than the predicted level, potentially resulting in excess stock and high inventory costs.
3. Reduced production efficiency: actual long-term output is lower than the LSTM model's prediction, possibly due to production line problems or reduced staff efficiency.
Step K35: providing improved advice
Corresponding advice and improvement measures are provided for each identified business problem.
For example, to increase sales, it may be suggested to conduct market research, understand consumer demand, optimize the product, or adjust the marketing strategy.
Through the steps, the embodiment of the invention can effectively utilize the prediction results of the CNN and LSTM models, deeply analyze and mine potential business problems of enterprises, and provide specific suggestions for decision makers.
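A minimal sketch of the fusion and early-warning logic in steps K31-K33, assuming both branches output predictions for the same business indicators; the weights, indicator values and thresholds are illustrative and would in practice be tuned on validation data.

```python
import numpy as np

def fuse_predictions(cnn_pred: np.ndarray, lstm_pred: np.ndarray,
                     w_cnn: float = 0.4, w_lstm: float = 0.6) -> np.ndarray:
    """Step K31: weighted fusion of short-term (CNN) and long-term (LSTM) predictions."""
    assert abs(w_cnn + w_lstm - 1.0) < 1e-9, "weights should sum to 1"
    return w_cnn * cnn_pred + w_lstm * lstm_pred

def check_thresholds(fused_pred: np.ndarray, actual: np.ndarray,
                     thresholds: np.ndarray) -> np.ndarray:
    """Steps K32-K33: flag indicators whose deviation exceeds the early-warning threshold."""
    deviation = np.abs(actual - fused_pred)
    return deviation > thresholds

# Hypothetical usage for three indicators (sales, inventory, output):
cnn_pred = np.array([120.0, 80.0, 950.0])
lstm_pred = np.array([110.0, 85.0, 1000.0])
actual = np.array([95.0, 99.0, 900.0])
thresholds = np.array([10.0, 8.0, 60.0])
fused = fuse_predictions(cnn_pred, lstm_pred)
print(check_thresholds(fused, actual, thresholds))  # True where a business problem may exist
```

The relative weights reflect how much trust is placed in the short-term versus long-term branch; they are a design choice rather than a fixed part of this embodiment.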
S105, evaluating the potential of the enterprise in different operation stages.
Assessing the potential of an enterprise is an important part of post-investment management. By means of machine learning algorithms, the growth patterns of the enterprise can be learned from historical data, and the enterprise's potential at different stages can be predicted.
As shown in fig. 5, S105 specifically includes L1 to L4:
l1: collecting historical operating data;
Similar to S104, a large amount of historical business data such as sales, number of customers, market share, etc. needs to be collected first.
L2: preprocessing the historical operation data;
According to the quality and integrity of the data, the data is cleaned and preprocessed. The preprocessing comprises the following steps (a minimal code sketch follows this list):
Meaningful features are extracted from the raw data, such as sales growth rate, customer growth rate, etc.
The features are normalized or standardized to ensure the stability and accuracy of the model.
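A minimal sketch of this feature-extraction step, assuming yearly business figures in a pandas DataFrame; the column names and the specific growth-rate features are illustrative assumptions.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def extract_growth_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive growth-rate features from raw yearly figures and standardize them."""
    feats = pd.DataFrame(index=df.index)
    feats["sales_growth"] = df["sales"].pct_change()               # sales growth rate
    feats["customer_growth"] = df["customer_count"].pct_change()   # customer growth rate
    feats["market_share"] = df["market_share"]
    feats = feats.dropna()                                         # first row has no growth rate
    scaled = StandardScaler().fit_transform(feats)
    return pd.DataFrame(scaled, columns=feats.columns, index=feats.index)

# Hypothetical usage:
# history = pd.read_csv("enterprise_history.csv")   # yearly sales, customer_count, market_share
# X = extract_growth_features(history)
```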
L3: constructing a machine learning model, and training the machine learning model by utilizing the preprocessed historical operation data;
and selecting a proper machine learning model according to the characteristics of the evaluation task. For example, for regression tasks, models such as linear regression, decision tree regression, etc. may be selected; for classification tasks, models such as logistic regression, support vector machines, etc. may be selected.
An appropriate model complexity is selected taking into account the amount of data and the feature dimensions.
The machine learning model is trained using the training set data.
And verifying the model by using the verification set data, and evaluating the accuracy and generalization capability of the model.
Taking a decision tree as an example, a decision tree is a commonly used machine learning model that segments and classifies data through a tree structure. In the scenario of evaluating enterprise potential, decision trees may help embodiments of the present invention predict enterprise potential at different stages based on different characteristics, such as sales, number of customers, etc.
Step L31: selection decision tree algorithm
And selecting a proper decision tree algorithm according to the characteristics of the evaluation task. For example, for classification tasks, algorithms of CART, ID3, C4.5, etc. may be selected; for the regression task, a regression tree may be selected.
Step L32: setting decision tree parameters
Parameters such as depth of the decision tree, minimum sample number of leaf nodes, feature selection criteria and the like are set to control complexity of the model.
Using a cross-validation method, the optimal combination of parameters is found to achieve the best model performance.
Step L33: training decision tree models
Training the decision tree model using the training set data.
The training process of the model, such as the change of a loss function, the depth of a tree and the like, is monitored to ensure the stable convergence of the model.
Step L34: verification decision tree model
And verifying the decision tree model by using the verification set data.
The accuracy, recall, F1 score and other indicators of the model are evaluated to ensure the generalization ability of the model.
Step L35: model interpretation and visualization
And visualizing the model by utilizing the tree structure of the decision tree to help a decision maker understand the decision logic of the model.
The prediction results of the model are further interpreted using model interpretation tools, such as SHAP, LIME, etc.
Through the steps, the potential of the enterprise at different stages can be accurately estimated by utilizing the decision tree model, and powerful decision support is provided for the enterprise.
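A minimal sketch of steps L31-L34, assuming scikit-learn and a classification setup in which the label is the enterprise's potential grade; the parameter grid and scoring choice are illustrative.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

def train_potential_classifier(X, y):
    """Steps L31-L34: choose parameters by cross-validation, train, then validate."""
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
    param_grid = {
        "max_depth": [3, 5, 7],                 # L32: tree depth
        "min_samples_leaf": [5, 10, 20],        # L32: minimum samples per leaf node
        "criterion": ["gini", "entropy"],       # L32: feature selection criterion
    }
    search = GridSearchCV(DecisionTreeClassifier(random_state=42),
                          param_grid, cv=5, scoring="f1_macro")   # L32: cross-validation
    search.fit(X_train, y_train)                                  # L33: training
    best_tree = search.best_estimator_
    # L34: validation — accuracy, recall and F1 per class
    print(classification_report(y_val, best_tree.predict(X_val)))
    return best_tree

# Hypothetical usage:
# best_tree = train_potential_classifier(X_features, y_potential_grade)
```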
L4: and evaluating the potential of the enterprise in different operation stages by using the machine learning model.
And applying the trained model to actual data to predict the potential of the enterprise at different stages.
And providing a growth strategy and decision advice for the enterprise according to the output of the model.
Model interpretation tools, such as SHAP, LIME, etc., are used to interpret the predicted results of the model to help the decision maker understand the decision logic of the model.
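A minimal sketch of this interpretation step using the SHAP tree explainer, assuming the hypothetical best_tree and X_features from the sketch above.

```python
import shap

# best_tree and X_features are the hypothetical names from the previous sketch
explainer = shap.TreeExplainer(best_tree)
shap_values = explainer.shap_values(X_features)

# Global view: which features (e.g. sales growth, customer growth) drive the potential score
shap.summary_plot(shap_values, X_features)
```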
Taking a Guizhou science and technology enterprise as an example, the embodiment of the invention will show how to predict the potential of the enterprise at different stages using the trained AI model.
Example: a certain technology Co., Ltd.
A certain technology Co., Ltd. in Guizhou is a technology enterprise mainly engaged in the development of intelligent hardware and AI technology.
Step L41: data acquisition and preprocessing
The operation data is obtained from the company's internal systems, such as its ERP and CRM systems.
The data is preprocessed, for example by normalizing values and filling in missing values, to ensure that the data quality meets the input requirements of the model.
Step L42: model prediction
And inputting the processed data into a trained AI model.
Predictive results of the model are obtained, which represent potential scores of the enterprise at different stages.
Step L43: potential assessment and resolution
Analyzing the potential of a certain technology limited company in Guizhou according to the prediction result of the model:
1. Start-up stage: score 80/100. At this stage the enterprise has strong innovation capability and fast market response, but team building and the accumulation of management experience need to be strengthened.
2. Growth stage: score 85/100. The enterprise has established a certain brand awareness in the market and has a stable customer base, but it needs to increase R&D investment to keep its technology leading.
3. Maturity stage: score 75/100. The enterprise has stable revenue and profit, but faces the challenges of market saturation and intensified competition, and needs to find new growth points.
4. Decline stage: score 60/100. The enterprise's growth has slowed, and it needs to transform or seek a new business model.
Step L44: providing decision advice
Decision suggestions are provided for the company according to the results of the potential evaluation.
For example, increasing R&D investment, strengthening cooperation with universities and research institutions, and expanding into overseas markets.
Through the above steps, the embodiment of the invention can not only evaluate the company's potential at different stages, but also provide it with targeted decision suggestions, helping the company develop and grow continuously.
The embodiment of the application also provides a post-casting management informatization system, wherein a computer program is stored in the system, and the computer program realizes the steps of the method when being executed by a processor.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
The embodiment of the application provides a post-investment management informatization system and a management method thereof. Through the system and the method, the problems of irregular data, lack of unified management, and lack of long-term data accumulation in the enterprise investment process are solved. Meanwhile, a cloud-edge-terminal computing architecture, multi-source heterogeneous data standardization and large-capacity data storage technologies, and AI technology are introduced, realizing intelligent management of post-investment enterprises.
Example two
Based on the same inventive concept as the post-casting management method in the foregoing embodiments, the present application further provides a computer-readable storage medium and/or system having a computer program stored thereon, which when executed by a processor, implements the method as in the first embodiment.
Example three
The present application also provides a post-casting management information system 6000, as shown in fig. 6, including a memory 64 and a processor 61, where the memory stores computer executable instructions, and the processor implements the method when running the computer executable instructions on the memory. In practical applications, the system may also include other necessary elements, including but not limited to any number of input systems 62, output systems 63, processors 61, controllers, memories 64, etc., and all methods that can implement the embodiments of the present application are within the scope of the present application.
The memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or portable compact disc read-only memory (CD-ROM), and is used to store the associated instructions and data.
The input system 62 is for inputting data and/or signals and the output system 63 is for outputting data and/or signals. The output system 63 and the input system 62 may be separate devices or may be a single device.
The processor may include one or more processors, including for example one or more central processing units (central processing unit, CPU), which in the case of a CPU, may be a single-core CPU or a multi-core CPU. The processor may also include one or more special purpose processors, which may include GPUs, FPGAs, etc., for acceleration processing.
The memory is used to store program codes and data for the network device.
The processor is used to call the program code and data in the memory to perform the steps of the method embodiments described above. Reference may be made specifically to the description of the method embodiments, and no further description is given here.
In the several embodiments provided by the present application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the division of the unit is merely a logic function division, and there may be another division manner when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. The coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, system or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, the computer instructions produce, in whole or in part, the flows or functions according to the embodiments of the application. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable system. The computer instructions may be stored in or transmitted via a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium such as a floppy disk, hard disk, magnetic tape or magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid state disk (SSD), or the like.
The specification and figures are merely exemplary illustrations of the present application and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Thus, the present application is intended to include such modifications and alterations insofar as they come within the scope of the application or the equivalents thereof.

Claims (10)

1. A post-casting management method, comprising:
Constructing a post-casting management informatization system;
The post-casting management informatization system collects enterprise operation data;
Analyzing the collected enterprise operation data by using a first deep learning model;
mining potential operation problems of the enterprise according to the analysis result;
The potential of the enterprise at different stages of operation is assessed.
2. The method of claim 1, wherein analyzing the collected enterprise business data using a first deep learning model comprises:
Preprocessing the enterprise operation data;
Dividing the preprocessed enterprise business data into first image class data and first sequence class data;
Constructing a convolutional neural network CNN model and a cyclic neural network RNN model;
Training the CNN model by using historical image class data and training the RNN model by using historical sequence class data;
analyzing the first image class data by using the trained CNN model, and outputting a first enterprise operating condition and a first decision suggestion;
analyzing the first sequence type data by using the trained RNN model, and outputting a second enterprise operating condition and a second decision suggestion;
And fusing the first enterprise operating condition, the first decision proposal, the second enterprise operating condition and the second decision proposal to generate an enterprise operating analysis report.
3. The method of claim 2, wherein training the CNN model with historical image class data comprises:
setting initial values for weights of the models;
setting a cross entropy loss function;
setting an Adam optimizer and setting a learning rate;
Setting the batch size;
dividing historical image data into set batch sizes for forward propagation to obtain prediction output;
comparing the predicted output of the CNN model with the real label by using a loss function, and calculating the loss;
Back propagation is performed, comprising: calculating the gradient of each layer by using a chain rule, and updating the weight and bias of the CNN model by using an optimizer according to the gradient;
Evaluating performance of the CNN model using a validation set at the end of each training period;
training the RNN model with historical sequence-like data, comprising:
initializing parameters of the RNN model;
inputting sequence data of a batch into the RNN model;
processing the sequence data time step by time step, wherein for each time step, the current input and the hidden state of the previous time step are used to generate the hidden state of the current time step;
Obtaining a predicted output through a full connection layer in the last time step;
Comparing the predicted output of the RNN model with the real label by using a loss function, and calculating loss;
performing back propagation;
The performance of the RNN model is evaluated using a validation set.
4. The method of claim 1, wherein mining potential business problems for the enterprise based on the analysis results comprises:
Acquiring a historical enterprise operation report;
Preprocessing the historical enterprise operation report, and dividing the preprocessed historical enterprise operation report into a second training set and a second testing set;
Constructing a second deep learning model;
training the second deep learning model by using the second training set;
And excavating potential business problems of the enterprise by using the trained second deep learning model.
5. The method of claim 4, wherein mining potential business problems of the enterprise using the trained second deep learning model comprises:
inputting the business data of the enterprise into the trained second deep learning model, and outputting a prediction result;
comparing the prediction result with data in a normal operation mode, and identifying abnormal data;
Based on the anomaly data, determining potential business problems for the enterprise, the potential business problems being a combination of one or more of inventory backlog, customer churn, finance, supply chain interruption, market competition.
6. The method of claim 4, wherein the second deep learning model comprises a teacher model and a student model, and wherein constructing the second deep learning model and training the second deep learning model using the second training set comprises:
Constructing a teacher model and a student model;
Training the teacher model using the second training set;
training the student model using an output of the teacher model as a soft target;
In the process of training the student model, a temperature scaling technology is used for adjusting probability output of the teacher model, and the student model is trained iteratively after the probability output of the teacher model is adjusted each time, so that potential management problems of the enterprise are mined through the student model.
7. The method of claim 4, wherein the second deep learning model comprises a parallel model of a CNN and a long short-term memory (LSTM) model, and wherein mining potential business problems of the enterprise using the trained second deep learning model comprises:
analyzing the image management data by using the CNN model, and identifying the data characteristics of the image management data;
analyzing the sequence management data by using the LSTM model, and identifying the data characteristics of the sequence management data;
and fusing the recognition results of the CNN model and the LSTM model, and analyzing the potential management problem of the enterprise.
8. The method of claim 1, wherein evaluating the potential of the enterprise at different business stages comprises:
Collecting historical operating data;
preprocessing the historical operation data;
Constructing a machine learning model, and training the machine learning model by utilizing the preprocessed historical operation data;
And evaluating the potential of the enterprise in different operation stages by using the machine learning model.
9. The method of claim 1, wherein after the post-casting management informatization system collects enterprise business data, the method further comprises:
Constructing a large-scale storage system, and establishing a data redundancy backup mechanism in the storage system;
Monitoring the health status of the storage node;
after any storage node is down, determining a data block to be migrated;
and selecting an optimal data migration path, and starting data migration operation so as to migrate the data blocks needing to be migrated.
10. A post-casting management informatization system, characterized in that a computer program is stored in the system, which, when being executed by a processor, implements the steps of the method of any of claims 1-9.