CN117153421A - Data monitoring method and device based on neural network algorithm - Google Patents

Data monitoring method and device based on neural network algorithm

Info

Publication number
CN117153421A
Authority
CN
China
Prior art keywords
neural network
network model
data set
training
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311008658.2A
Other languages
Chinese (zh)
Inventor
穆显显
张苑琳
Current Assignee
Taiji Computer Corp Ltd
Original Assignee
Taiji Computer Corp Ltd
Priority date
Filing date
Publication date
Application filed by Taiji Computer Corp Ltd
Priority to CN202311008658.2A
Publication of CN117153421A
Legal status: Pending

Classifications

    • G06N 3/0442 — Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/045 — Combinations of networks
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G16H 50/70 — ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G16H 50/80 — ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, e.g. flu

Abstract

The application provides a data monitoring method and device based on a neural network algorithm, in the field of artificial intelligence. The method comprises the following steps: obtaining a historical daily new-positive data set, and screening it by feature selection to obtain a target historical data set; partitioning the target historical data set according to preset partition window parameters, and performing data preprocessing on the partitioned data; constructing a candidate deep learning neural network model from a convolutional neural network model, a bidirectional long short-term memory network model and an attention mechanism, training the candidate model on the training set until training finishes, and outputting the deep learning neural network model; and inputting new daily positive data into the deep learning neural network model to generate a prediction result. The method enables epidemic prediction and improves the efficiency and convenience of epidemic risk prediction for a target area.

Description

Data monitoring method and device based on neural network algorithm
Technical Field
The application relates to the field of artificial intelligence, in particular to a data monitoring method and device based on a neural network algorithm.
Background
Epidemic risk prediction is one of the key problems in the field of public health safety. Predicting spatial epidemic risk data makes urban epidemic risk more intuitive, lets people determine the risk level of an area more clearly, and supports epidemic supervision and decisions on resuming work and schooling.
At present, epidemic trend prediction is often performed manually: epidemic information is collected by hand and trends are predicted from that information. Because epidemic trends are affected by many factors, manual methods cannot predict them accurately, so epidemic prevention and control can only be carried out passively. To reduce the serious harm caused by epidemics, accurate prediction of epidemic trends is necessary.
Disclosure of Invention
To address these problems, a data monitoring method and device based on a neural network algorithm are provided. A deep learning neural network model is used to extract features from the data set, and an attention mechanism is introduced to obtain a prediction result based on the selected features, thereby achieving accurate prediction of epidemic risk.
The first aspect of the present application provides a data monitoring method based on a neural network algorithm, including:
obtaining a historical daily new-positive data set, and screening the historical daily new-positive data set by feature selection to obtain a target historical data set;
partitioning the target historical data set according to preset partition window parameters, performing data preprocessing on the partitioned target historical data set, and dividing the target historical data set into a training set and a test set according to a preset proportion;
constructing a candidate deep learning neural network model based on a convolutional neural network model, a bidirectional long short-term memory network model and an attention mechanism, training the candidate deep learning neural network model on the training set until training finishes, and outputting the deep learning neural network model;
and inputting new daily positive data into the deep learning neural network model to generate a prediction result.
Optionally, the obtaining of the historical daily new-positive data set and the screening of it by feature selection to obtain the target historical data set comprise:
aggregating the historical daily new-positive data of the local area, of persons arriving from outside the area, and of persons arriving from overseas, to generate the historical daily new-positive data set;
calculating, for the local area, outside arrivals and overseas arrivals respectively, the proportion of their historical daily new-positive data in the whole data set;
and sorting the proportions from largest to smallest, and selecting the historical daily new-positive data corresponding to a preset number of the top-ranked proportions as the target historical data set.
Optionally, the data preprocessing of the partitioned target historical data set includes:
processing the target historical data set by min-max normalization.
Optionally, training the candidate deep learning neural network model on the training set includes:
extracting features from the training set with the convolutional neural network model;
performing sequence prediction on the training set with the bidirectional long short-term memory network model, and evaluating the influence of different features by comparing inputs with outputs;
and calculating the error through the attention mechanism, setting a corresponding weight for each feature, and updating the weights.
Optionally, the method further comprises:
inputting the test set into the deep learning neural network model to generate a test result;
calculating a mean square error according to the test result and the true value in the test set;
and evaluating the prediction performance of the deep learning neural network model according to the magnitude of the mean square error.
The second aspect of the present application proposes a data monitoring device based on a neural network algorithm, including:
an acquisition module, configured to obtain a historical daily new-positive data set and screen it by feature selection to obtain a target historical data set;
a data processing module, configured to partition the target historical data set according to preset partition window parameters, preprocess the partitioned data, and divide the target historical data set into a training set and a test set according to a preset proportion;
a training module, configured to construct a candidate deep learning neural network model based on the convolutional neural network model, the bidirectional long short-term memory network model and the attention mechanism, train the candidate model on the training set until training finishes, and output the deep learning neural network model;
and an output module, configured to input new daily positive data into the deep learning neural network model to generate a prediction result.
A third aspect of the application proposes a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any of the first aspects described above when executing the computer program.
A fourth aspect of the application proposes a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the first aspects above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
by combining deep learning with a neural network model, features are extracted from the positive-case data set using the deep learning neural network model, and an attention mechanism is introduced to obtain a prediction result based on the selected features, so that accurate epidemic prediction is achieved and the efficiency and convenience of epidemic risk prediction for a target area are improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart illustrating a data monitoring method based on a neural network algorithm, according to an embodiment of the present application;
FIG. 2 is a block diagram of a data monitoring device based on a neural network algorithm, according to an embodiment of the present application;
fig. 3 is a block diagram of an electronic device.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
Fig. 1 is a flowchart of a data monitoring method based on a neural network algorithm, according to an embodiment of the present application, including:
step 101, obtaining a new positive data set of the history date, and screening the new positive data set of the history date according to feature selection to obtain a target history data set.
In the embodiment of the application, the history day newly-increased positive data set is composed of various data, including history day newly-increased positive data of local areas, history day newly-increased positive data of external personnel and history day newly-increased positive data of overseas input personnel.
To improve the overall value of the epidemic data, the features with the most information and the strongest correlation can be selected from the features that make up the historical daily positive data set. This can be done with statistical and machine learning methods such as correlation analysis, mutual information, or feature importance ranking.
In this embodiment, the proportions of the historical daily new-positive data of the local area, outside arrivals and overseas arrivals in the whole data set are calculated respectively; the proportions are sorted from largest to smallest; and the historical daily new-positive data corresponding to a preset number of the top-ranked proportions are selected as the target historical data set.
In a possible embodiment, taking Beijing as an example, the proportions of historical daily new positive cases among the districts within Beijing, persons arriving from outside Beijing, and persons arriving from overseas are calculated respectively; these proportions are sorted from largest to smallest; and the historical daily new-positive data corresponding to a preset number of the top-ranked proportions are selected as the target historical data set.
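The proportion-based screening described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the source names, sample counts and `top_k` parameter are hypothetical.

```python
import numpy as np

def select_top_sources(daily_counts, top_k=2):
    """Rank data sources (e.g. local districts, out-of-area arrivals,
    overseas arrivals) by their share of total new positive cases and
    keep the top_k sources as the target historical data set."""
    totals = {name: np.asarray(c).sum() for name, c in daily_counts.items()}
    grand_total = sum(totals.values())
    # Proportion of each source within the whole historical data set
    proportions = {name: t / grand_total for name, t in totals.items()}
    # Sort sources by proportion, largest first, and keep the top_k
    ranked = sorted(proportions, key=proportions.get, reverse=True)
    return {name: daily_counts[name] for name in ranked[:top_k]}

# Hypothetical example: three sources over five days
data = {
    "local":    [30, 25, 40, 35, 20],   # local districts
    "domestic": [5, 8, 6, 7, 4],        # arrivals from other provinces
    "overseas": [1, 0, 2, 1, 1],        # overseas arrivals
}
target = select_top_sources(data, top_k=2)
```

With these illustrative numbers, the two largest shares ("local" and "domestic") are retained and the smallest source is screened out.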
It should be noted that the selected features should accurately describe the relationships in the data while minimizing redundancy and noise. By selecting the most relevant features, the efficiency and accuracy of the model can be improved, overfitting can be prevented, and the underlying process that generates the data can be better understood.
Step 102: partition the target historical data set according to preset partition window parameters, preprocess the partitioned data, and divide the target historical data set into a training set and a test set according to a preset proportion.
In the prior art, the partition window is a key parameter that determines how input data is divided into smaller subsets, or partitions, for neural network processing. A partition window is a fixed-length time interval or spatial region that determines the size and number of partitions of the input data; it can be set to a fixed size or made adaptive, adjusting the window size to the characteristics of the data. Partition windows are typically used when processing time-series data or other data with temporal or spatial structure, and they can significantly affect the performance and efficiency of a neural network model.
It should be noted that once the window is defined, the input data is divided into overlapping or non-overlapping partitions of equal size; the neural network processes these partitions independently and then combines the results into the final output. This approach has advantages including improved efficiency, reduced memory requirements, and the ability to process large data sets. However, the partition window can also significantly affect the accuracy and performance of the model: if the window is too small, important temporal or spatial relationships in the data may be missed, reducing accuracy; conversely, if the window is too large, the model may become too complex and overfit the data, reducing generalization.
To determine the optimal partition window for a given data set, factors such as data set size, temporal or spatial resolution, and the complexity of the underlying patterns should be considered; grid search, cross-validation and other optimization methods can be used. In summary, the partition window of a neural network model is a key parameter governing how input data is split for processing, and its selection should weigh these factors to ensure optimal model performance and efficiency.
In one embodiment of the application, the window length is set to 7 days, the prediction horizon to 1 day, and the sliding step to 1: the data of the previous 7 days is examined to predict the new-positive population count 1 day ahead.
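The 7-day window / 1-day horizon / stride-1 scheme above can be sketched as a standard sliding-window split. This is an illustrative NumPy sketch; the function name and sample counts are not from the patent.

```python
import numpy as np

def make_windows(series, window=7, horizon=1, stride=1):
    """Split a daily time series into (input, target) pairs: each sample
    uses `window` consecutive days to predict the value `horizon` days
    after the window ends, sliding forward by `stride` days."""
    series = np.asarray(series)
    X, y = [], []
    for start in range(0, len(series) - window - horizon + 1, stride):
        X.append(series[start:start + window])         # 7 days of inputs
        y.append(series[start + window + horizon - 1]) # value 1 day ahead
    return np.array(X), np.array(y)

# 10 days of hypothetical new-positive counts
counts = [12, 15, 11, 18, 20, 17, 22, 25, 21, 24]
X, y = make_windows(counts, window=7, horizon=1, stride=1)
```

Ten days of data with a 7-day window and 1-day horizon yield three samples; each target is the count on the day immediately after its window.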
In addition, the application also preprocesses the divided data.
It should be noted that an important aspect of data preprocessing is normalization, a technique for scaling data into a small range of values. It ensures that all features contribute equally to the model and that the model is not dominated by features with large values or wide ranges.
Common preprocessing methods include min-max normalization, which scales the data into the range 0 to 1; Z-score normalization, which transforms the data to a mean of 0 and a standard deviation of 1; and decimal scaling normalization, which divides the data by a power of 10 to scale it into the range -1 to 1.
In this embodiment, the target historical data set is processed by min-max normalization, which reduces the influence of outliers and noise in the data and can improve the performance of the neural network model.
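The min-max normalization used in this embodiment can be sketched as follows; the inverse mapping is included because predicted values typically need to be mapped back to case counts. A minimal NumPy sketch with illustrative numbers.

```python
import numpy as np

def min_max_normalize(x):
    """Scale values linearly into [0, 1]: (x - min) / (max - min)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), lo, hi

def min_max_denormalize(scaled, lo, hi):
    """Map model outputs in [0, 1] back to the original case-count scale."""
    return np.asarray(scaled) * (hi - lo) + lo

counts = np.array([10.0, 20.0, 30.0, 50.0])
scaled, lo, hi = min_max_normalize(counts)
restored = min_max_denormalize(scaled, lo, hi)
```

Here `scaled` is `[0.0, 0.25, 0.5, 1.0]`, and `restored` recovers the original counts exactly. Note that the minimum and maximum must be taken from the training set only and reused for the test set, or information leaks across the split.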
Step 103: construct a candidate deep learning neural network model based on the convolutional neural network model, the bidirectional long short-term memory network model and the attention mechanism; train the candidate deep learning neural network model on the training set until training finishes; and output the deep learning neural network model.
Convolutional neural networks (CNNs) are a class of feedforward neural networks with convolutional computations and a deep structure, and are one of the representative algorithms of deep learning. By training, a CNN can learn the mapping between inputs and outputs without requiring an explicit mathematical expression of that mapping.
CNNs are used for feature extraction and classification of input image or audio data; they can automatically learn and identify features such as edges, textures and shapes in images, enabling image classification or recognition. Compared with traditional machine learning algorithms, CNNs offer better recognition accuracy and stronger robustness.
In addition, long Short-Term Memory (LSTM) is a manual time-loop network, and although the loop neural network can process time-series data and has a certain Memory capacity, the problems of gradient disappearance, gradient explosion and the like are also common, and the LSTM network can effectively avoid the problems due to the characteristic of partially forgetting historical data. In addition to better predicting time series data, LSTM neural networks can also be used in other areas, such as natural language processing and speech recognition, because the structure of LSTM neural networks enables them to handle long-term dependencies in input sequences, which are common in natural language and speech signals. Therefore, long and short term memory networks have become a very important type of neural network in deep learning.
In addition, attention mechanism (Attention) is an important technology for neural network modeling, and its excellent performance has been widely used in a plurality of prediction tasks such as machine translation, question-answering, emotion analysis, part-of-speech tagging, and the like. The attention mechanism has the advantages of improving the prediction precision, better explaining the neural network, solving the problems of increased input length, ignored important factors, long-sequence input sequence processing and the like. Thus, the mechanism of attention has become a widely focused area of research.
The attention mechanism focuses on important information by weighting different parts of the input data. In a conventional neural network model, all input data is treated as equally important; the attention mechanism instead assigns different weights to different parts of the input, so that the focus lands more accurately on the important information. The attention mechanism separates the input data into a query vector and key-value pairs: the query vector represents the object of interest, while the key-value pairs represent the parts of the input data. By computing the similarity between the query vector and the keys, weights for the different parts are obtained, focusing attention on the important information.
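The query/key-value weighting just described can be sketched as scaled dot-product attention. This is a generic single-query NumPy sketch of the mechanism, not the patent's specific network; the example vectors are illustrative.

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: score each key against the query,
    softmax the scores into weights, and return the weighted sum of
    the values together with the weights themselves."""
    scores = keys @ query / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ values, weights

# Two input parts: the first key is aligned with the query,
# so the first value should receive the larger weight.
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
values = np.array([10.0, 20.0])
query = np.array([1.0, 0.0])
output, weights = attention(query, keys, values)
```

The weights sum to 1, and the part whose key is most similar to the query dominates the output, which is exactly the "focus on important information" behavior described above.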
In this embodiment, the convolutional neural network model, the bidirectional long short-term memory network model and the attention mechanism are used to build the candidate deep learning neural network model, capturing spatial and temporal features and focusing on the features that most influence the prediction result, which improves the accuracy and reliability of the model and provides strong support for epidemic monitoring and prediction.
In one embodiment of the application, processing of the training set during training includes:
extracting features from the training set with the convolutional neural network model;
performing sequence prediction on the training set with the bidirectional long short-term memory network model, and evaluating the influence of different features by comparing inputs with outputs;
and calculating the error through the attention mechanism, setting a corresponding weight for each feature, and updating the weights.
In addition, it should be understood that training the model is an iterative process: network parameters are adjusted repeatedly until the loss of the whole model falls below a preset value, or the loss stops changing or changes only slowly, at which point the model has converged and a trained model is obtained.
Alternatively, training may be considered finished once a preset number of training iterations is reached.
Alternatively, training may be considered finished once a preset training duration is reached.
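The stopping rules above (loss below a threshold, loss plateau, iteration budget) can be sketched with a simple gradient-descent loop. As a stand-in for the patent's neural network, the sketch fits a toy least-squares problem; the tolerances and data are illustrative, not from the source.

```python
import numpy as np

def train(X, y, lr=0.1, loss_tol=1e-6, plateau_tol=1e-9, max_iters=10000):
    """Gradient-descent loop with the three stopping rules from the text:
    stop when the loss falls below a preset value, when the loss stops
    changing (plateau), or when a preset iteration budget is exhausted."""
    w = np.zeros(X.shape[1])
    prev_loss = np.inf
    loss = np.inf
    for _ in range(max_iters):                     # rule 3: iteration budget
        pred = X @ w
        loss = np.mean((pred - y) ** 2)
        if loss < loss_tol:                        # rule 1: loss below preset value
            break
        if abs(prev_loss - loss) < plateau_tol:    # rule 2: loss has plateaued
            break
        prev_loss = loss
        grad = 2 * X.T @ (pred - y) / len(y)       # gradient of the MSE loss
        w -= lr * grad
    return w, loss

# Toy linear data standing in for the network's training set
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([2.0, 3.0, 5.0])
w, final_loss = train(X, y)
```

On this solvable toy problem the loop stops via rule 1 with weights close to `[2, 3]`; in the patent's setting the same rules would wrap a backpropagation step instead of this closed-form gradient.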
In the embodiment of the application, the deep learning neural network model is also evaluated through the test set.
It should be noted that model evaluation is an important part of building a deep learning model: it helps assess the model's performance and predictive ability. In model evaluation, training loss and validation loss are two important indicators of the model's behavior during training and testing.
Both losses are computed with loss functions. During training, the model minimizes the difference between predicted and true values by optimizing the loss function, improving its performance. The training loss measures performance on the training set: the smaller its value, the better the model fits the training data. Similarly, the validation loss measures performance on held-out validation data: the smaller its value, the better the generalization ability of the model.
In this embodiment, the training loss and validation loss are computed by the same method, i.e. with a loss function; the MSE (mean squared error) function is used to measure the accuracy of the deep learning neural network model. MSE reflects the degree of difference between an estimator and the quantity being estimated: it is the expected value of the square of the difference between the predicted value and the true value. The mean squared error is calculated as:

MSE = (1/n) Σᵢ₌₁ⁿ (yᵢ − ŷᵢ)²

where yᵢ is the true value, ŷᵢ is the predicted value, and n is the number of samples.
The value of MSE ranges from 0 to positive infinity; the smaller the value, the smaller the model's prediction error and the better its performance. During training, the model updates its parameters through the backpropagation algorithm so that the training loss keeps decreasing; the training loss is computed at each iteration to monitor the training process, while the validation loss is not used to update parameters and serves only to evaluate the model's performance.
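The MSE computation above is straightforward to implement; this is a minimal NumPy sketch with hypothetical test-set numbers.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the mean of squared differences between
    true values and model predictions; lower is better, minimum 0."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Hypothetical test-set evaluation: true daily counts vs. predictions
y_true = [20.0, 22.0, 25.0]
y_pred = [19.0, 24.0, 25.0]
error = mse(y_true, y_pred)   # (1 + 4 + 0) / 3
```

The squared differences here are 1, 4 and 0, so the MSE is 5/3; in the described method this value, computed on the test set, is what gauges the trained model's prediction performance.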
Step 104: input new daily positive data into the deep learning neural network model to generate a prediction result.
In this embodiment, the new daily positive data is input into the deep learning neural network model to generate a prediction result, which reflects the predicted epidemic situation.
According to this embodiment, the deep learning neural network model is used to extract features from the positive-case data set, and the attention mechanism is introduced to obtain a prediction result based on the selected features, achieving accurate epidemic prediction and improving the efficiency and convenience of epidemic risk prediction for the target area.
Fig. 2 is a block diagram of a data monitoring apparatus based on a neural network algorithm, according to an embodiment of the present application, including:
an acquisition module 210, configured to obtain a historical daily new-positive data set and screen it by feature selection to obtain a target historical data set;
a data processing module 220, configured to partition the target historical data set according to preset partition window parameters, preprocess the partitioned data, and divide the target historical data set into a training set and a test set according to a preset proportion;
a training module 230, configured to construct a candidate deep learning neural network model based on the convolutional neural network model, the bidirectional long short-term memory network model and the attention mechanism, train the candidate model on the training set until training finishes, and output the deep learning neural network model;
and an output module 240, configured to input new daily positive data into the deep learning neural network model to generate a prediction result.
The specific manner in which the modules of the above apparatus embodiments perform their operations has been described in detail in the method embodiments and will not be repeated here.
Fig. 3 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 3, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 701 performs the respective methods and processes described above, such as the data monitoring method. For example, in some embodiments, the data monitoring method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the data monitoring method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the data monitoring method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a backend component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such backend, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. A client and a server are generally remote from each other and typically interact through a communication network; their relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (8)

1. A data monitoring method based on a neural network algorithm, characterized by comprising the following steps:
obtaining a historical daily new positive data set, and screening the historical daily new positive data set by feature selection to obtain a target historical data set;
partitioning the target historical data set based on preset partition window parameters, performing data preprocessing on the partitioned target historical data set, and dividing it into a training set and a test set according to a preset proportion;
constructing a candidate deep learning neural network model based on a convolutional neural network model, a bidirectional long short-term memory network model, and an attention mechanism, training the candidate deep learning neural network model on the training set until training is complete, and outputting the deep learning neural network model;
and inputting newly added daily positive data into the deep learning neural network model to generate a prediction result.
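The patent discloses no code; purely as an illustration, the window-based partitioning and proportional train/test split recited in this claim might be sketched as follows (the series values, window size of 3, and 0.8 split ratio are hypothetical, not taken from the disclosure):

```python
def make_windows(series, window):
    """Slide a fixed-size window over the series to build (input, target) pairs."""
    samples = []
    for i in range(len(series) - window):
        samples.append((series[i:i + window], series[i + window]))
    return samples

def train_test_split(samples, train_ratio):
    """Split samples into training and test sets by a preset proportion."""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# Hypothetical daily new positive counts
series = [12, 15, 11, 18, 20, 17, 22, 25, 21, 30]
samples = make_windows(series, window=3)          # 7 (input, target) pairs
train, test = train_test_split(samples, train_ratio=0.8)
```

Splitting chronologically (rather than shuffling) preserves the temporal order that the downstream sequence model depends on.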
2. The method of claim 1, wherein obtaining the historical daily new positive data set and screening it by feature selection comprises:
summarizing the historical daily new positive data of the local area, of external personnel, and of overseas arrivals to generate the historical daily new positive data set;
calculating the respective proportions of the historical daily new positive data of the local area, external personnel, and overseas arrivals in the historical daily new positive data set;
and sorting the proportions of the local area, external personnel, and overseas arrivals from largest to smallest, and selecting the historical daily new positive data corresponding to a preset number of top-ranked proportions as the target historical data set.
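As an illustrative sketch of this proportion-based screening (the source names, counts, and top-2 cutoff below are hypothetical examples, not values from the disclosure):

```python
def select_top_sources(counts, top_n):
    """Rank data sources by their share of the total and keep the top_n largest."""
    total = sum(counts.values())
    shares = {src: n / total for src, n in counts.items()}
    ranked = sorted(shares, key=shares.get, reverse=True)
    return ranked[:top_n], shares

# Hypothetical daily totals per source category
counts = {"local": 120, "external": 45, "overseas": 15}
top, shares = select_top_sources(counts, top_n=2)  # keeps the two largest shares
```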
3. The method of claim 1, wherein performing data preprocessing on the partitioned target historical data set comprises:
processing the target historical data set by min-max normalization.
4. The method of claim 1, wherein training the candidate deep learning neural network model on the training set comprises:
extracting features from the training set through the convolutional neural network model;
performing sequence prediction on the training set through the bidirectional long short-term memory network model, and evaluating the influence of different features by comparing inputs and outputs;
and calculating the error through the attention mechanism, setting a corresponding weight for each feature, and updating the weights.
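The disclosure does not specify the internals of the CNN or BiLSTM stages; purely to illustrate the attention-weighting step, one common formulation assigns each feature a softmax weight derived from a score and forms a weighted sum (the feature values and scores below are hypothetical):

```python
import math

def softmax(scores):
    """Convert raw scores into positive weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, scores):
    """Weight each feature by its attention weight and sum into a context value."""
    weights = softmax(scores)
    return sum(w * f for w, f in zip(weights, features)), weights

# Hypothetical feature values and raw attention scores
context, weights = attend([1.0, 2.0, 3.0], [0.1, 0.2, 0.7])
```

Features with larger scores dominate the context value, which matches the claim's notion of weighting features by their influence.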
5. The method of claim 1, further comprising:
inputting the test set into the deep learning neural network model to generate a test result;
calculating a mean square error from the test result and the true values in the test set;
and evaluating the prediction performance of the deep learning neural network model according to the magnitude of the mean square error.
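The mean square error of this evaluation step is the average of squared differences between predicted and true values; a minimal sketch (the predictions and truths are hypothetical):

```python
def mean_squared_error(predictions, truths):
    """Average of squared differences between predicted and true values."""
    return sum((p - t) ** 2 for p, t in zip(predictions, truths)) / len(truths)

mse = mean_squared_error([2.5, 0.0, 2.0], [3.0, -0.5, 2.0])
```

A smaller value indicates predictions closer to the true series, which is the evaluation criterion the claim relies on.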
6. A data monitoring device based on a neural network algorithm, comprising:
an acquisition module configured to obtain a historical daily new positive data set and screen it by feature selection to obtain a target historical data set;
a data processing module configured to partition the target historical data set based on preset partition window parameters, perform data preprocessing on the partitioned target historical data set, and divide it into a training set and a test set according to a preset proportion;
a training module configured to construct a candidate deep learning neural network model based on the convolutional neural network model, the bidirectional long short-term memory network model, and the attention mechanism, train the candidate deep learning neural network model on the training set until training is complete, and output the deep learning neural network model;
and an output module configured to input newly added daily positive data into the deep learning neural network model to generate a prediction result.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of claims 1-5 when executing the computer program.
8. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1-5.
CN202311008658.2A 2023-08-10 2023-08-10 Data monitoring method and device based on neural network algorithm Pending CN117153421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311008658.2A CN117153421A (en) 2023-08-10 2023-08-10 Data monitoring method and device based on neural network algorithm

Publications (1)

Publication Number Publication Date
CN117153421A true CN117153421A (en) 2023-12-01

Family

ID=88883396

Country Status (1)

Country Link
CN (1) CN117153421A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination