CN117591372A - Data processing method, electronic device and storage medium - Google Patents

Data processing method, electronic device and storage medium

Info

Publication number
CN117591372A
CN117591372A CN202311625887.9A
Authority
CN
China
Prior art keywords
data
model
historical
performance
historical data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311625887.9A
Other languages
Chinese (zh)
Inventor
彭港 (Peng Gang)
汤海波 (Tang Haibo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202311625887.9A priority Critical patent/CN117591372A/en
Publication of CN117591372A publication Critical patent/CN117591372A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3024Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The disclosure provides a data processing method, an electronic device, and a storage medium. The data processing method includes: acquiring first historical data and second historical data of a first device, wherein the first historical data is composed of different types of second historical data, and the second historical data characterizes historical performance parameters of the first device; training a first model for predicting the performance of the first device based on the first historical data; performing secondary training on the first model according to the second historical data to obtain a second model; and acquiring real-time data of the first device, and inputting the real-time data into the second model to obtain the performance of the first device.

Description

Data processing method, electronic device and storage medium
Technical Field
The present disclosure relates to a data processing method, an electronic device, and a storage medium.
Background
As the core device of a communication network, a base station is typically deployed on a server, so the hardware performance of the server is key to ensuring the normal operation of the base station. For example, the CPU temperature of the server is an important index affecting server performance: when the CPU temperature is too high, server performance degrades, the base station runs slowly, and data services in the base station's coverage area are seriously affected.
Disclosure of Invention
One aspect of the present disclosure provides a data processing method, including: acquiring first historical data and second historical data of a first device, wherein the first historical data is composed of different types of second historical data, and the second historical data characterizes historical performance parameters of the first device; training a first model for predicting the performance of the first device based on the first historical data; performing secondary training on the first model according to the second historical data to obtain a second model; and acquiring real-time data of the first device, and inputting the real-time data into the second model to obtain the performance of the first device.
According to an embodiment of the present disclosure, acquiring first history data of a first device includes: transmitting a first data acquisition instruction to the second device so that the second device acquires first historical data of the first device from the first device according to the first data acquisition instruction; training a first model for predicting first device performance based on the first historical data, comprising: and sending a first data processing instruction to the second device so that the second device trains a first model for predicting the performance of the first device according to the first historical data.
According to an embodiment of the present disclosure, obtaining second history data of a first device includes: transmitting a second data acquisition instruction to the third device so that the third device acquires second historical data of the first device from the first device according to the second data acquisition instruction; performing secondary training on the first model according to the second historical data to obtain a second model, including: and sending a second data processing instruction to the third device so that the third device performs secondary training on the first model according to the second historical data to obtain a second model.
According to the embodiment of the disclosure, the third device is an edge device of the first device, or the third device and the first device are edge devices of each other.
According to an embodiment of the present disclosure, acquiring real-time data of a first device includes: transmitting a third data acquisition instruction to the third device so that the third device acquires real-time data from the first device according to the third data acquisition instruction; inputting the real-time data into a second model to obtain the performance of the first device, including: and sending a third data processing instruction to the third device so that the third device inputs the real-time data into the second model to obtain the performance of the first device.
According to an embodiment of the present disclosure, the first history data includes a plurality of different types of second history data; transmitting a second data acquisition instruction to a third device, comprising: and sending second data acquisition instructions to a plurality of third devices of different types, so that the third devices of different types acquire second historical data of corresponding types from the first device according to the second data acquisition instructions.
According to an embodiment of the present disclosure, acquiring real-time data of a first device includes: and sending a third data acquisition instruction to a plurality of different types of third devices so that the different types of third devices acquire the corresponding types of real-time data from the first device according to the third data acquisition instruction.
According to an embodiment of the disclosure, the method further includes compressing the first model, and transmitting the compressed first model to the third device.
Another aspect of the present disclosure also provides an electronic device, including: one or more processors; and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method described above.
Another aspect of the present disclosure also provides a storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described method.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a data processing method according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a data processing method according to an embodiment of the disclosure;
FIG. 3A schematically illustrates a training process of a first model according to an embodiment of the present disclosure;
FIG. 3B schematically illustrates a training process of a second model according to an embodiment of the present disclosure;
FIG. 3C schematically illustrates a performance prediction process of a first device according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a block diagram of a data processing apparatus according to an embodiment of the present disclosure; and
fig. 5 schematically illustrates a schematic block diagram of an example electronic device that may be used to implement the methods of embodiments of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
Thus, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon, the computer program product being usable by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
Fig. 1 schematically illustrates an application scenario 100 of a data processing method according to an embodiment of the present disclosure. It should be noted that fig. 1 is merely an example of a scenario in which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, the goal of the application scenario 100 is to predict, in real time, the performance of the server 101, such as CPU temperature, CPU utilization, CPU over-frequency data, accelerator card temperature, storage space, server fan speed, number of system processes, throughput, ambient temperature, and the like.
A plurality of edge devices 102 are communicatively or electrically connected to the server 101 and are capable of collecting historical performance data of the server 101. In the application scenario 100, an edge device 102 may be an internet-of-things device (e.g., a sensor, a monitoring camera, a smart home device), an edge server, an embedded system (e.g., an industrial control system, an intelligent transportation system, a medical device), or a smart device (e.g., a smartphone, a tablet computer). Different edge devices 102 collect different performance data; e.g., edge device 1021 collects the CPU temperature of the server 101, edge device 1022 collects the CPU utilization of the server 101, …, and edge device 102n collects the ambient temperature of the server 101.
The cloud device 103 is communicatively or electrically connected to the plurality of edge devices 102 and obtains the historical performance data of the server 101 sent by them. The cloud device 103 performs model training on the historical performance data to obtain a performance prediction model for predicting the performance of the server 101. The cloud device 103 sends the performance prediction model to each edge device 102, and each edge device 102 trains the model again on the historical data it collected itself to obtain an adjusted performance prediction model. For example, the cloud device 103 obtains the CPU temperature data of the server 101 collected by edge device 1021, the CPU utilization data collected by edge device 1022, …, and the ambient temperature data collected by edge device 102n. The cloud device 103 trains a large performance prediction model on the CPU temperature data, CPU utilization data, …, and ambient temperature data, and sends the model to each edge device 102. Edge device 1021 performs secondary training on the model using the CPU temperature data to obtain a performance prediction model more strongly correlated with CPU temperature data; edge device 1022 performs secondary training using the CPU utilization data to obtain a model more strongly correlated with CPU utilization data; …; and edge device 102n performs secondary training using the ambient temperature data to obtain a model more strongly correlated with ambient temperature data.
After a performance prediction model more strongly correlated with the data collected by each edge device 102 has been trained, the corresponding data of the server 101 is collected in real time and input into that model to obtain the corresponding performance prediction result. For example, edge device 1021 collects real-time CPU temperature data and inputs it into its model to obtain the CPU temperature prediction for the server 101, edge device 1022 collects real-time CPU utilization data and inputs it into its model to obtain the CPU utilization prediction, …, and edge device 102n collects real-time ambient temperature data and inputs it into its model to obtain the ambient temperature prediction.
Fig. 2 schematically illustrates a flow chart of a data processing method according to an embodiment of the present disclosure.
Specifically, as shown in fig. 2, the method includes operations S201 to S204.
In operation S201, first historical data and second historical data of a first device are obtained, wherein the first historical data is composed of second historical data of different types, and the second historical data is used for characterizing historical performance parameters of the first device.
In the embodiment of the disclosure, the first device may be a server, or another electronic device with data processing or data storage capabilities. Taking table 1 as an example, the first historical data is a data set formed from a plurality of different second historical data, and the second historical data is performance parameter data generated by the first device in operation, such as the CPU temperature, CPU utilization, CPU over-frequency data, accelerator card temperature, storage space, server fan speed, number of system processes, throughput, and ambient temperature in the application scenario 100. Thus, each type of second historical data characterizes one performance parameter of the first device, and the first historical data is a grouping of different types of second historical data.
TABLE 1
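The grouping described above can be sketched as follows; all parameter names and values here are illustrative assumptions, not taken from table 1:

```python
import numpy as np

# Hypothetical sketch: the "first historical data" is simply the collection of
# all typed "second historical data" series recorded for the first device.
second_history = {
    "cpu_temperature": np.array([60.0, 65.0, 70.0, 75.0, 80.0]),     # °C
    "cpu_utilization": np.array([30.0, 35.0, 40.0, 45.0, 50.0]),     # %
    "system_processes": np.array([100.0, 110.0, 120.0, 130.0, 140.0]),
}

# First historical data: all types stacked into one multivariate data set,
# one column per performance parameter type.
first_history = np.column_stack(list(second_history.values()))
print(first_history.shape)  # (5, 3): 5 time steps, 3 parameter types
```

Each column then characterizes one performance parameter, mirroring the relationship between the two kinds of historical data.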
In operation S202, a first model for predicting first device performance is trained based on first historical data.
In the embodiment of the disclosure, the training process includes extracting features of the performance parameters to form a feature data stream; after normalization, the feature data stream is divided into a training sample set and a test sample set, where the training set is used for model training and the test set is used for model verification. The embodiment may employ, but is not limited to, an LSTM model. Taking CPU temperature as an example, during LSTM training the model is verified with the test set: the mean square error between the predicted and true values of the server CPU temperature is calculated and compared against an expected threshold. If the mean square error is smaller than the threshold, training is complete; otherwise, the first historical data is collected again for training.
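The normalize, split, and verify-by-MSE flow described above can be sketched as follows. A naive persistence predictor stands in for the trained LSTM so the sketch stays self-contained, and the threshold value is an illustrative assumption, not from the disclosure:

```python
import numpy as np

# Illustrative CPU temperature stream (°C); a real system would use the
# collected first historical data.
temps = np.array([60.0, 65.0, 70.0, 75.0, 80.0, 75.0, 80.0, 85.0])

# Min-max normalization of the feature stream.
norm = (temps - temps.min()) / (temps.max() - temps.min())

# Chronological split: first 75% for training, the rest for testing.
cut = int(len(norm) * 0.75)
train, test = norm[:cut], norm[cut:]

def predict_next(series):
    """Placeholder for the trained model: predict each value from its
    predecessor (persistence forecast)."""
    return series[:-1]

# Verification: mean square error between predicted and true values.
preds = predict_next(test)
mse = float(np.mean((test[1:] - preds) ** 2))

EXPECTED_THRESHOLD = 0.05  # illustrative value, not from the patent
training_done = mse < EXPECTED_THRESHOLD
print(training_done)
```

If `training_done` is false, the flow loops back to collecting the first historical data again, as the description states.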
In the embodiment of the disclosure, after the first model is trained, it may be compressed. For example, a neural-network weight pruning algorithm may prune the trained first model by weight: a regularization term is introduced into the objective function so that the weights tend toward sparsity, or weights below a predefined threshold are zeroed, thereby achieving model compression. Assume an LSTM model has been trained with the following weight parameters:
Layer 1:
LSTM weights:[0.2,0.3,0.4,0.5]
LSTM biases:[0.1,0.2,0.3,0.4]
Dense weights:[0.6,0.7,0.8,0.9]
Dense biases:[0.5]
Layer 2:
LSTM weights:[0.3,0.4,0.5,0.6]
LSTM biases:[0.2,0.3,0.4,0.5]
Dense weights:[0.7,0.8,0.9,1.0]
Dense biases:[0.6]
In performing weight pruning, the embodiment of the present disclosure sets some of the weight parameters in the model to zero to reduce the complexity and storage space of the model, for example:
Layer 1:
LSTM weights:[0.2,0.0,0.4,0.0]
LSTM biases:[0.1,0.0,0.3,0.0]
Dense weights:[0.6,0.0,0.8,0.0]
Dense biases:[0.5]
Layer 2:
LSTM weights:[0.0,0.4,0.0,0.6]
LSTM biases:[0.0,0.3,0.0,0.5]
Dense weights:[0.7,0.0,0.9,0.0]
Dense biases:[0.6]
It can be seen that in the pruned first model, some weight parameters are set to zero, reducing the model's complexity and storage space and making the model easier for miniaturized devices to process and store. For example, in the application scenario 100, because some edge devices 102 have limited storage and processing capacity, the cloud device 103 compresses the trained model and sends the smaller first model to the edge devices 102, which helps the edge devices 102 store and retrain the first model.
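A minimal sketch of the threshold variant of pruning mentioned above is given below. Note that it zeros weights by magnitude, so it produces a different zero pattern than the arbitrary illustration in the listings above; the threshold value is an assumption:

```python
import numpy as np

def prune(weights, threshold=0.35):
    """Zero every weight whose absolute value falls below the threshold,
    shrinking the model for storage-constrained edge devices."""
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

# Layer-1 LSTM weights from the listing above.
layer1_lstm_weights = np.array([0.2, 0.3, 0.4, 0.5])
print(prune(layer1_lstm_weights))  # [0.  0.  0.4 0.5]
```

The zeroed entries need not be stored or multiplied, which is what reduces the model's footprint on the edge devices 102.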
In operation S203, the first model is trained for the second time according to the second history data, to obtain a second model.
As mentioned above, the first model is trained on the first historical data, whose training data spans multiple types. This operation performs secondary training on the first model using data of a single type, yielding a performance prediction model more strongly correlated with that type of second historical data. Thus, after secondary training with each different type of second historical data, a plurality of different second models are obtained, each used to predict a different performance aspect of the first device.
It can be appreciated that the first model has the following characteristic: because its training data volume is large, its output can reflect the overall performance of the first device; but because many data types are fused, its prediction of any one specific performance aspect may be weaker. The embodiment of the disclosure therefore retrains the first model with performance data of a specific type to obtain the second model, strengthening the second model's predictive capability for that type.
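Under stated assumptions, the two-stage training can be sketched with a one-coefficient autoregressive predictor standing in for the LSTM-based first model; all series values are illustrative normalized readings:

```python
import numpy as np

# Illustrative normalized second historical data of two types.
second_history = {
    "cpu_temperature": np.array([0.0, 0.2, 0.4, 0.6, 0.8, 0.6, 0.8, 1.0]),
    "cpu_utilization": np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.4, 0.6, 0.7]),
}

def fit_ar1(series_list):
    """First training: one AR(1) coefficient fit on all types pooled."""
    x = np.concatenate([s[:-1] for s in series_list])
    y = np.concatenate([s[1:] for s in series_list])
    return float(x @ y / (x @ x))

def fine_tune(w, series, lr=0.1, steps=200):
    """Secondary training: gradient descent starting from the first model's
    coefficient, using only one data type."""
    x, y = series[:-1], series[1:]
    for _ in range(steps):
        w -= lr * 2 * x @ (w * x - y) / len(x)
    return w

def mse(w, series):
    return float(np.mean((w * series[:-1] - series[1:]) ** 2))

w_first = fit_ar1(list(second_history.values()))                   # first model
w_second = fine_tune(w_first, second_history["cpu_temperature"])   # second model
# The specialized second model fits the CPU-temperature series better.
```

The same fine-tuning step, run separately per data type, yields one specialized second model per performance parameter, as the description states.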
In operation S204, real-time data of the first device is acquired, and the real-time data is input to the second model, thereby obtaining performance of the first device.
According to the embodiment of the disclosure, initial training uses the overall performance data and secondary training uses performance data of a specific type, so the prediction given by the model better reflects the actual condition of the first device, improving prediction accuracy. In addition, after the first training, the embodiment compresses the obtained first model, reducing the model's complexity and storage space and making it convenient for miniaturized devices to retrain and store the model.
The method illustrated in fig. 2 is further described below with reference to fig. 3A-3C in conjunction with the exemplary embodiment.
As shown in FIG. 3A, the training process of the first model includes operations S311-S312.
In operation S311, a first data acquisition instruction is transmitted to the second device to cause the second device to acquire first history data of the first device from the first device according to the first data acquisition instruction.
In the embodiment of the present disclosure, the second device may be, for example, the cloud device 103 in the application scenario 100, and the body that sends the first data acquisition instruction to the second device may be the second device itself, the first device, or another third device, such as an edge device 102 in the application scenario 100. The second device may obtain the first historical data in several ways: the second device directly accesses the first device, the first device pushes the first historical data to the second device, or a third device obtains the first historical data from the first device and forwards it to the second device (or the second device accesses the third device to obtain it).
In operation S312, first data processing instructions are sent to the second device to cause the second device to train a first model for predicting performance of the first device based on the first historical data.
In the embodiment of the present disclosure, on the basis of operation S311, the main body that sends the first data processing instruction to the second device may be the second device itself, the first device, or other third devices, such as the edge device 102 in the application scenario 100. Since the training process of the first model is described in detail in the operation illustrated in fig. 2, a detailed description thereof will be omitted.
As shown in FIG. 3B, the training process of the second model includes operations S321-S322.
In operation S321, a second data acquisition instruction is transmitted to the third device, so that the third device acquires second history data of the first device from the first device according to the second data acquisition instruction.
In the embodiment of the present disclosure, the third device is an edge device of the first device; for example, the third device is an edge device 102 of the server 101 in the application scenario 100, where the edge devices 102 have already been explained in detail. It is worth mentioning that the third device and the first device may be edge devices of each other: in a server cluster, server B may be an edge device of server A, used to predict the performance of server A, and correspondingly server A may be an edge device of server B, used to predict the performance of server B. It follows that multiple servers may be edge devices of one server, and one server may be an edge device of multiple servers.
In the embodiment of the present disclosure, the body that sends the second data acquisition instruction to the third device may be the third device itself, the first device, or the second device. The third device may obtain the second historical data in two ways: the third device directly accesses the first device, or the first device pushes the second historical data to the third device.
Specifically, a second data acquisition instruction is sent to a plurality of third devices of different types, so that the third devices of different types acquire second historical data of corresponding types from the first device according to the second data acquisition instruction, as shown in table 2:
TABLE 2
As can be seen, four third devices (the internet-of-things sensor, the edge server, the PLC (programmable logic controller), and the embedded system) respectively acquire four types of performance data (second historical data) from the first device: CPU temperature, CPU utilization, CPU over-frequency, and number of system processes.
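The per-type dispatch of second data acquisition instructions described above can be sketched as follows; the message fields, device identifiers, and parameter names are illustrative assumptions, not part of the disclosure:

```python
# Mapping from each third-device type to the performance parameter it collects,
# mirroring table 2's four device/parameter pairs.
DEVICE_TO_PARAMETER = {
    "iot_sensor": "cpu_temperature",
    "edge_server": "cpu_utilization",
    "plc": "cpu_over_frequency",
    "embedded_system": "system_process_count",
}

def build_acquisition_instructions(target_device_id):
    """One second-data-acquisition instruction per third device, each naming
    the data type that device should collect from the first device."""
    return [
        {"to": device, "from": target_device_id, "collect": parameter}
        for device, parameter in DEVICE_TO_PARAMETER.items()
    ]

instructions = build_acquisition_instructions("server_101")
print(len(instructions))  # 4
```

Each third device then acquires only its own type of second historical data, as operation S321 describes.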
In operation S322, a second data processing instruction is sent to the third device, so that the third device performs secondary training on the first model according to the second history data to obtain a second model.
In the embodiment of the present disclosure, the body that transmits the second data processing instruction to the third device may be one of the third device itself, the first device, and the second device on the basis of operation S321. Since the training process of the second model is described in detail in the operation illustrated in fig. 2, a detailed description thereof will be omitted.
As shown in fig. 3C, the performance prediction process of the first device includes operations S331 to S332.
In operation S331, a third data acquisition instruction is transmitted to a plurality of different types of third devices, so that the different types of third devices acquire the corresponding types of real-time data from the first device according to the third data acquisition instruction.
In the embodiment of the present disclosure, the body that sends the third data acquisition instruction to the third device may be the third device itself, the first device, or the second device. The third device may obtain the corresponding type of real-time data in two ways: the third device directly accesses the first device, or the first device pushes the real-time data to the third device.
Continuing the example of table 2, the internet-of-things sensor acquires the CPU temperature data of the first device (server 101) in real time, the edge server acquires the CPU utilization data in real time, the PLC acquires the CPU over-frequency data in real time, and the embedded system acquires the system process count data in real time.
In operation S332, a third data processing instruction is sent to the third device, so that the third device inputs the real-time data into the second model, resulting in the performance of the first device.
In the embodiment of the present disclosure, the main body that transmits the third data processing instruction to the third device may be one of the third device itself, the first device, and the second device. In connection with the example of table 2 above, as shown in table 3 below:
Time point          CPU temperature (°C)  CPU utilization (%)  CPU over-frequency  Number of system processes
                    [IoT sensor]          [edge server]        [PLC]               [embedded system]
2022-01-01 10:00    60                    30                   0                   100
2022-01-01 11:00    65                    35                   0                   110
2022-01-01 12:00    70                    40                   1                   120
2022-01-01 13:00    75                    45                   1                   130
2022-01-01 14:00    80                    50                   1                   140
2022-06-01 14:00    75                    50                   1                   140
2022-06-01 15:00    80                    60                   1                   150
2022-06-01 16:00    85                    65                   1                   160
TABLE 3
Taking the third device being the internet-of-things sensor as an example: if the current time is 2022-06-01 14:00, the CPU temperature of the first device at the current time, 75 °C, is input into the second model trained by the internet-of-things sensor; the second model outputs that at the future time 2022-06-01 15:00 the CPU temperature of the first device will be 80 °C, and at the future time 2022-06-01 16:00 it will be 85 °C.
Taking the third device being an edge server as an example: if the current time is 2022-06-01 14:00, the CPU utilization of the first device at the current time, 50%, is input into the second model trained by the edge server; the second model outputs that at the future time 2022-06-01 15:00 the CPU utilization of the first device will be 60%, and at the future time 2022-06-01 16:00 it will be 65%.
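The iterated real-time prediction in the examples above can be sketched as follows; the linear "second model" and its coefficient are illustrative stand-ins for the trained LSTM, chosen only so the first step matches the table's example:

```python
def forecast(model_w, current_value, steps):
    """Iterated one-step-ahead prediction: each output is fed back in as the
    next input, extending the forecast further into the future."""
    out, value = [], current_value
    for _ in range(steps):
        value = model_w * value
        out.append(value)
    return out

# A coefficient of 80/75 reproduces the first step of the table's example
# (75 °C now -> 80 °C at the next hour); later steps diverge from the table.
preds = forecast(80.0 / 75.0, 75.0, steps=2)
print(round(preds[0], 1))  # 80.0
```

In practice the second model's prediction, not a fixed linear coefficient, determines each future value, but the feedback loop is the same.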
According to the embodiment of the disclosure, the larger and more complex first historical data is placed on a second device such as the cloud device 103 for initial training, while secondary training and storage are performed on the edge devices 102 with weaker data processing and storage capabilities. Computing resources, storage space, and operation strategies can thus be reasonably deployed, enabling real-time prediction of cluster server performance and thereby real-time response.
Fig. 4 schematically shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the data processing apparatus 400 includes a data acquisition module 410, a first training module 420, a second training module 430, and a performance prediction module 440. The data processing apparatus 400 may perform the method described above with reference to fig. 2-3C to implement the performance prediction of the first device.
Specifically, the data obtaining module 410 is configured to obtain first historical data and second historical data of the first device; the first training module 420 is configured to train a first model for predicting the performance of the first device according to the first history data; the second training module 430 is configured to perform secondary training on the first model according to the second historical data to obtain a second model; the performance prediction module 440 is configured to obtain real-time data of the first device, and input the real-time data to the second model to obtain the performance of the first device.
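As a structural illustration only, the cooperation of the four modules of Fig. 4 might be composed as follows; the callables and class name are stand-ins introduced for this sketch, not APIs from the disclosure.

```python
# Illustrative composition of the apparatus in Fig. 4. Module names follow
# the text; the injected callables are placeholders, not the patented models.

class DataProcessingApparatus:
    def __init__(self, fetch, train_first, fine_tune):
        self.fetch = fetch              # data acquisition module 410
        self.train_first = train_first  # first training module 420
        self.fine_tune = fine_tune      # second training module 430

    def predict_performance(self, device_id):
        """Performance prediction module 440: run the full pipeline."""
        first_hist, second_hist = self.fetch(device_id)
        first_model = self.train_first(first_hist)
        second_model = self.fine_tune(first_model, second_hist)
        realtime = first_hist[-1:]  # stand-in for real-time acquisition
        return second_model(realtime)
```

Splitting acquisition, primary training, secondary training, and prediction into separately injectable callables matches the text's note that the modules may be combined, split, or implemented in hardware independently of one another.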
It will be appreciated that the modules of the data processing apparatus 400 (the data acquisition module 410, the first training module 420, the second training module 430, and the performance prediction module 440) may be combined into one module, or any one of them may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the data acquisition module 410, the first training module 420, the second training module 430, and the performance prediction module 440 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on chip, a system on substrate, a system on package, or an Application Specific Integrated Circuit (ASIC), by any other reasonable manner of integrating or packaging circuitry, in hardware or firmware, or in a suitable combination of software, hardware, and firmware implementations. Alternatively, at least one of these modules may be at least partially implemented as a computer program module which, when executed by a computer, performs the functions of the respective module.
Fig. 5 shows a schematic block diagram of an example electronic device 500 that may be used to implement methods of embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read-Only Memory (ROM) 502 or loaded from a storage unit 508 into a Random Access Memory (RAM) 503. The RAM 503 can also store the various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the electronic device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard or mouse; an output unit 507 such as various types of displays and speakers; a storage unit 508 such as a magnetic disk or optical disk; and a communication unit 509 such as a network card, modem, or wireless communication transceiver. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
The computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the various methods and processes described above, such as the data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A data processing method, comprising:
acquiring first historical data and second historical data of a first device, wherein the first historical data is composed of different types of second historical data, and the second historical data is used for representing historical performance parameters of the first device;
training a first model for predicting the first device performance based on the first historical data;
performing secondary training on the first model according to the second historical data to obtain a second model;
and acquiring real-time data of the first device, and inputting the real-time data into the second model to obtain the performance of the first device.
2. The data processing method of claim 1, wherein the acquiring the first historical data of the first device comprises: sending a first data acquisition instruction to a second device, so that the second device acquires first historical data of the first device from the first device according to the first data acquisition instruction;
the training a first model for predicting the first device performance according to the first historical data comprises: and sending a first data processing instruction to the second device so that the second device trains a first model for predicting the performance of the first device according to the first historical data.
3. The data processing method according to claim 1 or 2, wherein the acquiring the second history data of the first device includes: transmitting a second data acquisition instruction to a third device, so that the third device acquires second historical data of the first device from the first device according to the second data acquisition instruction;
and performing secondary training on the first model according to the second historical data to obtain a second model, wherein the secondary training comprises the following steps: and sending a second data processing instruction to the third device so that the third device performs secondary training on the first model according to the second historical data to obtain a second model.
4. A data processing method according to claim 3, wherein the third device is an edge device of the first device, or the third device and the first device are edge devices of each other.
5. The data processing method according to claim 3 or 4, wherein the acquiring real-time data of the first device includes: transmitting a third data acquisition instruction to the third device so that the third device acquires real-time data from the first device according to the third data acquisition instruction;
the inputting the real-time data into the second model to obtain the performance of the first device comprises the following steps: and sending a third data processing instruction to the third device so that the third device inputs the real-time data to the second model to obtain the performance of the first device.
6. The data processing method of claim 5, wherein the first history data comprises a plurality of different types of second history data;
the sending a second data acquisition instruction to the third device includes:
and sending second data acquisition instructions to a plurality of third devices of different types, so that the third devices of different types acquire second historical data of corresponding types from the first device according to the second data acquisition instructions.
7. The data processing method of claim 6, wherein the acquiring real-time data of the first device comprises:
and sending a third data acquisition instruction to a plurality of different types of third devices so that the different types of third devices acquire the corresponding types of real-time data from the first devices according to the third data acquisition instruction.
8. A data processing method according to claim 3, further comprising:
compressing the first model, and sending the compressed first model to the third device.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-8.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1-8.
CN202311625887.9A 2023-11-30 2023-11-30 Data processing method, electronic device and storage medium Pending CN117591372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311625887.9A CN117591372A (en) 2023-11-30 2023-11-30 Data processing method, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN117591372A true CN117591372A (en) 2024-02-23

Family

ID=89913010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311625887.9A Pending CN117591372A (en) 2023-11-30 2023-11-30 Data processing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117591372A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination