CN117113137A - Power model matching method and device, storage medium and electronic equipment - Google Patents

Power model matching method and device, storage medium and electronic equipment

Info

Publication number
CN117113137A
Authority
CN
China
Prior art keywords
power model
disturbance
sample
detected
power
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310987415.1A
Other languages
Chinese (zh)
Inventor
那琼澜
苏丹
王艺霏
李信
张海明
任建伟
马跃
邢宁哲
尚芳剑
李欣怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Jibei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Jibei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Information and Telecommunication Branch of State Grid Jibei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202310987415.1A priority Critical patent/CN117113137A/en
Publication of CN117113137A publication Critical patent/CN117113137A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06Electricity, gas or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The application discloses a power model matching method and device, a storage medium and electronic equipment, and relates to the technical field of electric power. The method comprises the following steps: obtaining a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are obtained by applying to the sample to be detected the disturbance template corresponding to each power model in a power model library; determining the prediction result of each power model on the sample to be detected and on the disturbance sample corresponding to that power model; and determining a target power model from the power model library according to the prediction results. By applying the disturbance templates to the sample to be detected to generate the disturbance samples, the situations in which classification errors easily occur at the decision boundary of a power model can be quantified. Matching the sample to be detected against the statistical characteristics of each power model's decision boundary (the disturbance samples) focuses attention on where classification errors occur at the decision boundary; the matched power model in the power model library is then used as the target power model, thereby realizing reuse of power models.

Description

Power model matching method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of power technologies, and in particular, to a method and apparatus for matching a power model, a storage medium, and an electronic device.
Background
At present, in order to meet classification requirements in power systems, clustering, classification and other artificial intelligence techniques are applied to construct classification models of the power system. However, building a classification model from scratch each time consumes manpower and material resources, and the classification effect of the newly built model cannot be guaranteed. Therefore, how to realize reuse of power models is a problem that needs to be solved at present.
Disclosure of Invention
In view of the above problems, the application provides a power model matching method and device, a storage medium and electronic equipment, which solve the problem of how to realize reuse of power models.
In order to solve the technical problems, the application provides the following scheme:
in a first aspect, the present application provides a power model matching method, including: obtaining a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are samples after the sample to be detected is applied with disturbance templates corresponding to each power model in a power model library; determining a prediction result of a sample to be detected by each power model and a disturbance sample corresponding to each power model; and determining a target power model from the power model library according to the prediction result.
With reference to the first aspect, in one possible implementation manner, a disturbance template of the power model is generated according to summary information of the power model; applying a disturbance template of the power model to training data in a training data set of the power model to obtain a disturbance template sample; calculating a disturbance change value pushing a disturbance template sample to a decision boundary of the power model; and superposing the disturbance change value to the disturbance template.
With reference to the first aspect, in another possible implementation manner, a sample to be detected is predicted according to the power model to obtain a first prediction result, and a disturbance sample corresponding to the power model is predicted according to the power model to obtain a second prediction result; and determining the prediction precision of the power model according to the first prediction result and the second prediction result.
With reference to the first aspect, in another possible implementation manner, a power model with the greatest prediction accuracy in the power model library is taken as the target power model.
With reference to the first aspect, in another possible implementation manner, a plurality of power models in a power model library are obtained, where the plurality of power models are power models with prediction accuracy greater than a first threshold in the power model library; calculating the similarity between a sample to be detected and training samples of a plurality of power models; and taking the power model with the largest similarity among the plurality of power models as a target power model.
With reference to the first aspect, in another possible implementation manner, a projection operator is controlled to be less than or equal to an applied disturbance amplitude, where the projection operator is used to indicate a norm sphere onto which the disturbance template is projected.
With reference to the first aspect, in another possible implementation manner, the norm sphere uses a world coordinate system origin as a center and uses a disturbance amplitude as a radius.
In a second aspect, the present application provides a power model matching apparatus, comprising: the device comprises an acquisition module, a prediction module and a determination module.
The acquisition module is used for acquiring a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are samples after the disturbance templates corresponding to each power model in the power model library are applied to the sample to be detected.
And the prediction module is used for determining a prediction result of the sample to be detected by each power model and the disturbance sample corresponding to each power model.
And the determining module is used for determining a target power model from the power model library according to the prediction result.
With reference to the second aspect, in one possible implementation manner, the apparatus further includes: the disturbance module is used for generating a disturbance template of the power model according to the abstract information of the power model; applying a disturbance template of the power model to training data in a training data set of the power model to obtain a disturbance template sample; calculating a disturbance change value pushing a disturbance template sample to a decision boundary of the power model; and superposing the disturbance change value to the disturbance template.
With reference to the second aspect, in another possible implementation manner, the prediction module is specifically configured to predict a sample to be detected according to the power model to obtain a first prediction result, and predict a disturbance sample corresponding to the power model according to the power model to obtain a second prediction result; and determining the prediction precision of the power model according to the first prediction result and the second prediction result.
With reference to the second aspect, in another possible implementation manner, the determining module is specifically configured to use, as the target power model, a power model with the greatest prediction accuracy in the power model library.
With reference to the second aspect, in another possible implementation manner, the determining module is specifically configured to obtain a plurality of power models in the power model library, where the plurality of power models are power models in the power model library with prediction accuracy greater than a first threshold; calculating the similarity between a sample to be detected and training samples of a plurality of power models; and taking the power model with the largest similarity among the plurality of power models as a target power model.
With reference to the second aspect, in another possible implementation manner, the perturbation module is further configured to control a projection operator to be less than or equal to an applied perturbation amplitude, where the projection operator is configured to indicate a norm sphere onto which the perturbation template is projected.
With reference to the second aspect, in another possible implementation manner, the norm sphere uses a world coordinate system origin as a center and uses a disturbance amplitude as a radius.
In order to achieve the above object, according to a third aspect of the present application, there is provided a storage medium including a stored program, wherein the device in which the storage medium is controlled to execute the above-described power model matching method of the first aspect when the program runs.
To achieve the above object, according to a fourth aspect of the present application, there is provided an electronic device including at least one processor, and at least one memory, bus connected to the processor; the processor and the memory complete communication with each other through a bus; the processor is configured to invoke the program instructions in the memory to perform the power model matching method of the first aspect described above.
By means of the technical scheme, the technical scheme provided by the application has at least the following advantages:
according to the power model matching method, the device, the storage medium and the electronic equipment, the disturbance sample is generated by applying the disturbance template to the sample to be detected, so that the situation that classification errors are easy to occur at the decision boundary of the power model can be quantified. The statistical characteristics (disturbance samples) of the decision boundary of the to-be-detected sample and the to-be-detected sample are matched, so that the situation that the classification error occurs at the decision boundary of the power model can be focused, whether the characteristics of the power model are matched with the characteristics of the to-be-detected sample or not is further determined, and the matched power model in the power model library is used as a target power model, so that the reuse of the power model is realized.
The foregoing description is only an overview of the technical solution of the present application. It is provided so that the technical means of the application can be understood more clearly and implemented in accordance with the contents of the specification, and so that the above and other objects, features and advantages of the application become more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 shows a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 shows a flow diagram of a power model matching method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a decision boundary versus perturbation provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a perturbation template provided by an embodiment of the present application;
fig. 5 shows a schematic structural diagram of a power model matching device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
In the embodiments of the present application, terms such as "first" and "second" do not imply any logical or chronological dependency, nor do they limit the number of elements or the execution order. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one element from another.
The meaning of the term "at least one" in embodiments of the present application means one or more, and the meaning of the term "plurality" in embodiments of the present application means two or more.
It should also be understood that the term "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if a [stated condition or event] is detected" may be interpreted as "when a [stated condition or event] is determined", "in response to determining [the stated condition or event]", "when [the stated condition or event] is detected" or "in response to detecting [the stated condition or event]", depending on the context.
As described in the background, in actual application scenarios, fully training a power model from scratch or adjusting an existing power model according to the user's needs consumes huge manpower and material costs, and the prediction effect of the newly trained or adjusted power model cannot be guaranteed. Therefore, the application selects a target power model from the power model library according to the user demand (that is, the characteristics of the sample to be detected), thereby realizing reuse of power models.
Implementing reuse of the power models in a power model library requires considering the following aspects:
First, the long-term users of each power model library are constrained by their own collection equipment and collection environments, so user demands follow certain implicit rules. These implicit rules can be mined from the samples to be detected uploaded by the users, and the degree to which a power model's prediction capability matches the demand can then be determined from them.
Second, matching the user demand against the power model's prediction capability must satisfy a time response requirement. The prediction capability of a power model is characterized both by its training samples and by its model parameter summary information (the disturbance template), so how the two are applied effectively in quick matching and fine matching needs to be considered. In general, a matching method based on training samples responds relatively slowly; a reasonable solution is therefore quick matching based on the model parameter summary information, complemented by slower but finer matching based on the training samples.
Third, considering the matching response time and the future extensibility of the matching method, summary information is provided for each model in the power model library. While remaining concise, the summary information includes a statistical description of the model's decision capability and, in particular, quantitatively describes the robustness of the model, that is, the situations in which recognition errors easily occur near the decision boundary. A user can therefore analyze, from the model summary information alone, which specific samples are prone to recognition errors for a given model, which helps the user avoid selecting a model that is prone to obvious abnormal errors.
In view of this, an embodiment of the present application provides a power model matching method, which specifically includes: obtaining a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are obtained by applying to the sample to be detected the disturbance template corresponding to each power model in a power model library; determining the prediction result of each power model on the sample to be detected and on the disturbance sample corresponding to that power model; and determining a target power model from the power model library according to the prediction results. By applying the disturbance templates to the sample to be detected to generate the disturbance samples, the situations in which classification errors easily occur at the decision boundary of a power model can be quantified. Matching the sample to be detected against the statistical characteristics of each power model's decision boundary (the disturbance samples) focuses attention on where classification errors occur at the decision boundary and determines whether the characteristics of the power model match those of the sample to be detected; the matched power model in the power model library is then used as the target power model, thereby realizing reuse of power models.
The embodiment of the application also provides a power model matching device which can be used for executing the power model matching method. Alternatively, the power model matching device may be an electronic device with data processing capability, or a functional module in the electronic device, which is not limited thereto.
For example, the electronic device may be a server, which may be a single server or a server cluster composed of a plurality of servers. As another example, the electronic device may be a terminal device such as a mobile phone, a tablet, a desktop computer, a laptop, a handheld computer, a notebook, an Ultra-Mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, etc. As another example, the electronic device may also be a video recording device, a video monitoring device, or the like. The present application does not particularly limit the specific form of the electronic device.
Taking an electronic device as an example, as shown in fig. 1, fig. 1 is a hardware structure of an electronic device 100 according to the present application.
As shown in fig. 1, the electronic device 100 includes a processor 110, a communication line 120, and a communication interface 130.
Optionally, the electronic device 100 may also include a memory 140. The processor 110, the memory 140, and the communication interface 130 may be connected by a communication line 120.
The processor 110 may be a Central Processing Unit (CPU), a general purpose processor, a Network Processor (NP), a Digital Signal Processor (DSP), a microprocessor, a microcontroller, a Programmable Logic Device (PLD), or any combination thereof. The processor 110 may also be any other apparatus having a processing function, such as a circuit, a device, or a software module, without limitation.
In one example, processor 110 may include one or more CPUs, such as CPU0 and CPU1 in fig. 1.
As an alternative implementation, electronic device 100 includes multiple processors, e.g., processor 170 may be included in addition to processor 110. Communication line 120 is used to communicate information between various components included in electronic device 100.
A communication interface 130 for communicating with other devices or other communication networks. The other communication network may be an ethernet, a radio access network (Radio Access Network, RAN), a wireless local area network (Wireless Local Area Networks, WLAN), etc. The communication interface 130 may be a module, a circuit, a transceiver, or any device capable of enabling communication.
Memory 140 for storing instructions. Wherein the instructions may be computer programs.
The memory 140 may be, but is not limited to, a Read-Only Memory (ROM) or other type of static storage device capable of storing static information and/or instructions, a Random Access Memory (RAM) or other type of dynamic storage device capable of storing information and/or instructions, an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, etc.
It should be noted that the memory 140 may exist separately from the processor 110 or may be integrated with the processor 110. Memory 140 may be used to store instructions or program code or some data or the like. The memory 140 may be located in the electronic device 100 or may be located outside the electronic device 100, without limitation.
The processor 110 is configured to execute instructions stored in the memory 140 to implement a communication method according to the following embodiments of the present application. For example, when the electronic device 100 is a terminal or a chip in a terminal, the processor 110 may execute instructions stored in the memory 140 to implement steps performed by a transmitting end in the following embodiments of the present application.
As an alternative implementation, the electronic device 100 further comprises an output device 150 and an input device 160. The output device 150 may be a device capable of outputting data of the electronic apparatus 100 to a user, such as a display screen, a speaker, or the like. The input device 160 is a device capable of inputting data to the electronic apparatus 100, such as a keyboard, a mouse, a microphone, or a joystick.
It should be noted that the structure shown in fig. 1 does not constitute a limitation of the computing device; the computing device may include more or fewer components than those shown in fig. 1, may combine some components, or may use a different arrangement of components.
The power model matching device and the application scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation to the technical solution provided by the embodiments of the present application, and as a person of ordinary skill in the art can know, with the evolution of the power model matching device and the appearance of a new service scenario, the technical solution provided by the embodiments of the present application is equally applicable to similar technical problems.
Next, a power model matching method will be described in detail with reference to the accompanying drawings. Fig. 2 is a schematic flow chart of a power model matching method provided by the application. The method is applied to the power model matching device with the hardware structure shown in fig. 1, and specifically comprises the following steps:
step 210, obtaining a sample to be detected and a plurality of disturbance samples.
To enable a user to directly match, in the power model library, a power model whose sample characteristics fit the user's own samples and then use that power model directly, and to avoid the problems of retraining a power model from the user's samples, which consumes huge manpower and material costs and whose prediction effect cannot be guaranteed, the application combines the user's sample to be detected with the summary information of each power model in the power model library to obtain disturbance samples. The disturbance samples make it possible to focus on the situations where the decision boundary is easily confused, that is, where the power model makes recognition errors. In other words, it can be checked whether the characteristic response of the user's sample to be detected tends to fall in areas where the decision boundary is easily confused; such areas generally correspond to non-smooth decision boundaries and over-fitting.
In order to quantify the situations where the decision boundary is prone to confusion, that is, where the power model makes recognition errors, a disturbance template (i.e. disturbance noise) is calculated for each power model from all training samples in that power model's training data set in the power model library. The disturbance template satisfies the condition that, while most training samples in the training data set of the power model are misidentified after being disturbed, the minimum disturbance change value remains smaller than the amplitude of the disturbance that is intended to be added. Disturbance noise constructed in this way characterizes the cost of misidentifying samples in the area near the decision boundary.
FIG. 3 is a schematic diagram of the relationship between decision boundaries and disturbances provided by the present application. As shown in FIG. 3, S is a low-dimensional projection subspace, β_1, β_2 and β_3 are decision boundaries along three dimensions of the low-dimensional subspace S, x_{1,2,3} is a data point projected into the low-dimensional subspace S, and r_1, r_2 and r_3 are disturbance noises that can push the data point x_{1,2,3} toward the three decision boundaries β_1, β_2 and β_3 in the low-dimensional subspace S respectively, causing recognition errors. Thus, although each training sample in the application has its own disturbance noise, such a per-sample disturbance does not readily form a disturbance for the power model as a whole.
In the embodiment of the application, the disturbance template of the power model is calculated to generate the summary information of the power model. The summary information of the power model may include the name, input size, output size, number of parameters and the like of each layer of the power model.
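For illustration only, the summary information described above could be organized as in the following sketch; the class and field names here are examples introduced purely for explanation and are not part of the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class LayerSummary:
    # Per-layer description mentioned above: name, input size, output size, parameter count.
    name: str
    input_size: Tuple[int, ...]
    output_size: Tuple[int, ...]
    param_count: int

@dataclass
class PowerModelSummary:
    # Summary information of one power model in the library: concise per-layer
    # statistics plus the disturbance template characterizing its decision boundary.
    model_id: str
    layers: List[LayerSummary] = field(default_factory=list)
    disturbance_template: Optional[np.ndarray] = None
```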
The disturbance template of the power model is obtained by iteratively training the disturbance over each training sample in the training data set of the power model, so the disturbance template is gradually refined during the iterative process. Specifically, the disturbance template of the power model is applied to training data in the training data set of the power model to obtain a disturbance template sample; the disturbance change value that pushes the disturbance template sample to the decision boundary of the power model is calculated; and the disturbance change value is superimposed onto the disturbance template. Fig. 4 is a schematic diagram of a disturbance template provided by the present application.
For example, let the training data set of the power model be X = {X_1, X_2, ..., X_m}. In each iterative training round, for each data item X_i in the training data set X, the disturbance v is applied to obtain the disturbance template sample X_i + v, the disturbance change value Δv_i that pushes the disturbance template sample X_i + v to the decision boundary is calculated, and the disturbance change value Δv_i obtained on the current training sample X_i is superimposed onto the universal disturbance template v.
Assume the current disturbance template of a power model f is v. If disturbing X_i with the disturbance template v cannot make the disturbed sample X_i + v be misidentified by the power model f, that is, cannot fool the model, then the disturbance change value, namely the additional disturbance Δv_i with the minimum norm, is sought on top of the disturbance template v so that the disturbed sample is misidentified. The disturbance change value Δv_i that pushes the disturbance template sample X_i + v to the decision boundary is thus obtained as Δv_i ← argmin_r ||r||_2 subject to f(X_i + v + r) ≠ f(X_i).
at the moment of obtaining the disturbance change value Deltav i Then, the disturbance change value Deltav i Superimposed on the disturbance template v, at this time, a power model is obtainedThe disturbance template of (2) is v+Deltav i
So that the disturbance change value Δv_i satisfies the constraint ||v||_p ≤ ξ during the calculation, that is, so that the minimum disturbance change value does not exceed the amplitude ξ of the added disturbance, after the disturbance template v is updated with the disturbance change value Δv_i, the updated disturbance template v + Δv_i is further projected onto a norm sphere centered at the origin of the world coordinate system with the amplitude of the added disturbance as its radius; that is, the projection operator P_{p,ξ}(v) satisfies the condition P_{p,ξ}(v) = argmin_{v'} ||v − v'||_2 subject to ||v'||_p ≤ ξ.
from the above, the disturbance template update rule of the power model can be defined as v+.p p,ξ (v+Δv i )。
To improve the quality of the disturbance template, the process of updating the disturbance template may be iterated over the training data set multiple times. The termination condition for updating the disturbance template is based on a preset threshold δ: iteration stops once the error rate of the power model f on the disturbed training samples, that is, on the disturbance sample set {X_1 + v, X_2 + v, ..., X_m + v}, is greater than 1 − δ, which can be written as (1/m) Σ_{i=1}^{m} 1[f(X_i + v) ≠ f(X_i)] > 1 − δ.
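For illustration, a minimal sketch of the disturbance template construction described above is given below in Python with NumPy. It is only a sketch under assumed interfaces: `model.predict` is assumed to return class labels for a batch of samples, `push_fn` is a user-supplied routine (for example a minimum-norm boundary attack) that returns the disturbance change value Δv_i pushing X_i + v across the decision boundary, and only p = 2 and p = ∞ are shown for the projection operator.

```python
import numpy as np

def project_lp_ball(v, xi, p=2):
    # Projection operator P_{p,xi}: map v back onto the norm ball of radius xi
    # centered at the origin (only p = 2 and p = inf are sketched here).
    if p == 2:
        norm = np.linalg.norm(v)
        return v if norm <= xi else v * (xi / norm)
    if p == np.inf:
        return np.clip(v, -xi, xi)
    raise NotImplementedError("only p = 2 and p = inf are sketched")

def build_disturbance_template(model, X, xi, delta, push_fn, max_epochs=10, p=2):
    # X: training data set {X_1, ..., X_m} as an (m, d) array.
    # xi: amplitude of the disturbance to be added.
    # delta: preset threshold; iteration stops once the error rate on {X_i + v}
    # exceeds 1 - delta.
    v = np.zeros_like(X[0])
    clean_labels = model.predict(X)
    for _ in range(max_epochs):
        for x_i, y_i in zip(X, clean_labels):
            # Only refine v on samples the current template fails to push across
            # the decision boundary.
            if model.predict((x_i + v)[None])[0] == y_i:
                dv_i = push_fn(model, x_i + v)        # minimum-norm push to the boundary
                v = project_lp_ball(v + dv_i, xi, p)  # update rule v <- P_{p,xi}(v + dv_i)
        # Termination condition: error rate on the disturbed training set > 1 - delta.
        if np.mean(model.predict(X + v) != clean_labels) > 1.0 - delta:
            break
    return v
```

The returned template v can then be stored alongside the model's summary information and applied to the sample to be detected as described next.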
and after obtaining the disturbance template of each power model according to the steps, applying the disturbance template of each power model to the sample to be detected to obtain a disturbance sample.
Step 220, determining a prediction result of the sample to be detected and the disturbance sample corresponding to each power model by each power model.
After the disturbance sample corresponding to each power model is obtained according to step 210, each power model is used to predict the sample to be detected to obtain a first prediction result, and the same power model is used to predict its corresponding disturbance sample to obtain a second prediction result. The difference between the power model's predictions for the sample to be detected and for the disturbance sample, namely the absolute value of the difference between the first prediction result and the second prediction result, is then determined from the two prediction results.
And 230, determining a target power model from the power model library according to the prediction result.
The smaller the difference between the prediction results for the sample to be detected and for the disturbance sample, the closer the characteristics of the sample to be detected are to the training samples of the power model. Because the power model is obtained by training on the training samples in its training data set, the prediction accuracy of that power model on the sample to be detected is then high. Conversely, the larger the difference between the prediction results for the sample to be detected and for the disturbance sample, the less the characteristics of the sample to be detected resemble the training samples of the power model. In that case it is not appropriate to use the power model as the target power model for the sample to be detected, because its prediction accuracy on the sample to be detected would be low.
Therefore, in one embodiment, the power model with the highest prediction accuracy in the power model library is taken as the target power model. In this case, the matching degree between the target power model and the sample to be detected is the highest, because the absolute value of the difference between the first prediction result and the second prediction result is the smallest, which means that the sample to be detected is least susceptible to the potential defects of the power model's decision boundary; in other words, the prediction result for the sample to be detected is the most robust under that power model.
In order to further improve the matching degree of the sample to be detected and the power model, a target power model is determined according to the similarity of the sample to be detected and the training sample of the power model.
Specifically, a plurality of power models in the power model library are obtained, where these power models are the ones whose prediction accuracy is greater than a first threshold; the similarity between the sample to be detected and the training samples of each of these power models is calculated; and the power model with the greatest similarity among them is taken as the target power model.
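The matching procedure of steps 220 and 230 could be sketched as follows, again only for illustration under assumed interfaces: each model in the library exposes `model.predict` returning numeric predictions, `templates[name]` is the disturbance template built above, `train_sets[name]` holds that model's training samples, and both the mapping from the absolute prediction difference to a prediction-accuracy score and the cosine similarity used for fine matching are assumptions introduced only for this sketch.

```python
import numpy as np

def match_power_model(x, model_library, templates, train_sets, first_threshold=None):
    # Step 220: compare each model's prediction on the sample to be detected with
    # its prediction on the corresponding disturbance sample.
    scores = {}
    for name, model in model_library.items():
        first = float(model.predict(x[None])[0])                       # first prediction result
        second = float(model.predict((x + templates[name])[None])[0])  # second prediction result
        # Assumed accuracy proxy: a smaller |first - second| gives a higher score.
        scores[name] = 1.0 / (1.0 + abs(first - second))

    # Step 230, first implementation: the model with the greatest prediction accuracy.
    if first_threshold is None:
        return max(scores, key=scores.get)

    # Step 230, second implementation: keep models above the first threshold, then
    # pick the one whose training samples are most similar to the sample to be detected.
    candidates = [n for n, s in scores.items() if s > first_threshold]

    def cosine_to_train_mean(name):
        # Assumed similarity: cosine similarity between the sample to be detected
        # and the mean of the model's training samples.
        mean_train = np.mean(train_sets[name], axis=0)
        denom = np.linalg.norm(x) * np.linalg.norm(mean_train) + 1e-12
        return float(x @ mean_train) / denom

    return max(candidates, key=cosine_to_train_mean)
```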
In summary, by applying the disturbance template to the sample to be detected to generate the disturbance sample, the situations in which classification errors easily occur at the decision boundary of a power model can be quantified. Matching the sample to be detected against the statistical characteristics of the power model's decision boundary (the disturbance samples) focuses attention on where classification errors occur at the decision boundary and determines whether the characteristics of the power model match those of the sample to be detected; the matched power model in the power model library is then used as the target power model, thereby realizing reuse of power models.
It will be appreciated that, in order to implement the functions of the above embodiments, the computer device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as computer software driving hardware depends upon the particular application scenario and the design constraints imposed on the technical solution.
Further, as an implementation of the method embodiment shown in fig. 2, an embodiment of the present application provides a power model matching device, which is used for matching a power model required by a user in a power model library. The embodiment of the device corresponds to the foregoing method embodiment, and for convenience of reading, details of the foregoing method embodiment are not described one by one in this embodiment, but it should be clear that the device in this embodiment can correspondingly implement all the details of the foregoing method embodiment. As shown in fig. 5, the power model matching apparatus 500 includes: an acquisition module 510, a prediction module 520, and a determination module 530.
The obtaining module 510 is configured to obtain a sample to be detected and a plurality of disturbance samples, where the plurality of disturbance samples are samples after applying a disturbance template corresponding to each power model in the power model library to the sample to be detected.
And the prediction module 520 is configured to determine a prediction result of the sample to be detected and the disturbance sample corresponding to each power model.
A determining module 530 is configured to determine a target power model from the power model library according to the prediction result.
Further, as shown in fig. 5, the apparatus further includes: the disturbance module 540 is used for generating a disturbance template of the power model according to the abstract information of the power model; applying a disturbance template of the power model to training data in a training data set of the power model to obtain a disturbance template sample; calculating a disturbance change value pushing a disturbance template sample to a decision boundary of the power model; and superposing the disturbance change value to the disturbance template.
Further, as shown in fig. 5, the prediction module 520 is specifically configured to predict a sample to be detected according to the power model to obtain a first prediction result, and predict a disturbance sample corresponding to the power model according to the power model to obtain a second prediction result; and determining the prediction precision of the power model according to the first prediction result and the second prediction result.
Further, as shown in fig. 5, the determining module 530 is specifically configured to use, as the target power model, the power model with the highest prediction accuracy in the power model library.
Further, as shown in fig. 5, the determining module 530 is specifically configured to obtain a plurality of power models in the power model library, where the plurality of power models are power models in the power model library with prediction accuracy greater than a first threshold; calculating the similarity between a sample to be detected and training samples of a plurality of power models; and taking the power model with the largest similarity among the plurality of power models as a target power model.
Further, as shown in fig. 5, the perturbation module 540 is further configured to control the projection operator to be less than or equal to the applied perturbation amplitude, where the projection operator is configured to indicate the norm sphere onto which the perturbation template is projected.
Further, as shown in fig. 5, the norm sphere takes the origin of the world coordinate system as the center and the disturbance amplitude as the radius.
Further, the embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, where the obtaining module 510, the predicting module 520, the determining module 530, the disturbing module 540 and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions. The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory.
The embodiment of the application provides a storage medium, on which a program is stored, which when executed by a processor, implements the power model matching method.
The embodiment of the application provides a processor which is used for running a program, wherein the power model matching method is executed when the program runs.
The application also provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with the following method steps: obtaining a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are obtained by applying to the sample to be detected the disturbance template corresponding to each power model in a power model library; determining the prediction result of each power model on the sample to be detected and on the disturbance sample corresponding to that power model; and determining a target power model from the power model library according to the prediction results.
Further, generating a disturbance template of the power model according to abstract information of the power model; applying a disturbance template of the power model to training data in a training data set of the power model to obtain a disturbance template sample; calculating a disturbance change value pushing a disturbance template sample to a decision boundary of the power model; and superposing the disturbance change value to the disturbance template.
Further, predicting a sample to be detected according to the electric power model to obtain a first prediction result, and predicting a disturbance sample corresponding to the electric power model according to the electric power model to obtain a second prediction result; and determining the prediction precision of the power model according to the first prediction result and the second prediction result.
Further, the power model with the highest prediction accuracy in the power model library is used as the target power model.
Further, a plurality of power models in a power model library are obtained, wherein the power models are power models with prediction precision larger than a first threshold value in the power model library; calculating the similarity between a sample to be detected and training samples of a plurality of power models; and taking the power model with the largest similarity among the plurality of power models as a target power model.
Further, the projection operator is controlled to be smaller than or equal to the applied disturbance amplitude, and the projection operator is used for indicating a norm sphere projected by the disturbance template.
Further, the norm sphere takes the origin of the world coordinate system as the center and the disturbance amplitude as the radius.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, the device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in computer readable media, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory includes at least one memory chip. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A method of power model matching, the method comprising:
obtaining a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are samples after the sample to be detected is subjected to disturbance templates corresponding to each power model in a power model library;
determining the prediction results of each power model on the sample to be detected and the disturbance sample corresponding to each power model;
and determining a target power model from the power model library according to the prediction result.
2. The method of claim 1, wherein prior to obtaining the sample to be detected and the plurality of perturbed samples, the method further comprises:
generating a disturbance template of the power model according to abstract information of the power model;
applying a disturbance template of the power model to training data in the training data set of the power model to obtain a disturbance template sample;
calculating a disturbance change value pushing the disturbance template sample to a decision boundary of the power model;
and superposing the disturbance change value to the disturbance template.
3. The method of claim 2, wherein determining the prediction result of each power model for the sample to be detected and the disturbance sample corresponding to each power model comprises:
predicting the sample to be detected according to the power model to obtain a first prediction result, and predicting a disturbance sample corresponding to the power model according to the power model to obtain a second prediction result;
and determining the prediction precision of the power model according to the first prediction result and the second prediction result.
4. A method according to claim 3, wherein determining a target power model from the power model library based on the prediction results comprises:
and taking the power model with the maximum prediction precision in the power model library as the target power model.
5. The method of claim 3, wherein determining a target power model from the power model library based on the prediction result, further comprises:
acquiring a plurality of power models in the power model library, wherein the power models are power models with prediction precision larger than a first threshold value in the power model library;
calculating the similarity between the sample to be detected and training samples of the plurality of power models;
and taking the power model with the largest similarity among the plurality of power models as the target power model.
6. The method of claim 2, wherein after superimposing the perturbation variation value onto the perturbation template, the method further comprises:
and controlling a projection operator to be smaller than or equal to the applied disturbance amplitude, wherein the projection operator is used for indicating a norm sphere projected by the disturbance template.
7. The method of claim 6, wherein the norm sphere is centered about a world coordinate system origin and the disturbance magnitude is a radius.
8. A power model matching device, the device comprising:
the acquisition module is used for acquiring a sample to be detected and a plurality of disturbance samples, wherein the disturbance samples are samples after the sample to be detected is applied with disturbance templates corresponding to each power model in a power model library;
the prediction module is used for determining the prediction result of each power model on the sample to be detected and the disturbance sample corresponding to each power model;
and the determining module is used for determining a target power model from the power model library according to the prediction result.
9. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the power model matching method according to any one of claims 1-7.
10. An electronic device comprising at least one processor, and at least one memory, bus coupled to the processor; the processor and the memory complete communication with each other through the bus; the processor is configured to invoke program instructions in the memory to perform the power model matching method of any of claims 1-7.
CN202310987415.1A 2023-08-07 2023-08-07 Power model matching method and device, storage medium and electronic equipment Pending CN117113137A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310987415.1A CN117113137A (en) 2023-08-07 2023-08-07 Power model matching method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310987415.1A CN117113137A (en) 2023-08-07 2023-08-07 Power model matching method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117113137A true CN117113137A (en) 2023-11-24

Family

ID=88810260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310987415.1A Pending CN117113137A (en) 2023-08-07 2023-08-07 Power model matching method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117113137A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476629A (en) * 2020-03-06 2020-07-31 北京三快在线科技有限公司 Data prediction method and device, electronic equipment and storage medium
CN111753275A (en) * 2020-06-04 2020-10-09 支付宝(杭州)信息技术有限公司 Image-based user privacy protection method, device, equipment and storage medium
CN112183671A (en) * 2020-11-05 2021-01-05 四川大学 Target attack counterattack sample generation method for deep learning model
CN113657465A (en) * 2021-07-29 2021-11-16 北京百度网讯科技有限公司 Pre-training model generation method and device, electronic equipment and storage medium
CN114021861A (en) * 2021-12-15 2022-02-08 山东科技大学 Power load prediction method, device, terminal and storage medium
CN114970858A (en) * 2022-07-13 2022-08-30 贵州师范大学 Robustness improving method based on smooth neural network model weight loss terrain
CN115797731A (en) * 2023-02-02 2023-03-14 国能大渡河大数据服务有限公司 Target detection model training method, target detection model detection method, terminal device and storage medium


Similar Documents

Publication Publication Date Title
Yin et al. Collaborative QoS prediction for mobile service with data filtering and SlopeOne model
EP3467723A1 (en) Machine learning based network model construction method and apparatus
CN110069709B (en) Intention recognition method, device, computer readable medium and electronic equipment
US11023778B2 (en) Techniques to embed a data object into a multidimensional frame
CN110782043B (en) Model optimization method, device, storage medium and server
US20200302283A1 (en) Mixed precision training of an artificial neural network
CN112231592A (en) Network community discovery method, device, equipment and storage medium based on graph
CN112200296B (en) Network model quantization method and device, storage medium and electronic equipment
CN112612887A (en) Log processing method, device, equipment and storage medium
CN111932041B (en) Model training method and device based on risk recognition and electronic equipment
US20210192322A1 (en) Method For Determining A Confidence Level Of Inference Data Produced By Artificial Neural Network
CN113569955A (en) Model training method, user portrait generation method, device and equipment
CN105357583A (en) Method and device for discovering interest and preferences of intelligent television user
CN115567371B (en) Abnormity detection method, device, equipment and readable storage medium
US20230016044A1 (en) Techniques for creating and utilizing multidimensional embedding spaces
CN116774986A (en) Automatic evaluation method and device for software development workload, storage medium and processor
CN117113137A (en) Power model matching method and device, storage medium and electronic equipment
CN113033817B (en) OOD detection method and device based on hidden space, server and storage medium
US11210155B1 (en) Performance data analysis to reduce false alerts in a hybrid cloud environment
CN112800227B (en) Training method of text classification model, equipment and storage medium thereof
US20230086609A1 (en) Securely designing and executing an automation workflow based on validating the automation workflow
CN117852507B (en) Restaurant return guest prediction model, method, system and equipment
US11386265B2 (en) Facilitating information technology solution templates
Han et al. Imbalanced Data Classification Algorithm Based on Integrated Sampling and Ensemble Learning
CN113139381B (en) Unbalanced sample classification method, unbalanced sample classification device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination