CN114511058A - Load element construction method and device for power consumer portrait - Google Patents
- Publication number
- CN114511058A (application number CN202210101090.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- characteristic vector
- active power
- load element
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/31—Indexing; Data structures therefor; Storage structures
- G06F16/316—Indexing structures
- G06F16/325—Hash tables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a load element construction method and device for a power consumer portrait. The method comprises: collecting the total power of a household user and the active power of each individual load element as a training set; performing one-hot encoding on the total power of the household user and the active power of each individual load element; hashing the resulting user feature vector and active power feature vector; inputting the low-dimensional user feature vector and the low-dimensional active power feature vector into a first multilayer perceptron network and a second multilayer perceptron network, respectively, for training, and outputting each user embedded feature vector and each load element embedded feature vector; and calculating the correlation between each user embedded feature vector and each load element embedded feature vector based on cosine similarity, converting the correlation result into a posterior probability with a logistic regression activation function, and obtaining the load element most strongly correlated with the user. The method improves the efficiency of building the user portrait model and has strong reliability.
Description
Technical Field
The invention relates to the technical field of electric power, and in particular to a load element construction method and device for a power consumer portrait.
Background
With the continuous development of electronic and big data technologies, household electricity consumption information is increasingly recorded in digital form, so that the information of interest can be obtained more intuitively and conveniently, reducing time consumption and improving efficiency. To locate target data in the network more quickly and accurately, the various attributes of household electricity consumption need to be partitioned using classification techniques. Building a portrait of the household user reveals its supply and demand information, accurately locates data such as behavioral habits, and provides a comprehensive view of the user's information.
In the field of user portrait technology, deep learning methods generate embedded features from artificially constructed features and feed them into a neural network to predict which load elements a user switches on, ultimately presenting the electricity usage habits of the household user. Estimating load element switch-on events is the most important measure of model quality, and a user's historical switch-on behavior for each load element is crucial. However, most current power consumer portrait models take artificially constructed features, derived from the user portrait, the user's historical behavior, the load element attributes and so on, as the raw input to a multi-layer neural network.
For example, patent document CN112417308A discloses a method for generating user portrait labels based on power big data. It uses big data processing to generate user feature labels; its basic database is built around customer appeals, importing the opinion and consultation data streams from channels such as the power service hotline, the power intranet and extranet, the mobile phone app, the WeChat public account and the business hall suggestion book as the raw source of label data, and marking customers with labels through data analysis to create the user portrait.
That method can integrate data from multiple sources, build a multi-dimensional customer portrait by relying on big data analysis, and describe the user's deep behavioral characteristics through labels. However, it is biased toward describing user characteristics from the consultation opinions users volunteer, lacks direct electricity consumption information, and does not integrate the structural features and node features of a complex recommendation network scenario well; as a result the model's reliability is poor, its operation is slow, and the user portrait model is built inefficiently.
Disclosure of Invention
The invention provides a load element construction method and device for a power consumer portrait. Model training is based on the total power of a household user and the active power of each individual load element, fusing the structural features and node features of a complex recommendation network scenario; the efficiency of building the user portrait model is improved, and reliability is high.
A load element construction method for a power consumer portrait, comprising:
collecting the total power of a household user and the active power of each individual load element as a training set;
performing one-hot encoding on the total power of the household user and the active power of the individual load element to obtain a user feature vector and an active power feature vector, respectively;
hashing the user feature vector and the active power feature vector to obtain a low-dimensional user feature vector and a low-dimensional active power feature vector, respectively;
inputting the low-dimensional user feature vector and the low-dimensional active power feature vector into a first multilayer perceptron network and a second multilayer perceptron network, respectively, for training, and outputting each user embedded feature vector and each load element embedded feature vector;
and calculating the correlation between each user embedded feature vector and each load element embedded feature vector based on cosine similarity, converting the correlation result into a posterior probability with a logistic regression activation function, and obtaining the load element most strongly correlated with the user.
Further, performing one-hot encoding on the total power of the household users comprises:
grouping identical total power values of the household users into user same-class features;
representing each user same-class feature in binary to obtain the user feature vector;
and performing one-hot encoding on the active power of the individual load element to obtain its active power feature vector, comprising:
grouping identical active power values of the individual load element into active power same-class features;
representing each active power same-class feature in binary to obtain the active power feature vector.
Further, hashing the user feature vector to obtain the low-dimensional user feature vector comprises:
adding a start mark and an end mark before and after the user feature vector, respectively;
setting a step length and a sliding window based on an N-gram algorithm, recording the user feature vector slice captured by each slide of the window, and recording all user feature vector slices as a vector representation composed of several numbers to obtain the low-dimensional user feature vector;
and hashing the active power feature vector to obtain the low-dimensional active power feature vector, comprising:
adding a start mark and an end mark before and after the active power feature vector, respectively;
setting a step length and a sliding window based on an N-gram algorithm, recording the active power feature vector slice captured by each slide of the window, and recording all active power feature vector slices as a vector representation composed of several numbers to obtain the low-dimensional active power feature vector.
Further, the low-dimensional user feature vector and the low-dimensional active power feature vector are input into the first multilayer perceptron network and the second multilayer perceptron network, respectively, for training, and the parameters of the two networks are obtained by minimizing a loss function.
Further, the parameters of the first and second multilayer perceptron networks comprise the weight matrix and bias term of each hidden layer.
Further, the load element embedded feature vector is calculated by the following formulas:
l_i = f(W_i l_{i-1} + b_i), i = 2, …, N-1;
y_L = f(W_N l_{N-1} + b_N);
where y_L is the load element embedded feature vector, l_i is the i-th hidden layer of the second multilayer perceptron network, W_i is the weight matrix of its i-th hidden layer, b_i is the bias term of its i-th hidden layer, N is the number of its hidden layers, and b_N is the bias term of its output layer;
the user embedded feature vector is calculated by the following formulas:
l_m = f(W_m l_{m-1} + b_m), m = 2, …, M-1;
y_U = f(W_M l_{M-1} + b_M);
where y_U is the user embedded feature vector, l_m is the m-th hidden layer of the first multilayer perceptron network, W_m is the weight matrix of its m-th hidden layer, b_m is the bias term of its m-th hidden layer, M is the number of its hidden layers, and b_M is the bias term of its output layer.
Further, the correlation between the user embedded feature vector and the load element embedded feature vector is calculated by the following formula:
R(U, L) = cos(y_U, y_L) = (y_U^T · y_L) / (‖y_U‖ ‖y_L‖);
where y_U is the user embedded feature vector, y_L is the load element embedded feature vector, y_U^T is the transpose of the user embedded feature vector, and R(U, L) is the correlation between the user embedded feature vector and the load element embedded feature vector.
Further, the posterior probability is calculated by the following formula:
P(L_j | U) = exp(γ R(U, L_j)) / Σ_{L′∈L} exp(γ R(U, L′));
where P(L_j | U) is the posterior probability, γ is the smoothing factor of the logistic regression activation function, L is the set of load elements, U is the set of users, L′ ranges over all load elements in the set, j is "+" or "−", L+ denotes a load element positive sample, and L− denotes a load element negative sample.
Further, the minimized loss function is expressed by the following formula:
L(Λ) = −log Π_{(U, L+)} P(L+ | U);
where Λ denotes the parameters of the multilayer perceptron networks, P(L+ | U) is the posterior probability, U is the set of users, and L+ is a load element positive sample.
A load element construction apparatus for a power consumer portrait, comprising:
an acquisition module for collecting the total power of a household user and the active power of each individual load element as a training set;
an encoding module for performing one-hot encoding on the total power of the household user and the active power of the individual load element to obtain a user feature vector and an active power feature vector, respectively;
a processing module for hashing the user feature vector and the active power feature vector to obtain a low-dimensional user feature vector and a low-dimensional active power feature vector, respectively;
a training module for inputting the low-dimensional user feature vector and the low-dimensional active power feature vector into a first multilayer perceptron network and a second multilayer perceptron network, respectively, for training, and outputting each user embedded feature vector and each load element embedded feature vector;
and a calculation module for calculating the correlation between each user embedded feature vector and each load element embedded feature vector based on cosine similarity, converting the correlation result into a posterior probability with a logistic regression activation function, and obtaining the load element most strongly correlated with the user.
The load element construction method and apparatus for a power consumer portrait provided by the invention have at least the following beneficial effects:
model training is based on the total power of the household user and the active power of each load element; the features obtained by one-hot encoding are hashed and trained in multilayer perceptron networks; a two-tower model generates the similarity between the user and each load; and finally the load element most strongly associated with the user is obtained.
Drawings
FIG. 1 is a flow chart of an embodiment of the load element construction method for a power consumer portrait according to the present invention.
FIG. 2 is a schematic structural diagram of an embodiment of the load element construction apparatus for a power consumer portrait according to the present invention.
Reference numerals: 1 – acquisition module; 2 – encoding module; 3 – processing module; 4 – training module; 5 – calculation module.
Detailed Description
For a better understanding of the technical solution, it is described in detail below with reference to the drawings and specific embodiments.
Referring to FIG. 1, in some embodiments, a load element construction method for a power consumer portrait is provided, comprising:
S1, collecting the total power of a household user and the active power of each individual load element as a training set;
S2, performing one-hot encoding on the total power of the household user and the active power of the individual load element to obtain a user feature vector and an active power feature vector, respectively;
S3, hashing the user feature vector and the active power feature vector to obtain a low-dimensional user feature vector and a low-dimensional active power feature vector, respectively;
S4, inputting the low-dimensional user feature vector and the low-dimensional active power feature vector into a first multilayer perceptron network and a second multilayer perceptron network, respectively, for training, and outputting each user embedded feature vector and each load element embedded feature vector;
S5, calculating the correlation between each user embedded feature vector and each load element embedded feature vector based on cosine similarity, converting the correlation result into a posterior probability with a logistic regression activation function, and obtaining the load element most strongly correlated with the user.
Specifically, in step S2, performing one-hot encoding on the total power of the household users comprises:
S211, grouping identical total power values of the household users into user same-class features;
S212, representing each user same-class feature in binary to obtain the user feature vector;
and performing one-hot encoding on the active power of the individual load element to obtain its active power feature vector comprises:
S221, grouping identical active power values of the individual load element into active power same-class features;
S222, representing each active power same-class feature in binary to obtain the active power feature vector.
In a specific embodiment, the total power of the household user and the active power of each load element are processed separately: the first multilayer perceptron network processes the total power of the household user, the second multilayer perceptron network processes the active power of each load element, and together the two networks form a two-tower model.
Specifically, the observed active power values are first collected into a dictionary; the vocabulary size is the dimension of the one-hot code. Each code is a zero vector of dimension (vocabulary size × 1) in which only the position representing the current active power value is 1. For example, the power dictionary of a refrigerator is [0, 1, 2, 3, 4, …]; if the power at the current moment is 0, the one-hot encoding result is [1, 0, 0, …].
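As a minimal, non-authoritative sketch of the one-hot scheme described above (the dictionary contents and function name are illustrative, not taken from the patent):

```python
def one_hot(value, dictionary):
    """Encode a power value as a one-hot vector over a fixed power dictionary."""
    vec = [0] * len(dictionary)            # zero vector of vocabulary-size dimension
    vec[dictionary.index(value)] = 1       # only the current value's position is 1
    return vec

fridge_dict = [0, 1, 2, 3, 4]              # hypothetical refrigerator power dictionary (W)
print(one_hot(0, fridge_dict))             # [1, 0, 0, 0, 0]
```

A real dictionary would be built from all power values observed in the training set, so its dimension grows with the number of distinct readings — which is what motivates the hashing step below.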
When grouping the active power of the same individual load element into same-class features, it must be ensured that exactly one bit of each sample's feature is valid and the remaining bits are invalid.
In step S3, hashing the user feature vector to obtain the low-dimensional user feature vector comprises:
S311, adding a start mark and an end mark before and after the user feature vector, respectively;
S312, setting a step length and a sliding window based on an N-gram algorithm, recording the user feature vector slice captured by each slide of the window, and recording all user feature vector slices as a vector representation composed of several numbers to obtain the low-dimensional user feature vector;
and hashing the active power feature vector to obtain the low-dimensional active power feature vector comprises:
S321, adding a start mark and an end mark before and after the active power feature vector, respectively;
S322, setting a step length and a sliding window based on an N-gram algorithm, recording the active power feature vector slice captured by each slide of the window, and recording all active power feature vector slices as a vector representation composed of several numbers to obtain the low-dimensional active power feature vector. The algorithm can be represented by the following formula:
l_1 = W_1 x
where x is the one-hot encoded input feature vector, l_1 is the first hidden layer, and W_1 is the weight matrix of the first layer.
In a specific embodiment, taking a refrigerator as the load element: if the active power of the refrigerator at some moment is 258 W, the mark # is added to the beginning and end of the feature string, giving #258#; the string is then split into slices of n characters. With n = 3 the slices are #25, 258 and 58#, so the dictionary obtained by one-hot encoding can be represented by vectors built from these 3-character slices, greatly reducing its dimension. For example, if the active power is 259 W, the one-hot dictionary is […, #25, …, 258, 259, 260, …, 59#, …] and the hashed code becomes […, 1, …, 0, 1, 0, …, 1, …].
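The boundary-marked N-gram slicing in this example can be sketched as follows (a minimal illustration; the function name and the n = 3 default are assumptions consistent with the example):

```python
def ngram_slices(value, n=3):
    """Add #...# boundary marks, then slide an n-character window over the string."""
    s = "#" + str(value) + "#"
    return [s[i:i + n] for i in range(len(s) - n + 1)]

print(ngram_slices(258))  # ['#25', '258', '58#']
```

Because many power values share slices (258 and 259 both contain #25), the slice vocabulary is far smaller than the raw value vocabulary, which is what lowers the feature dimension.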
In step S4, the low-dimensional user feature vector and the low-dimensional active power feature vector are input into the first and second multilayer perceptron networks, respectively, for training, and the parameters of the two networks are obtained by minimizing a loss function.
The parameters of the first and second multilayer perceptron networks comprise the weight matrix and bias term of each hidden layer.
The load element embedded feature vector is calculated by the following formulas:
l_i = f(W_i l_{i-1} + b_i), i = 2, …, N-1;
y_L = f(W_N l_{N-1} + b_N);
where y_L is the load element embedded feature vector, l_i is the i-th hidden layer of the second multilayer perceptron network, W_i is the weight matrix of its i-th hidden layer, b_i is the bias term of its i-th hidden layer, N is the number of its hidden layers, and b_N is the bias term of its output layer.
the user-embedded feature vector is calculated by the following formula:
lm=f(Wmlm-1+bm)m=2,…,M-1;
yU=f(WMlM-1+bM);
wherein ,yUEmbedding feature vectors for users,/mIs the m-th hidden layer, W, of the second multi-layer perceptron networkmA weight matrix for the mth hidden layer of the second multi-layer perceptron network, bmRepresenting the bias item of the M hidden layer of the second multi-layer perceptron network, M being the number of hidden layers of the second multi-layer perceptron network, bMAnd outputting the bias term of the layer for the second multi-layer perceptron network.
The activation function of the hidden layers and the output layer is the tanh function, expressed by the following formula:
f(x) = (1 − e^(−2x)) / (1 + e^(−2x)).
in a specific implementation mode, the first multilayer perceptron network and the second multilayer perceptron network both comprise an input layer, a hidden layer and an output layer, after low-dimensional user characteristic vectors and low-dimensional active power characteristic vectors are input into the input layer, activation and embedded characteristic vector calculation is performed in the hidden layer and the output layer, and finally obtained 128-dimensional vectors are output as embedded characteristic vectors.
In step S5, the correlation between the user embedded feature vector and the load element embedded feature vector is calculated by the following formula:
R(U, L) = cos(y_U, y_L) = (y_U^T · y_L) / (‖y_U‖ ‖y_L‖);
where y_U is the user embedded feature vector, y_L is the load element embedded feature vector, y_U^T is the transpose of the user embedded feature vector, and R(U, L) is the correlation between the user embedded feature vector and the load element embedded feature vector.
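The cosine-similarity correlation can be sketched as below (function and variable names are illustrative):

```python
import numpy as np

def relevance(y_u, y_l):
    """R(U, L): cosine similarity between user and load element embeddings."""
    return float(y_u @ y_l / (np.linalg.norm(y_u) * np.linalg.norm(y_l)))

print(relevance(np.array([1.0, 0.0]), np.array([2.0, 0.0])))  # 1.0 (parallel vectors)
```

Cosine similarity depends only on direction, so embeddings of different magnitudes remain comparable across users and loads.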
To obtain the parameters of the multilayer perceptron networks and the embedded feature vector output layers, including the weight matrices and bias terms, the parameters are estimated by maximum likelihood: the conditional probability that each load is switched on, given the household user's total power consumption data, is maximized.
The posterior probability is calculated by the following formula:
P(L_j | U) = exp(γ R(U, L_j)) / Σ_{L′∈L} exp(γ R(U, L′));
where P(L_j | U) is the posterior probability, γ is the smoothing factor of the logistic regression activation function, L is the set of load elements, U is the set of users, L′ ranges over all load elements in the set, j is "+" or "−", L+ denotes a load element positive sample, and L− denotes a load element negative sample.
In one embodiment, the activation function may be a softmax function, which converts the cosine similarity between the user and a load element into a posterior probability; the value of γ is generally set from manual experience.
Ideally, L should contain every load element that may participate in matching. For a single household user query paired with a load switch-on event, L+ denotes the switched-on load (positive sample), and 4 different loads with no switch-on behavior are randomly selected as negative samples L−; the candidate set can thus be written as L ≈ {L+} ∪ {L−_j | j = 1, …, 4}.
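A minimal sketch of the softmax posterior over one positive and four sampled negative loads, assuming γ = 10 purely for illustration (the patent sets γ from manual experience):

```python
import numpy as np

def posterior(r_pos, r_negs, gamma=10.0):
    """P(L+|U): softmax over the positive and the sampled negative loads,
    with gamma as the smoothing factor on the cosine similarities."""
    scores = gamma * np.array([r_pos] + list(r_negs))
    scores -= scores.max()          # subtract max for numerical stability
    exp = np.exp(scores)
    return float(exp[0] / exp.sum())

# one positive similarity and four negative-sample similarities (illustrative values)
p = posterior(0.9, [0.1, 0.2, 0.0, -0.3])
print(round(p, 4))
```

Larger γ sharpens the distribution, pushing more probability mass onto the most similar load.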
During training, the goal of model parameter estimation is to maximize the probability of the load element switch-on event given the querying household user, across the whole training set. The minimized loss function therefore follows maximum likelihood estimation and is expressed by the following formula:
L(Λ) = −log Π_{(U, L+)} P(L+ | U);
where Λ denotes the parameters of the multilayer perceptron networks, P(L+ | U) is the posterior probability, U is the set of users, and L+ is a load element positive sample.
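The maximum-likelihood objective above can be sketched as follows, assuming the per-query posteriors P(L+|U) have already been computed:

```python
import numpy as np

def nll_loss(pos_posteriors):
    """L(Lambda) = -log prod P(L+|U): negative log-likelihood over training pairs."""
    return float(-np.sum(np.log(pos_posteriors)))

print(round(nll_loss([0.5]), 4))  # -log(0.5) ~= 0.6931
```

Summing logs rather than multiplying raw probabilities avoids numerical underflow when the training set is large.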
Because the posterior probability obtained with the initial parameters of the two-tower model has low accuracy, the model parameters can be optimized by stochastic gradient descent, maximizing the posterior probability and updating the parameters of each layer of the two-tower model.
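A generic stochastic-gradient-descent update of the kind referred to above (the learning rate and flat parameter layout are illustrative; a real implementation would compute gradients with an autodiff framework):

```python
def sgd_step(params, grads, lr=0.01):
    """One stochastic-gradient-descent update applied to every parameter."""
    return [p - lr * g for p, g in zip(params, grads)]

print(sgd_step([1.0, 2.0], [10.0, -10.0], lr=0.1))  # [0.0, 3.0]
```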
Finally, the load element most strongly associated with the user is obtained. The resulting embedded feature vectors can be used to compare the similarity between a user and the load elements, so as to find the load likely to be switched on at the next moment; they can also be used to compare the similarity between different household users, find household users of the same type, and aggregate their information features.
Referring to FIG. 2, in some embodiments, a load element construction apparatus for a power consumer portrait is provided, comprising:
the system comprises an acquisition module 1, a training set and a control module, wherein the acquisition module is used for acquiring the total power of a household user and the active power of a single load element as the training set;
the coding module 2 is used for carrying out hot one-code coding on the total power of the household users and the active power of a single load element to respectively obtain a user characteristic vector and an active power characteristic vector;
the processing module 3 is configured to perform hash processing on the user feature vector and the active power feature vector respectively to obtain a low-dimensional user feature vector and a low-dimensional active power feature vector respectively;
the training module 4 is used for inputting the low-dimensional user characteristic vector and the low-dimensional active power characteristic vector into a first multilayer perceptron network and a second multilayer perceptron network respectively for training, and outputting each user embedded characteristic vector and each load element embedded characteristic vector respectively;
and the calculating module 5 is used for calculating the correlation between each user embedded characteristic vector and the load element embedded characteristic vector based on cosine similarity, converting the correlation calculation result into posterior probability by using a logistic regression activation function, and obtaining the load element with the highest user correlation degree.
Specifically, the encoding module 2 is further configured to perform one-hot encoding on the total power of the household users, including:
grouping identical total-power values of the household users into the same user feature class;
representing each user feature class in binary to obtain the user characteristic vector;
and to perform one-hot encoding on the active power of a single load element to obtain the active power characteristic vector of the single load element, including:
grouping identical active-power values of the same single load element into the same active-power feature class;
representing each active-power feature class in binary to obtain the active power characteristic vector.
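The encoding steps above can be sketched as follows. This is a minimal illustration, assuming that "identical values form one class" means each distinct power reading gets its own one-hot position; the bin values and readings are invented for the example, not taken from the patent.

```python
# One-hot encoding sketch: identical power readings map to the same feature
# class, and each class is represented as a binary (one-hot) vector.

def one_hot_encode(values):
    """Map each distinct value to a one-hot binary feature vector."""
    classes = sorted(set(values))            # identical values -> same class
    index = {v: i for i, v in enumerate(classes)}
    vectors = []
    for v in values:
        vec = [0] * len(classes)
        vec[index[v]] = 1                    # binary representation of the class
        vectors.append(vec)
    return classes, vectors

# Illustrative total-power readings (watts), with a repeated value
readings = [1200, 350, 1200, 75]
classes, vectors = one_hot_encode(readings)
```

The two repeated 1200 W readings receive the same vector, matching the "same total power, same user feature class" rule.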
The processing module 3 is further configured to hash the user characteristic vector into a low-dimensional user characteristic vector, including:
adding a start mark and an end mark before and after the user characteristic vector, respectively;
setting a step length and a sliding window based on an N-gram algorithm, recording the user characteristic vector slice captured at each slide of the window, and recording all the slices as a vector representation composed of several numbers, yielding the low-dimensional user characteristic vector;
and to hash the active power characteristic vector into a low-dimensional active power characteristic vector, including:
adding a start mark and an end mark before and after the active power characteristic vector, respectively;
setting a step length and a sliding window based on the N-gram algorithm, recording the active power characteristic vector slice captured at each slide of the window, and recording all the slices as a vector representation composed of several numbers, yielding the low-dimensional active power characteristic vector.
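A sketch of the hashing step, assuming the slices are character n-grams of the feature string with `#` as the start/end mark (as in letter-n-gram word hashing); the window size, step length, and bucket count are illustrative assumptions.

```python
# N-gram hashing sketch: mark the vector string, slide a window over it,
# and record all slices as a small vector of bucketed counts.

def ngram_hash(feature, n=3, step=1, buckets=16):
    marked = "#" + feature + "#"             # start and end marks
    slices = [marked[i:i + n]                # slice captured at each slide
              for i in range(0, len(marked) - n + 1, step)]
    vec = [0] * buckets                      # low-dimensional representation
    for s in slices:
        vec[hash(s) % buckets] += 1          # record slice as a number
    return slices, vec

slices, low_dim = ngram_hash("101100")
```

Note that Python's string `hash` is salted per process, so the bucket positions vary between runs, while the slice list and total count do not; a production version would use a stable hash.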
The training module 4 is further configured to input the low-dimensional user characteristic vector and the low-dimensional active power characteristic vector into the first multilayer perceptron network and the second multilayer perceptron network respectively for training, and to obtain the parameters of the first and second multilayer perceptron networks by minimizing a loss function.
The parameters of the first multi-layer perceptron network and the second multi-layer perceptron network comprise weight matrixes and bias items of all hidden layers.
The load element embedded characteristic vector is calculated by the following formulas:
l_i = f(W_i l_{i-1} + b_i), i = 2, …, N-1;
y_L = f(W_N l_{N-1} + b_N);
where y_L is the load element embedded characteristic vector, l_i is the i-th hidden layer of the first multilayer perceptron network, W_i is the weight matrix of the i-th hidden layer of the first multilayer perceptron network, b_i is the bias term of the i-th hidden layer of the first multilayer perceptron network, and N is the number of hidden layers of the first multilayer perceptron network;
the user embedded characteristic vector is calculated by the following formulas:
l_m = f(W_m l_{m-1} + b_m), m = 2, …, M-1;
y_U = f(W_M l_{M-1} + b_M);
where y_U is the user embedded characteristic vector, l_m is the m-th hidden layer of the second multilayer perceptron network, W_m is the weight matrix of the m-th hidden layer of the second multilayer perceptron network, b_m is the bias term of the m-th hidden layer of the second multilayer perceptron network, and M is the number of hidden layers of the second multilayer perceptron network.
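The layer recurrences above can be sketched as a pure-Python forward pass. This is an illustration under assumptions: the activation f is not specified in the excerpt, so tanh stands in for it, and the tiny layer sizes and weights are invented.

```python
# Two-tower forward pass sketch: l_i = f(W_i l_{i-1} + b_i) for hidden layers,
# then y = f(W_N l_{N-1} + b_N) for the output embedding.
import math

def mlp_embed(x, layers):
    """layers: list of (W, b) pairs; returns the embedded feature vector y."""
    l = x
    for W, b in layers:
        # matrix-vector product plus bias, passed through the activation f
        l = [math.tanh(sum(w * v for w, v in zip(row, l)) + bi)
             for row, bi in zip(W, b)]
    return l

# The load-element tower and the user tower share this code with their own
# parameters; illustrative weights for one tower:
load_layers = [([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),   # hidden layer
               ([[0.7, 0.4]], [0.0])]                     # output layer
y_L = mlp_embed([1.0, 0.0], load_layers)
```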
The calculating module 5 is further configured to calculate the correlation between the user embedded characteristic vector and the load element embedded characteristic vector by the following formula:
R(U, L) = cos(y_U, y_L) = (y_U^T y_L) / (||y_U|| ||y_L||);
where y_U is the user embedded characteristic vector, y_L is the load element embedded characteristic vector, y_U^T denotes the transpose of the user embedded characteristic vector, and R(U, L) denotes the correlation between the user embedded characteristic vector and the load element embedded characteristic vector.
The posterior probability is calculated by the following formula:
P(L_j | U) = exp(γ R(U, L_j)) / Σ_{L'∈L} exp(γ R(U, L'));
where P(L_j | U) is the posterior probability, γ is the smoothing factor of the logistic regression activation function, L represents the set of load elements, U represents the set of users, L' ranges over all load elements in the set of load elements, j is "+" or "-", L+ represents a positive load element sample, and L- represents a negative load element sample.
The summation over L' traverses all load elements within the set L.
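The correlation and posterior-probability steps can be sketched together: cosine similarity between the two embeddings, then a softmax with smoothing factor γ over the load element set. The vectors and the γ value are illustrative.

```python
# Cosine similarity R(U, L) and posterior P(L_j | U) sketch.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def posterior(y_user, load_embeddings, gamma=1.0):
    """P(L_j | U) = exp(gamma R(U, L_j)) / sum_{L'} exp(gamma R(U, L'))."""
    scores = [math.exp(gamma * cosine(y_user, y_l)) for y_l in load_embeddings]
    total = sum(scores)                      # traversal over the set L
    return [s / total for s in scores]

y_U = [0.9, 0.1]
loads = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]]
probs = posterior(y_U, loads)
best = max(range(len(probs)), key=probs.__getitem__)  # most associated load
```

The index with the highest posterior identifies the load element with the highest degree of association with the user.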
The loss function to be minimized is expressed by the following formula:
L(Λ) = -log Π_{(U, L+)} P(L+ | U);
where Λ denotes the parameters of the multilayer perceptron networks, P(L+ | U) represents the posterior probability of a positive sample, U represents the set of users, and L+ represents a positive load element sample.
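The objective above is a negative log-likelihood over the positive (user, load) training pairs. A minimal sketch, assuming the posterior probabilities of the positive samples have already been computed; the probability values are illustrative.

```python
# Negative log-likelihood sketch: -log of the product of P(L+ | U)
# over all positive training pairs.
import math

def loss(positive_posteriors):
    """-log prod P(L+ | U), computed as a sum of -log terms for stability."""
    return -sum(math.log(p) for p in positive_posteriors)

l = loss([0.9, 0.8, 0.95])
```

Minimizing this loss by gradient descent (maximum likelihood) yields the weight matrices and bias terms of both perceptron networks.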
In the load element construction method and apparatus for power consumer portraits provided by this embodiment, model training is performed on the total power of household users and the active power of individual load elements: the features obtained by one-hot encoding are hashed and fed into multilayer perceptron networks, a two-tower model generates the similarity between a user and each load, and finally the load element most strongly associated with the user is obtained.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A method of constructing a load element for a power consumer representation, comprising:
collecting the total power of a household user and the active power of a single load element as a training set;
performing one-hot encoding on the total power of the household users and the active power of a single load element to obtain a user characteristic vector and an active power characteristic vector, respectively;
performing hash processing on the user characteristic vector and the active power characteristic vector respectively to obtain a low-dimensional user characteristic vector and a low-dimensional active power characteristic vector respectively;
inputting the low-dimensional user characteristic vector and the low-dimensional active power characteristic vector into a first multilayer perceptron network and a second multilayer perceptron network respectively for training, and outputting each user embedded characteristic vector and each load element embedded characteristic vector respectively;
and calculating the correlation between each user embedded characteristic vector and each load element embedded characteristic vector based on cosine similarity, converting the correlation result into a posterior probability by using a logistic regression activation function, and obtaining the load element with the highest degree of association with the user.
2. The method of claim 1, wherein the one-hot encoding of the total household user power comprises:
grouping identical total-power values of the household users into the same user feature class;
representing each user feature class in binary to obtain the user characteristic vector;
and performing one-hot encoding on the active power of a single load element to obtain the active power characteristic vector of the single load element comprises:
grouping identical active-power values of the same single load element into the same active-power feature class;
representing each active-power feature class in binary to obtain the active power characteristic vector.
3. The method of claim 1, wherein hashing the user characteristic vector into a low-dimensional user characteristic vector comprises:
adding a start mark and an end mark before and after the user characteristic vector, respectively;
setting a step length and a sliding window based on an N-gram algorithm, recording the user characteristic vector slice captured at each slide of the window, and recording all the slices as a vector representation composed of several numbers, yielding the low-dimensional user characteristic vector;
and hashing the active power characteristic vector into a low-dimensional active power characteristic vector comprises:
adding a start mark and an end mark before and after the active power characteristic vector, respectively;
setting a step length and a sliding window based on the N-gram algorithm, recording the active power characteristic vector slice captured at each slide of the window, and recording all the slices as a vector representation composed of several numbers, yielding the low-dimensional active power characteristic vector.
4. The method according to claim 1, wherein the low-dimensional user characteristic vector and the low-dimensional active power characteristic vector are input into a first multilayer perceptron network and a second multilayer perceptron network respectively for training, and the parameters of the first and second multilayer perceptron networks are obtained by minimizing a loss function.
5. The method of claim 4, wherein the parameters of the first and second multi-layered perceptron networks include weight matrices and bias terms for hidden layers.
6. The method of claim 5, wherein the load element embedded characteristic vector is calculated by the following formulas:
l_i = f(W_i l_{i-1} + b_i), i = 2, …, N-1;
y_L = f(W_N l_{N-1} + b_N);
where y_L is the load element embedded characteristic vector, l_i is the i-th hidden layer of the first multilayer perceptron network, W_i is the weight matrix of the i-th hidden layer of the first multilayer perceptron network, b_i is the bias term of the i-th hidden layer of the first multilayer perceptron network, N is the number of hidden layers of the first multilayer perceptron network, and b_N is the bias term of the output layer of the first multilayer perceptron network;
the user embedded characteristic vector is calculated by the following formulas:
l_m = f(W_m l_{m-1} + b_m), m = 2, …, M-1;
y_U = f(W_M l_{M-1} + b_M);
where y_U is the user embedded characteristic vector, l_m is the m-th hidden layer of the second multilayer perceptron network, W_m is the weight matrix of the m-th hidden layer of the second multilayer perceptron network, b_m is the bias term of the m-th hidden layer of the second multilayer perceptron network, M is the number of hidden layers of the second multilayer perceptron network, and b_M is the bias term of the output layer of the second multilayer perceptron network.
7. The method of claim 6, wherein the correlation between the user embedded characteristic vector and the load element embedded characteristic vector is calculated by the following formula:
R(U, L) = cos(y_U, y_L) = (y_U^T y_L) / (||y_U|| ||y_L||).
8. The method of claim 7, wherein the posterior probability is calculated by the following formula:
P(L_j | U) = exp(γ R(U, L_j)) / Σ_{L'∈L} exp(γ R(U, L'));
where P(L_j | U) is the posterior probability, γ is the smoothing factor of the logistic regression activation function, L represents the set of load elements, U represents the set of users, L' ranges over all load elements in the set of load elements, j is "+" or "-", L+ represents a positive load element sample, and L- represents a negative load element sample.
10. A load element construction apparatus for power consumer portrayal, comprising:
the acquisition module is used for acquiring the total power of the household users and the active power of a single load element as a training set;
the encoding module is used for performing one-hot encoding on the total power of the household users and the active power of a single load element to obtain a user characteristic vector and an active power characteristic vector, respectively;
the processing module is used for respectively carrying out Hash processing on the user characteristic vector and the active power characteristic vector to respectively obtain a low-dimensional user characteristic vector and a low-dimensional active power characteristic vector;
the training module is used for respectively inputting the low-dimensional user characteristic vector and the low-dimensional active power characteristic vector into a first multilayer perceptron network and a second multilayer perceptron network for training, and respectively outputting each user embedded characteristic vector and each load element embedded characteristic vector;
and the calculation module is used for calculating the correlation between each user embedded characteristic vector and the load element embedded characteristic vector based on cosine similarity, converting the correlation calculation result into posterior probability by using a logistic regression activation function, and obtaining the load element with the highest correlation degree with the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210101090.8A CN114511058B (en) | 2022-01-27 | 2022-01-27 | Load element construction method and device for electric power user portrait |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210101090.8A CN114511058B (en) | 2022-01-27 | 2022-01-27 | Load element construction method and device for electric power user portrait |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114511058A true CN114511058A (en) | 2022-05-17 |
CN114511058B CN114511058B (en) | 2023-06-02 |
Family
ID=81549435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210101090.8A Active CN114511058B (en) | 2022-01-27 | 2022-01-27 | Load element construction method and device for electric power user portrait |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114511058B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114881538A (en) * | 2022-06-24 | 2022-08-09 | 东南大学溧阳研究院 | Demand response user selection method based on perceptron |
CN115017333A (en) * | 2022-06-13 | 2022-09-06 | 四川大学 | Method for converting modeless data of material genetic engineering into knowledge map |
CN117910980A (en) * | 2024-03-19 | 2024-04-19 | 国网山东省电力公司信息通信公司 | Method, system, equipment and medium for managing electric power archive data |
Citations (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101292258A (en) * | 2005-08-23 | 2008-10-22 | 株式会社理光 | System and methods for creation and use of a mixed media environment |
JP2013182399A (en) * | 2012-03-01 | 2013-09-12 | Nippon Telegr & Teleph Corp <Ntt> | Load distribution program and load distribution device |
CN104777383A (en) * | 2015-04-16 | 2015-07-15 | 武汉阿帕科技有限公司 | Non-invasive electrical load monitoring and load decomposing device |
CN106936129A (en) * | 2017-03-23 | 2017-07-07 | 东北大学 | Electric load discrimination method based on multi-feature fusion and system |
CN107121210A (en) * | 2017-05-19 | 2017-09-01 | 四川成瑞科技有限公司 | Temp monitoring alarm, method and system |
CN107578288A (en) * | 2017-09-08 | 2018-01-12 | 东南大学 | A kind of non-intrusion type load decomposition method for considering user power utilization pattern differentials |
US20180307969A1 (en) * | 2017-04-20 | 2018-10-25 | Hitachi, Ltd. | Data analysis apparatus, data analysis method, and recording medium |
CN109101620A (en) * | 2018-08-08 | 2018-12-28 | 广州神马移动信息科技有限公司 | Similarity calculating method, clustering method, device, storage medium and electronic equipment |
CN109523057A (en) * | 2018-10-18 | 2019-03-26 | 国网山东省电力公司经济技术研究院 | A kind of regional power grid Methods of electric load forecasting considering economic transition background |
CN109598446A (en) * | 2018-12-09 | 2019-04-09 | 国网江苏省电力有限公司扬州供电分公司 | A kind of tariff recovery Warning System based on machine learning algorithm |
WO2019141040A1 (en) * | 2018-01-22 | 2019-07-25 | 佛山科学技术学院 | Short term electrical load predication method |
CN110212542A (en) * | 2019-05-28 | 2019-09-06 | 国网江苏省电力有限公司 | A kind of accurate cutting load method and device |
US20190325293A1 (en) * | 2018-04-19 | 2019-10-24 | National University Of Singapore | Tree enhanced embedding model predictive analysis methods and systems |
CN110472041A (en) * | 2019-07-01 | 2019-11-19 | 浙江工业大学 | A kind of file classification method towards the online quality inspection of customer service |
US20190384879A1 (en) * | 2018-06-13 | 2019-12-19 | State Grid Jiangsu Electric Power Co., Ltd. | Meteorology sensitive load power estimation method and apparatus |
CN110635833A (en) * | 2019-09-25 | 2019-12-31 | 北京邮电大学 | Power distribution method and device based on deep learning |
CN110928990A (en) * | 2019-10-31 | 2020-03-27 | 南方电网调峰调频发电有限公司 | Method special for recommending standing book data of power equipment based on user portrait |
CN110991263A (en) * | 2019-11-12 | 2020-04-10 | 华中科技大学 | Non-invasive load identification method and system for resisting background load interference |
CN111428355A (en) * | 2020-03-18 | 2020-07-17 | 东南大学 | Modeling method for power load digital statistics intelligent synthesis |
CN111489036A (en) * | 2020-04-14 | 2020-08-04 | 天津相和电气科技有限公司 | Resident load prediction method and device based on electrical appliance load characteristics and deep learning |
CN111612319A (en) * | 2020-05-11 | 2020-09-01 | 上海电力大学 | Load curve depth embedding clustering method based on one-dimensional convolution self-encoder |
CN111651668A (en) * | 2020-05-06 | 2020-09-11 | 上海晶赞融宣科技有限公司 | User portrait label generation method and device, storage medium and terminal |
CN111753058A (en) * | 2020-06-30 | 2020-10-09 | 北京信息科技大学 | Text viewpoint mining method and system |
CN111799782A (en) * | 2020-06-29 | 2020-10-20 | 中国电力科学研究院有限公司 | Power equipment power failure window period correction method and system based on machine learning |
CN111860977A (en) * | 2020-06-30 | 2020-10-30 | 清华大学 | Probability prediction method and probability prediction device for short-term load |
CN111949707A (en) * | 2020-08-06 | 2020-11-17 | 杭州电子科技大学 | Shadow field-based hidden Markov model non-invasive load decomposition method |
WO2021019831A1 (en) * | 2019-07-30 | 2021-02-04 | 特許庁長官が代表する日本国 | Management system and management method |
CN112364203A (en) * | 2020-11-30 | 2021-02-12 | 未来电视有限公司 | Television video recommendation method, device, server and storage medium |
AU2020104000A4 (en) * | 2020-12-10 | 2021-02-18 | Guangxi University | Short-term Load Forecasting Method Based on TCN and IPSO-LSSVM Combined Model |
CN112379159A (en) * | 2020-11-09 | 2021-02-19 | 北华航天工业学院 | Non-invasive household load decomposition method based on electric appliance operation mode |
CN112598303A (en) * | 2020-12-28 | 2021-04-02 | 宁波迦南智能电气股份有限公司 | Non-invasive load decomposition method based on combination of 1D convolutional neural network and LSTM |
WO2021083241A1 (en) * | 2019-10-31 | 2021-05-06 | Oppo广东移动通信有限公司 | Facial image quality evaluation method, feature extraction model training method, image processing system, computer readable medium, and wireless communications terminal |
CN113094198A (en) * | 2021-04-13 | 2021-07-09 | 中国工商银行股份有限公司 | Service fault positioning method and device based on machine learning and text classification |
CN113128494A (en) * | 2019-12-30 | 2021-07-16 | 华为技术有限公司 | Method, device and system for recognizing text in image |
WO2021208516A1 (en) * | 2020-04-17 | 2021-10-21 | 贵州电网有限责任公司 | Non-intrusive load disaggregation method |
CN113554241A (en) * | 2021-09-02 | 2021-10-26 | 国网山东省电力公司泰安供电公司 | User layering method and prediction method based on user electricity complaint behaviors |
CN113687182A (en) * | 2021-08-12 | 2021-11-23 | 湖南工学院 | Household load identification method, program and system based on noise reduction automatic encoder |
CN113761250A (en) * | 2021-04-25 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Model training method, merchant classification method and device |
CN113887833A (en) * | 2021-10-28 | 2022-01-04 | 西安热工研究院有限公司 | Distributed energy user side time-by-time load prediction method and system |
CN113935401A (en) * | 2021-09-18 | 2022-01-14 | 北京三快在线科技有限公司 | Article information processing method, article information processing device, article information processing server and storage medium |
- 2022-01-27 CN CN202210101090.8A patent/CN114511058B/en active Active
Patent Citations (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101292258A (en) * | 2005-08-23 | 2008-10-22 | 株式会社理光 | System and methods for creation and use of a mixed media environment |
JP2013182399A (en) * | 2012-03-01 | 2013-09-12 | Nippon Telegr & Teleph Corp <Ntt> | Load distribution program and load distribution device |
CN104777383A (en) * | 2015-04-16 | 2015-07-15 | 武汉阿帕科技有限公司 | Non-invasive electrical load monitoring and load decomposing device |
CN106936129A (en) * | 2017-03-23 | 2017-07-07 | 东北大学 | Electric load discrimination method based on multi-feature fusion and system |
US20180307969A1 (en) * | 2017-04-20 | 2018-10-25 | Hitachi, Ltd. | Data analysis apparatus, data analysis method, and recording medium |
CN107121210A (en) * | 2017-05-19 | 2017-09-01 | 四川成瑞科技有限公司 | Temp monitoring alarm, method and system |
CN107578288A (en) * | 2017-09-08 | 2018-01-12 | 东南大学 | A kind of non-intrusion type load decomposition method for considering user power utilization pattern differentials |
WO2019141040A1 (en) * | 2018-01-22 | 2019-07-25 | 佛山科学技术学院 | Short term electrical load predication method |
US20190325293A1 (en) * | 2018-04-19 | 2019-10-24 | National University Of Singapore | Tree enhanced embedding model predictive analysis methods and systems |
US20190384879A1 (en) * | 2018-06-13 | 2019-12-19 | State Grid Jiangsu Electric Power Co., Ltd. | Meteorology sensitive load power estimation method and apparatus |
WO2019238096A1 (en) * | 2018-06-13 | 2019-12-19 | 国网江苏省电力有限公司 | Method and apparatus for estimating weather-sensitive load power |
CN109101620A (en) * | 2018-08-08 | 2018-12-28 | 广州神马移动信息科技有限公司 | Similarity calculating method, clustering method, device, storage medium and electronic equipment |
CN109523057A (en) * | 2018-10-18 | 2019-03-26 | 国网山东省电力公司经济技术研究院 | A kind of regional power grid Methods of electric load forecasting considering economic transition background |
CN109598446A (en) * | 2018-12-09 | 2019-04-09 | 国网江苏省电力有限公司扬州供电分公司 | A kind of tariff recovery Warning System based on machine learning algorithm |
CN110212542A (en) * | 2019-05-28 | 2019-09-06 | 国网江苏省电力有限公司 | A kind of accurate cutting load method and device |
CN110472041A (en) * | 2019-07-01 | 2019-11-19 | 浙江工业大学 | A kind of file classification method towards the online quality inspection of customer service |
WO2021019831A1 (en) * | 2019-07-30 | 2021-02-04 | 特許庁長官が代表する日本国 | Management system and management method |
CN110635833A (en) * | 2019-09-25 | 2019-12-31 | 北京邮电大学 | Power distribution method and device based on deep learning |
CN110928990A (en) * | 2019-10-31 | 2020-03-27 | 南方电网调峰调频发电有限公司 | Method special for recommending standing book data of power equipment based on user portrait |
WO2021083241A1 (en) * | 2019-10-31 | 2021-05-06 | Oppo广东移动通信有限公司 | Facial image quality evaluation method, feature extraction model training method, image processing system, computer readable medium, and wireless communications terminal |
CN110991263A (en) * | 2019-11-12 | 2020-04-10 | 华中科技大学 | Non-invasive load identification method and system for resisting background load interference |
CN113128494A (en) * | 2019-12-30 | 2021-07-16 | 华为技术有限公司 | Method, device and system for recognizing text in image |
CN111428355A (en) * | 2020-03-18 | 2020-07-17 | 东南大学 | Modeling method for power load digital statistics intelligent synthesis |
CN111489036A (en) * | 2020-04-14 | 2020-08-04 | 天津相和电气科技有限公司 | Resident load prediction method and device based on electrical appliance load characteristics and deep learning |
WO2021208516A1 (en) * | 2020-04-17 | 2021-10-21 | 贵州电网有限责任公司 | Non-intrusive load disaggregation method |
CN111651668A (en) * | 2020-05-06 | 2020-09-11 | 上海晶赞融宣科技有限公司 | User portrait label generation method and device, storage medium and terminal |
CN111612319A (en) * | 2020-05-11 | 2020-09-01 | 上海电力大学 | Load curve depth embedding clustering method based on one-dimensional convolution self-encoder |
CN111799782A (en) * | 2020-06-29 | 2020-10-20 | 中国电力科学研究院有限公司 | Power equipment power failure window period correction method and system based on machine learning |
CN111860977A (en) * | 2020-06-30 | 2020-10-30 | 清华大学 | Probability prediction method and probability prediction device for short-term load |
CN111753058A (en) * | 2020-06-30 | 2020-10-09 | 北京信息科技大学 | Text viewpoint mining method and system |
CN111949707A (en) * | 2020-08-06 | 2020-11-17 | 杭州电子科技大学 | Shadow field-based hidden Markov model non-invasive load decomposition method |
CN112379159A (en) * | 2020-11-09 | 2021-02-19 | 北华航天工业学院 | Non-invasive household load decomposition method based on electric appliance operation mode |
CN112364203A (en) * | 2020-11-30 | 2021-02-12 | 未来电视有限公司 | Television video recommendation method, device, server and storage medium |
AU2020104000A4 (en) * | 2020-12-10 | 2021-02-18 | Guangxi University | Short-term Load Forecasting Method Based on TCN and IPSO-LSSVM Combined Model |
CN112598303A (en) * | 2020-12-28 | 2021-04-02 | 宁波迦南智能电气股份有限公司 | Non-invasive load decomposition method based on combination of 1D convolutional neural network and LSTM |
CN113094198A (en) * | 2021-04-13 | 2021-07-09 | 中国工商银行股份有限公司 | Service fault positioning method and device based on machine learning and text classification |
CN113761250A (en) * | 2021-04-25 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Model training method, merchant classification method and device |
CN113687182A (en) * | 2021-08-12 | 2021-11-23 | 湖南工学院 | Household load identification method, program and system based on noise reduction automatic encoder |
CN113554241A (en) * | 2021-09-02 | 2021-10-26 | 国网山东省电力公司泰安供电公司 | User layering method and prediction method based on user electricity complaint behaviors |
CN113935401A (en) * | 2021-09-18 | 2022-01-14 | 北京三快在线科技有限公司 | Article information processing method, article information processing device, article information processing server and storage medium |
CN113887833A (en) * | 2021-10-28 | 2022-01-04 | 西安热工研究院有限公司 | Distributed energy user side time-by-time load prediction method and system |
Non-Patent Citations (7)
Title |
---|
PEDRO ALONSO等: ""HyperEmbed:Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing Enabled Embedding of n-gram Statistics"", 《2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS》 * |
PO-SEN HUANG等: ""Learning deep structured semantic models for web search using clickthrough data"", 《PROCEEDINGS OF THE 22ND ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT》 * |
ZHONG C等: ""Research on Electricity Consumption Behavior of Electric Power Users Based on Tag Technology and Clustering Algorithm"", 《PROCEEDINGS OF THE 2018 5TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING》 * |
YU Hui et al.: "A survey of text matching methods for online technology supply and demand", 《情报科学》 (Information Science) * |
YANG Jinnan et al.: "Optimized matching method for NILM load events in multi-appliance aliasing scenarios", 《电网技术》 (Power System Technology) * |
ZHAO Jinquan et al.: "Electricity consumption feature selection and behavior portraits of power users", 《电网技术》 (Power System Technology) * |
GU Jie et al.: "Hierarchical probabilistic load forecasting for massive users considering cluster identification", 《电力系统自动化》 (Automation of Electric Power Systems) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115017333A (en) * | 2022-06-13 | 2022-09-06 | 四川大学 | Method for converting modeless data of material genetic engineering into knowledge map |
CN114881538A (en) * | 2022-06-24 | 2022-08-09 | 东南大学溧阳研究院 | Demand response user selection method based on perceptron |
CN114881538B (en) * | 2022-06-24 | 2022-12-13 | 东南大学溧阳研究院 | Demand response user selection method based on perceptron |
CN117910980A (en) * | 2024-03-19 | 2024-04-19 | 国网山东省电力公司信息通信公司 | Method, system, equipment and medium for managing electric power archive data |
CN117910980B (en) * | 2024-03-19 | 2024-06-11 | 国网山东省电力公司信息通信公司 | Method, system, equipment and medium for managing electric power archive data |
Also Published As
Publication number | Publication date |
---|---|
CN114511058B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114511058B (en) | Load element construction method and device for electric power user portrait | |
CN112214685B (en) | Knowledge graph-based personalized recommendation method | |
Taneja et al. | Modeling user preferences using neural networks and tensor factorization model | |
CN109544306B (en) | Cross-domain recommendation method and device based on user behavior sequence characteristics | |
CN113343125B (en) | Academic accurate recommendation-oriented heterogeneous scientific research information integration method and system | |
CN110263160B (en) | Question classification method in computer question-answering system | |
CN108573399B (en) | Merchant recommendation method and system based on transition probability network | |
CN113569001A (en) | Text processing method and device, computer equipment and computer readable storage medium | |
CN113177141B (en) | Multi-label video hash retrieval method and device based on semantic embedded soft similarity | |
CN109033107A (en) | Image search method and device, computer equipment and storage medium | |
CN111737578A (en) | Recommendation method and system | |
CN110245310B (en) | Object behavior analysis method, device and storage medium | |
CN113505307B (en) | Social network user region identification method based on weak supervision enhancement | |
CN112712127A (en) | Image emotion polarity classification method combined with graph convolution neural network | |
CN112256965A (en) | Neural collaborative filtering model recommendation method based on lambdamat | |
CN111159242B (en) | Client reordering method and system based on edge calculation | |
CN117635238A (en) | Commodity recommendation method, device, equipment and storage medium | |
CN115408603A (en) | Online question-answer community expert recommendation method based on multi-head self-attention mechanism | |
CN114239730B (en) | Cross-modal retrieval method based on neighbor ordering relation | |
CN117216281A (en) | Knowledge graph-based user interest diffusion recommendation method and system | |
CN115203529A (en) | Deep neural network recommendation model and method based on multi-head self-attention mechanism | |
CN111597401A (en) | Data processing method, device, equipment and medium based on graph relation network | |
CN117112891A (en) | Sequence recommendation method for multiple operation behaviors of user | |
CN114996566A (en) | Intelligent recommendation system and method for industrial internet platform | |
CN116452269A (en) | Dynamic graph neural network recommendation method based on multiple behaviors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |