CN110687392A - Power system fault diagnosis device and method based on neural network - Google Patents


Info

Publication number
CN110687392A
CN110687392A (application CN201910821925.5A)
Authority
CN
China
Prior art keywords
layer
unit
neural network
layers
power system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910821925.5A
Other languages
Chinese (zh)
Other versions
CN110687392B (en)
Inventor
李良
庞振江
于同伟
王峥
丁岳
葛维春
黄旭
卢岩
杨文�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd
Beijing Smartchip Microelectronics Technology Co Ltd
National Network Information and Communication Industry Group Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd
Beijing Smartchip Microelectronics Technology Co Ltd
National Network Information and Communication Industry Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Liaoning Electric Power Co Ltd, Beijing Smartchip Microelectronics Technology Co Ltd, National Network Information and Communication Industry Group Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201910821925.5A priority Critical patent/CN110687392B/en
Priority claimed from CN201910821925.5A external-priority patent/CN110687392B/en
Publication of CN110687392A publication Critical patent/CN110687392A/en
Application granted granted Critical
Publication of CN110687392B publication Critical patent/CN110687392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/08 - Locating faults in cables, transmission lines, or networks
    • G01R31/081 - Locating faults in cables, transmission lines, or networks according to type of conductors
    • G01R31/086 - Locating faults in cables, transmission lines, or networks according to type of conductors in power transmission or distribution networks, i.e. with interconnected conductors
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a neural-network-based power system fault diagnosis device and method. The device carries a pipeline accelerator built around the AlexNet neural network model: an off-chip memory stores the acquired power system data and the AlexNet neural network model; a multiplication matrix unit performs the convolution operations; a rectified linear unit performs the activation operations; a pooling unit performs the normalization and pooling operations; a complementary multiplier performs the supplementary multiplications; an on-chip memory stores the data output by the complementary multiplier; a feeding unit distributes data from the off-chip and on-chip memories to the multiplication matrix unit; and a controller drives the data processing of each unit according to the neural network model. The device and method extract the high-dimensional features of the current more accurately, are more sensitive to small current changes, and offer higher diagnostic reliability.

Description

Power system fault diagnosis device and method based on neural network
Technical Field
The present invention relates to the field of power detection technologies, and in particular, to a power system fault diagnosis apparatus and method based on a neural network.
Background
As China builds a smart grid with the ultra-high-voltage (UHV) grid as its backbone, the scale of the power system is growing and voltage levels are rising, which objectively requires power equipment of larger capacity and higher voltage class. Large-capacity transformers place higher demands on relay protection once put into operation, and traditional protection schemes and methods are seriously challenged. Longitudinal differential protection has long served as the main protection of transformers; long-term operating experience shows that it can effectively distinguish internal from external transformer faults, and the main difficulty lies in preventing false operation caused by magnetizing inrush current.
With the application of new technologies in relay protection, the correct-operation rate of grid relay protection in China has improved year by year. Compared with line protection and other protections, however, the correct-operation rate of transformer protection remains slightly low, and the UHV grid imposes still higher requirements on it, severely challenging traditional protection schemes and methods. Research into a fast, reliable, sensitive and technologically advanced digital transformer protection scheme therefore has important theoretical and engineering value.
An artificial neural network is a network formed by the wide interconnection of a large number of neurons, modeled on the neural network of the human brain; its information is embodied in the network structure and the weights of the neuron connections. Artificial neural networks offer strong adaptability, high-speed computation and self-learning capability, have good fault tolerance, and are well suited to nonlinear systems. Since the transformer is a nonlinear system, many scholars at home and abroad have studied the application of artificial neural networks to transformer protection, using the network's strong pattern recognition capability to distinguish magnetizing inrush current from short-circuit current. In this method, shown in Figure 1, an analog-to-digital converter collects signals from the instrument transformer, the collected signals are preprocessed by wavelet transform to extract energy feature values, and the features are then fed into a classical three-layer neural network for comprehensive analysis to distinguish magnetizing inrush current from in-zone faults. The inventors found that the diagnostic reliability of this method is poor.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to provide a neural-network-based power system fault diagnosis device and method in which an AlexNet-oriented pipeline accelerator is designed, so that the high-dimensional features of the current can be extracted more accurately, small current changes are detected with greater sensitivity, and diagnostic reliability is higher.
To achieve the above object, the present invention provides a neural-network-based power system fault diagnosis device comprising an acquisition module, an FPGA chip and a single-chip microcomputer. The acquisition module collects data of the power system; the FPGA chip is connected to the acquisition module and carries a pipeline accelerator based on the AlexNet neural network model; and the single-chip microcomputer is connected to the FPGA chip and cooperates with it to process the collected data. The pipeline accelerator comprises an off-chip memory, a multiplication matrix unit, a rectified linear unit, a pooling unit, a complementary multiplier, an on-chip memory, a feeding unit and a controller. The off-chip memory stores the collected power system data and the AlexNet neural network model; the multiplication matrix unit performs convolution operations on input data; the rectified linear unit is coupled to the multiplication matrix unit and performs activation operations on its output; the pooling unit is coupled to the rectified linear unit and normalizes and pools its output; the complementary multiplier is coupled to the pooling unit and multiplies its output; the on-chip memory is coupled to the complementary multiplier and stores its output; the feeding unit is coupled to the off-chip memory, the on-chip memory and the multiplication matrix unit and distributes data from the two memories to the multiplication matrix unit; and the controller is coupled to the feeding unit and controls the data processing of each unit according to the neural network model.
In an embodiment of the present invention, the AlexNet neural network model has an eight-layer structure. Each of the first through fifth layers includes a convolution operation; the first, second and fifth layers further include activation, normalization and pooling operations; and the sixth through eighth layers are fully connected layers. A weight-first reordering method is used between the first and second layers, an image-first reordering method is used between the third and fourth layers, and the fifth, sixth, seventh and eighth layers are computed layer by layer. In the weight-first reordering method, the top-left pixels of the image and the first group of weights are taken as input, then the weights are held fixed while the image window moves for the convolution calculation; in the image-first reordering method, the input image window is held fixed while all weights are traversed; in the layer-by-layer method, the intermediate result of the previous layer serves as the input of the current layer's convolution calculation.
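The two reordering methods differ only in loop order; the numerical result is identical. A minimal sketch in Python/NumPy (tiny illustrative sizes and names, not the patent's hardware implementation) makes the distinction concrete:

```python
import numpy as np

def conv_weight_first(img, kernels, stride=1):
    """Weight-first reordering: fix one group of weights, slide it
    across the whole image, then move on to the next kernel."""
    k = kernels.shape[1]
    out = (img.shape[0] - k) // stride + 1
    res = np.zeros((kernels.shape[0], out, out))
    for n in range(kernels.shape[0]):          # outer loop: weights
        for y in range(out):
            for x in range(out):               # inner loops: image window
                patch = img[y*stride:y*stride+k, x*stride:x*stride+k]
                res[n, y, x] = np.sum(patch * kernels[n])
    return res

def conv_image_first(img, kernels, stride=1):
    """Image-first reordering: fix one image window, traverse all
    weights, then slide the window."""
    k = kernels.shape[1]
    out = (img.shape[0] - k) // stride + 1
    res = np.zeros((kernels.shape[0], out, out))
    for y in range(out):
        for x in range(out):                   # outer loops: image window
            patch = img[y*stride:y*stride+k, x*stride:x*stride+k]
            for n in range(kernels.shape[0]):  # inner loop: weights
                res[n, y, x] = np.sum(patch * kernels[n])
    return res

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kernels = rng.standard_normal((4, 3, 3))
assert np.allclose(conv_weight_first(img, kernels),
                   conv_image_first(img, kernels))
```

The choice between the two orderings changes only which operands must be held on-chip while the other streams past, which is what makes one or the other preferable for a given pair of adjacent layers.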
In an embodiment of the present invention, the acquisition module includes: an AC input and filter plug-in and an analog-to-digital conversion plug-in. The alternating current input and filtering plug-in is used for collecting alternating current signals of the power system; and the analog-to-digital conversion plug-in is connected with the alternating current input and filtering plug-in and is used for performing analog-to-digital conversion on the alternating current signal.
The invention also provides a neural-network-based power system fault diagnosis method comprising the following steps: the acquisition module collects data of the power system; the off-chip memory stores the collected power system data and the AlexNet neural network model; the multiplication matrix unit performs convolution operations on input data according to control signals from a controller, the controller issuing the control signals according to the AlexNet neural network model; the rectified linear unit performs activation operations on the output of the multiplication matrix unit according to the controller's control signals; the pooling unit normalizes and pools the output of the rectified linear unit according to the controller's control signals; the complementary multiplier multiplies the output of the pooling unit according to the controller's control signals; the on-chip memory stores the output of the complementary multiplier; and the feeding unit distributes data from the off-chip and on-chip memories to the multiplication matrix unit.
In an embodiment of the present invention, the AlexNet neural network model has an eight-layer structure. Each of the first through fifth layers includes a convolution operation; the first, second and fifth layers further include activation, normalization and pooling operations; and the sixth through eighth layers are fully connected layers. A weight-first reordering method is used between the first and second layers, an image-first reordering method is used between the third and fourth layers, and the fifth, sixth, seventh and eighth layers are computed layer by layer. In the weight-first reordering method, the top-left pixels of the image and the first group of weights are taken as input, then the weights are held fixed while the image window moves for the convolution calculation; in the image-first reordering method, the input image window is held fixed while all weights are traversed; in the layer-by-layer method, the intermediate result of the previous layer serves as the input of the current layer's convolution calculation.
In an embodiment of the present invention, the acquisition module collects data of the power system by: collecting an alternating-current signal of the power system through the AC input and filtering plug-in of the acquisition module; and performing analog-to-digital conversion on the alternating-current signal through the analog-to-digital conversion plug-in of the acquisition module.
The invention also provides a pipeline accelerator comprising an off-chip memory, a multiplication matrix unit, a rectified linear unit, a pooling unit, a complementary multiplier, an on-chip memory, a feeding unit and a controller. The off-chip memory stores the collected power system data and the AlexNet neural network model; the multiplication matrix unit performs convolution operations on input data; the rectified linear unit is coupled to the multiplication matrix unit and activates its output; the pooling unit is coupled to the rectified linear unit and normalizes and pools its output; the complementary multiplier is coupled to the pooling unit and multiplies its output; the on-chip memory is coupled to the complementary multiplier and stores its output; the feeding unit is coupled to the off-chip memory, the on-chip memory and the multiplication matrix unit and distributes data from the two memories to the multiplication matrix unit; and the controller is coupled to the feeding unit and controls the data processing of each unit according to the neural network model.
In an embodiment of the present invention, the AlexNet neural network model has an eight-layer structure. Each of the first through fifth layers includes a convolution operation; the first, second and fifth layers further include activation, normalization and pooling operations; and the sixth through eighth layers are fully connected layers. A weight-first reordering method is used between the first and second layers, an image-first reordering method is used between the third and fourth layers, and the fifth, sixth, seventh and eighth layers are computed layer by layer. In the weight-first reordering method, the top-left pixels of the image and the first group of weights are taken as input, then the weights are held fixed while the image window moves for the convolution calculation; in the image-first reordering method, the input image window is held fixed while all weights are traversed; in the layer-by-layer method, the intermediate result of the previous layer serves as the input of the current layer's convolution calculation.
Compared with the prior art, the neural-network-based power system fault diagnosis device and method design an AlexNet-oriented pipeline accelerator that extracts the high-dimensional features of the current more accurately, is more sensitive to small current changes, and offers higher diagnostic reliability. In the AlexNet pipeline, the first and second layers are computed with the weight-first reordering method and the third and fourth layers with the image-first reordering method, which raises the processing speed for power system data; in addition, combining this with the complementary multiplier reduces the number of intermediate variables and the read/write traffic to external memory, further improving operating efficiency.
Drawings
Fig. 1 is a neural network-based power system fault diagnosis apparatus according to an embodiment of the present invention;
FIG. 2 is a pipeline accelerator based on AlexNet neural network model according to an embodiment of the present invention.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
To solve the problems of the prior art, the invention provides a neural-network-based power system fault diagnosis device and method. In the hardware realization of the neural network, a new pipelined accelerator structure is designed: exploiting the possibility of pipelining adjacent convolutional layers, weight-first and image-first data reordering methods are proposed for cross-weight and non-cross-weight convolutional layer operations respectively. Inter-layer pipelining is carried out through the complementary multiplier, so that the convolution of a sub-layer is hidden inside the current layer, reducing the number of intermediate variables and the read/write traffic to external memory. Based on the proposed reordering methods, a reconfigurable AlexNet-oriented convolutional neural network accelerator is then designed and implemented.
As shown in fig. 1, in one embodiment, a neural-network-based power system fault diagnosis device includes an acquisition module 100, an FPGA chip 200 and a single-chip microcomputer 300.
The acquisition module 100 is used to collect data of the power system. It comprises an AC input and filtering plug-in 101 and an analog-to-digital conversion plug-in 102.
The AC input and filtering plug-in 101 collects the alternating-current signals of the power system.
The analog-to-digital conversion plug-in 102 is connected to the AC input and filtering plug-in 101 and performs analog-to-digital conversion on the alternating-current signals.
The FPGA chip 200 is connected to the acquisition module 100 and carries a pipeline accelerator 201 based on the AlexNet neural network model. Optionally, the FPGA chip 200 also has a signal alarm interface, used to input power system signals to the FPGA chip 200 and to send alarm signals to the power system, and a switching-value input/output interface for inputting and outputting the switching values of the power system.
The single chip microcomputer 300 is connected with the FPGA chip 200 and used for processing the acquired data in cooperation with the FPGA chip 200.
The pipeline accelerator 201 includes: an off-chip memory 201a, a multiplication matrix unit 201b, a rectified linear unit 201c, a pooling unit 201d, a complementary multiplier 201e, an on-chip memory 201f, a feeding unit 201g and a controller 201h.
The off-chip memory 201a stores the collected power system data and the AlexNet neural network model. The multiplication matrix unit 201b performs convolution operations on the input data. The rectified linear unit 201c is coupled to the multiplication matrix unit 201b and performs activation operations on its output. The pooling unit 201d is coupled to the rectified linear unit 201c and normalizes and pools its output. The complementary multiplier 201e is coupled to the pooling unit 201d and multiplies its output. The on-chip memory 201f is coupled to the complementary multiplier 201e and stores its output. The feeding unit 201g is coupled to the off-chip memory 201a, the on-chip memory 201f and the multiplication matrix unit 201b, and distributes data from the two memories to the multiplication matrix unit 201b. The controller 201h is coupled to the feeding unit 201g and controls the data processing of each unit according to the neural network model.
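The dataflow through these units can be sketched as a behavioral model. The following Python/NumPy sketch is purely illustrative: the function names mirror the units of the description, the dimensions are made up, the normalization step is omitted, and nothing here is cycle-accurate hardware.

```python
import numpy as np

def multiplication_matrix(x, w):      # convolution realized as matrix multiply
    return x @ w

def rectified_linear(x):              # activation operation (ReLU)
    return np.maximum(x, 0.0)

def pooling(x, size=2):               # max pooling over rows (normalization omitted)
    h = x.shape[0] // size
    return x[:h * size].reshape(h, size, -1).max(axis=1)

def complementary_multiplier(x, scale):
    return x * scale                  # supplementary per-element multiplication

# Off-chip memory holds the acquired data and the model weights.
off_chip = {"data": np.random.default_rng(1).standard_normal((8, 4)),
            "weights": np.random.default_rng(2).standard_normal((4, 4))}
on_chip = {}

# The feeding unit dispatches off-chip data to the multiplication matrix;
# results flow through ReLU, pooling and the complementary multiplier,
# and land in on-chip memory for the next layer's feed.
y = multiplication_matrix(off_chip["data"], off_chip["weights"])
y = rectified_linear(y)
y = pooling(y)
y = complementary_multiplier(y, 0.5)
on_chip["layer_out"] = y
```

Keeping the intermediate result in `on_chip` rather than writing it back off-chip is the point of the design: the feeding unit can serve the next layer without another external memory round trip.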
Specifically, the AlexNet neural network model is an eight-layer structure, the first layer, the second layer, the third layer, the fourth layer and the fifth layer all include convolution operations, wherein the first layer, the second layer and the fifth layer further include activation operations, normalization and pooling operations, and the sixth layer to the eighth layer are fully connected layers.
Specifically, the structure of each layer of AlexNet is as follows:
The first layer is a convolutional layer comprising convolution, activation, pooling and normalization operations.
The original image input to this convolutional layer is 224 × 224 × 3 (an RGB image) and is preprocessed to 227 × 227 × 3 during training. In this layer, convolution is computed with 96 kernels of size 11 × 11 × 3, generating new pixels. Because two graphics processors operate in parallel, the upper and lower halves of the network structure diagram each carry the computation of 48 kernels.
The convolution kernel moves along the image in the x and y directions with a fixed stride, computing convolutions and generating a new feature map whose size is new_feature_size = floor((img_size − filter_size) / stride) + 1, where floor denotes rounding down, img_size is the image size, filter_size is the kernel size and stride is the step length. That is, the image size minus the kernel size, divided by the stride, plus one for the starting position, gives the convolved feature map size. The convolution stride of this layer in AlexNet is 4 pixels, so the feature map generated after the kernel sweeps the image has size (227 − 11)/4 + 1 = 55, i.e. 55 × 55.
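The feature-map size formula can be checked with a small helper (an illustrative sketch; the optional `pad` parameter anticipates the padded layers described below and is not part of the first layer):

```python
import math

def conv_out_size(img_size, filter_size, stride, pad=0):
    """new_feature_size = floor((img_size + 2*pad - filter_size) / stride) + 1"""
    return math.floor((img_size + 2 * pad - filter_size) / stride) + 1

# First AlexNet convolution: 227x227 input, 11x11 kernel, stride 4 -> 55x55
assert conv_out_size(227, 11, 4) == 55
```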
The convolved 55 × 55 pixel layers then undergo an activation operation, generating activated pixel layers whose size is still 2 groups of 55 × 55 × 48.
The activated pixel layers undergo a pooling operation with a 3 × 3 window and a stride of 2; the pooled image size is (55 − 3)/2 + 1 = 27, i.e. the pooled pixels measure 27 × 27 × 96.
The pooled pixel layers are then normalized with a 5 × 5 normalization window; the normalized pixel scale is still 27 × 27 × 96. The 96 pixel layers are divided into two groups of 48, each computed on a separate graphics processor.
The second layer is a convolution layer, which comprises convolution operation, activation operation, pooling operation and normalization operation.
The input of the second layer is the 27 × 27 × 96 pixel layer output by the first layer (split into two groups of 27 × 27 × 48 and placed on two different graphics processors). For convenience of subsequent processing, the top, bottom, left and right edges of each pixel layer are padded with 2 pixels (filled with 0), so the image size becomes (27 + 2 + 2) × (27 + 2 + 2). The kernel size of the second layer is 5 × 5 with a stride of 1 pixel, so by the same formula as in the first layer the pixel layer size after convolution is (27 + 2 + 2 − 5)/1 + 1 = 27, i.e. 27 × 27. This layer uses 256 kernels of size 5 × 5 × 48, likewise divided into two groups of 128 assigned to the two graphics processors, producing two groups of 27 × 27 × 128 convolved pixel layers. These undergo an activation operation, yielding activated pixel layers still of size 27 × 27 × 128. Pooling follows with a 3 × 3 window and a stride of 2; the pooled image size is (27 − 3)/2 + 1 = 13, i.e. the pooled pixels form 2 groups of 13 × 13 × 128 pixel layers. Normalization with a 5 × 5 window then leaves the scale at 2 groups of 13 × 13 × 128, processed by the 2 graphics processors respectively, i.e. 13 × 13 × 256 in total.
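The padded-convolution and pooling sizes of this layer follow from the same formula. A short illustrative check (helper names are ours, not the patent's):

```python
import math

def conv_out_size(img_size, filter_size, stride, pad=0):
    # floor((img_size + 2*pad - filter_size) / stride) + 1
    return math.floor((img_size + 2 * pad - filter_size) / stride) + 1

def pool_out_size(img_size, pool_size, stride):
    return math.floor((img_size - pool_size) / stride) + 1

# Second layer: 27x27 input, padded by 2 per edge, 5x5 kernels, stride 1 -> 27x27
assert conv_out_size(27, 5, 1, pad=2) == 27
# Pooling the 27x27 result with a 3x3 window at stride 2 -> 13x13
assert pool_out_size(27, 3, 2) == 13
```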
The third layer is a convolutional layer comprising convolution and activation operations. Its input is the 2 groups of 13 × 13 × 128 pixel layers output by the second layer. For convenience of subsequent processing, each pixel layer is padded with 1 pixel on each edge, becoming (13 + 1 + 1) × (13 + 1 + 1) × 128, and the pixel layers are distributed across the two graphics processors. Each graphics processor in this layer holds 192 kernels, each of size 3 × 3 × 256, so the kernels in each graphics processor convolve all the data of both groups of 13 × 13 × 128 pixel layers. As the crossed dashed lines in the layer diagram indicate, each graphics processor must process input from all graphics processors of the previous layer. The convolution stride is 1 pixel, and the size after convolution is (13 + 1 + 1 − 3)/1 + 1 = 13, i.e. 13 × 13 × 192 pixel layers in each graphics processor and 13 × 13 × 384 across the 2 graphics processors. The convolved pixel layers are activated, producing activated pixel layers still in 2 groups of 13 × 13 × 192, allocated to the two graphics processors for processing.
The fourth layer is a convolutional layer comprising convolution and activation operations. Its input is the 2 groups of 13 × 13 × 192 pixel layers output by the third layer; as in the third layer, each pixel layer is padded with 1 pixel on each edge, becoming (13 + 1 + 1) × (13 + 1 + 1) × 192, and the pixel layers are distributed across the two graphics processors. Each graphics processor holds 192 kernels of size 3 × 3 × 192 (unlike the third layer, there are no dashed cross-connections in the fourth layer, i.e. the graphics processors do not communicate). With a stride of 1 pixel, the size after convolution is (13 + 1 + 1 − 3)/1 + 1 = 13; each graphics processor produces 13 × 13 × 192 pixel layers, 13 × 13 × 384 across both. The convolved pixel layers are activated, yielding activated pixel layers still in 2 groups of 13 × 13 × 192, allocated to the two graphics processors.
The fifth layer is a convolutional layer comprising convolution, activation and pooling operations. Its input is the 2 groups of 13 × 13 × 192 pixel layers output by the fourth layer; for convenience of subsequent processing, each pixel layer is padded with 1 pixel on each edge, becoming (13 + 1 + 1) × (13 + 1 + 1), and the 2 groups of pixel layer data are sent to 2 different graphics processors. Each graphics processor holds 128 kernels of size 3 × 3 × 192; with a stride of 1 pixel, the size after convolution is (13 + 1 + 1 − 3)/1 + 1 = 13, i.e. 13 × 13 × 128 pixel layers per graphics processor and 13 × 13 × 256 across both. The convolved pixel layers are activated, remaining 2 groups of 13 × 13 × 128, processed separately by the two graphics processors. The 2 groups then undergo pooling on the 2 graphics processors with a 3 × 3 window and a stride of 2; the pooled image size is (13 − 3)/2 + 1 = 6, i.e. two groups of 6 × 6 × 128 pixel layer data, 6 × 6 × 256 in total.
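The spatial sizes through the third to fifth layers can be verified in a few lines (an illustrative check, not part of the patent; channel counts are noted in comments only):

```python
import math

def conv_out(size, k, stride=1, pad=0):
    return math.floor((size + 2 * pad - k) / stride) + 1

def pool_out(size, k, stride):
    return math.floor((size - k) / stride) + 1

s = 13                      # spatial size entering the third layer
s = conv_out(s, 3, pad=1)   # third layer:  13 -> 13 (384 channels)
s = conv_out(s, 3, pad=1)   # fourth layer: 13 -> 13 (384 channels)
s = conv_out(s, 3, pad=1)   # fifth layer:  13 -> 13 (256 channels)
s = pool_out(s, 3, 2)       # 3x3 pooling, stride 2: 13 -> 6
assert s == 6               # fifth-layer output: 6 x 6 x 256
```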
The sixth layer is a fully connected layer. The input data of the sixth layer is the output of the fifth layer, of size 6 × 6 × 256. This layer has 4096 convolution kernels, each of size 6 × 6 × 256. It is called a fully connected layer because the size of each convolution kernel is exactly the same as the size of the feature map (the input) to be processed, i.e., each coefficient in a convolution kernel is multiplied by exactly one pixel value of the feature map, one to one. Since a convolution kernel has the same size as the feature map, each kernel yields only one value after the convolution operation, so the pixel-layer size after convolution is 4096 × 1 × 1, namely 4096 neurons. The 4096 operation results then pass through the ReLU activation function to generate 4096 values. Finally, 4096 result values are output after a dropout operation for suppressing overfitting.
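Because each 6 × 6 × 256 kernel exactly covers the 6 × 6 × 256 input, the "convolution" collapses to a single dot product per kernel, which is exactly a fully connected layer. A minimal NumPy sketch of this equivalence (random data, names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((6, 6, 256))    # fifth-layer output
kernels = rng.standard_normal((4096, 6, 6, 256))  # 4096 kernels, each the size of the input

# Kernel size == input size, so each kernel produces exactly one scalar:
# the 4096 x 1 x 1 output described in the text.
logits = np.tensordot(kernels, feature_map, axes=([1, 2, 3], [0, 1, 2]))
activated = np.maximum(logits, 0.0)               # ReLU activation

assert logits.shape == (4096,)
assert np.all(activated >= 0.0)
```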
The seventh layer is a fully connected layer. The 4096 data output by the sixth layer are fully connected with the 4096 neurons of the seventh layer; 4096 data are generated after the activation operation, and 4096 data are output after the dropout operation.
The eighth layer is a fully connected layer. The 4096 data output by the seventh layer are fully connected with the 1000 neurons of the eighth layer, and the trained network outputs 1000 floating-point values, namely the operation result. From these values, the type of power system fault can be predicted.
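The patent does not spell out how the 1000 output values are mapped to a fault type; the conventional choice is to take the index of the largest value (argmax). A hypothetical sketch, with made-up class names for illustration:

```python
import numpy as np

def predict_fault(output_scores, class_names):
    """Map the eighth-layer output vector to a fault label via argmax (conventional choice)."""
    idx = int(np.argmax(output_scores))
    return idx, class_names[idx]

# Hypothetical example with three illustrative fault classes
scores = np.array([0.1, 2.7, -0.3])
labels = ["normal", "phase-to-ground short", "phase-to-phase short"]
assert predict_fault(scores, labels) == (1, "phase-to-ground short")
```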
In the neural network model, a weight-first reordering method is selected between the first layer and the second layer, an image-first reordering method is selected between the third layer and the fourth layer, and a layer-by-layer calculation method is adopted for the fourth, fifth, sixth, seventh, and eighth layers. Specifically, the first layer inputs a 224 × 224 × 3 image and convolves it with 96 weight groups of size 11 × 11. To perform the pooling calculation of the second layer, the first layer is required to output 96 × 7 × 7 results, which takes about 312000 cycles. The second layer requires about 96 × 7 × 7 × 128 cycles to produce the 128 × 1 × 1 output result. Therefore, the processing time of the second layer can be covered by the first layer, completing the pipeline implementation. The other convolutional layers are pipelined similarly to the first two layers, except that image-first (IF) reordering is used.
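The two reorderings differ only in loop order: weight-first holds one group of weights fixed while the image positions move, whereas image-first holds one input tile fixed while all weight groups are traversed. A minimal 2-D sketch (pure NumPy, illustrative only — the patent's hardware operates on pixel layers, not Python arrays):

```python
import numpy as np

def conv_weight_first(image, weights, k):
    """Weight-first reordering: outer loop fixes a weight group, inner loops move the image."""
    n_out = image.shape[0] - k + 1
    out = np.zeros((len(weights), n_out, n_out))
    for w_idx, w in enumerate(weights):          # fix one group of weights
        for i in range(n_out):                   # then slide over image positions
            for j in range(n_out):
                out[w_idx, i, j] = np.sum(image[i:i + k, j:j + k] * w)
    return out

def conv_image_first(image, weights, k):
    """Image-first reordering: outer loops fix an input tile, inner loop traverses all weights."""
    n_out = image.shape[0] - k + 1
    out = np.zeros((len(weights), n_out, n_out))
    for i in range(n_out):                       # fix one input tile
        for j in range(n_out):
            tile = image[i:i + k, j:j + k]
            for w_idx, w in enumerate(weights):  # then traverse every weight group
                out[w_idx, i, j] = np.sum(tile * w)
    return out

rng = np.random.default_rng(1)
img = rng.standard_normal((6, 6))
ws = [rng.standard_normal((3, 3)) for _ in range(4)]
# Both orderings compute the same result; they differ only in data-reuse pattern,
# which is what makes one or the other better for pipelining a given layer pair.
assert np.allclose(conv_weight_first(img, ws, 3), conv_image_first(img, ws, 3))
```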
It should be noted that the above-described power system fault diagnosis apparatus based on a neural network needs to perform training of the neural network before performing power system fault diagnosis.
In summary, in the neural network-based power system fault diagnosis apparatus of this embodiment, the pipeline accelerator 201 is designed around AlexNet, so that the high-dimensional characteristics of the current can be extracted more accurately; the apparatus is therefore more sensitive to small current changes and has higher diagnosis reliability. In addition, inter-layer pipelined calculation is performed in combination with the complementary multiplier 201e, so that the convolution operation of a sublayer can be hidden within the layer's computation; this reduces the number of intermediate variables, lowers the read/write traffic to the external memory, and further improves operation efficiency.
Based on the same inventive concept, the invention also provides a power system fault diagnosis method based on the neural network. As shown in fig. 2, in an embodiment, the method comprises: the acquisition module 100 acquires data of the power system; the off-chip memory 201a stores the acquired data of the power system and the AlexNet neural network model; the multiplication matrix unit 201b performs convolution operation on input data according to a control signal of the controller 201h, wherein the controller 201h sends the control signal according to the AlexNet neural network model; the modified linear unit 201c performs an activation operation on the data output by the multiplication matrix unit 201b according to the control signal of the controller 201 h; the pooling unit 201d performs normalization and pooling operations on the data output by the modified linear unit 201c according to the control signal of the controller 201 h; the complementary multiplier 201e multiplies the data output from the pooling unit 201d according to the control signal of the controller 201 h; the on-chip memory 201f stores the data output by the complementary multiplier 201 e; the feeding unit 201g allocates the data in the off-chip memory 201a and the on-chip memory 201f to the multiplication matrix unit 201 b.
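The per-layer dataflow of this method can be pictured as a fixed chain of units dispatched by the controller. The following schematic uses hypothetical Python stand-ins for the hardware units (the real units are FPGA blocks; function names here are illustrative, not the patent's interfaces):

```python
import numpy as np

def run_layer(feed, multiply_matrix, relu, pool, complement_multiply):
    """Schematic per-layer dataflow: feed -> convolution -> activation -> pooling -> inter-layer multiply."""
    x = feed()                       # feeding unit pulls data from off-chip/on-chip memory
    x = multiply_matrix(x)           # multiplication matrix unit: convolution operation
    x = relu(x)                      # modified linear unit: activation operation
    x = pool(x)                      # pooling unit: normalization and pooling operations
    x = complement_multiply(x)       # complementary multiplier: pipelined inter-layer multiply
    return x                         # result goes to on-chip memory for the next layer

# Toy stand-ins for the hardware units (illustrative only)
out = run_layer(
    feed=lambda: np.ones((4, 4)),
    multiply_matrix=lambda x: x * 2.0,
    relu=lambda x: np.maximum(x, 0.0),
    pool=lambda x: x.reshape(2, 2, 2, 2).max(axis=(1, 3)),  # 2x2 max pooling
    complement_multiply=lambda x: x * 1.0,
)
assert out.shape == (2, 2)
```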
Specifically, the AlexNet neural network model is an eight-layer structure, the first layer, the second layer, the third layer, the fourth layer and the fifth layer all include convolution operations, wherein the first layer, the second layer and the fifth layer further include activation operations, normalization and pooling operations, and the sixth layer to the eighth layer are fully connected layers. The method comprises the steps of selecting a reordering method with priority of weight between a first layer and a second layer in a neural network model, selecting a reordering method with priority of images between a third layer and a fourth layer, and adopting a layer-by-layer calculation method for the fourth layer, the fifth layer, the sixth layer, the seventh layer and the eighth layer.
In one embodiment, the acquisition module 100 collecting data of the power system includes: collecting an alternating current signal of the power system through the alternating current input and filtering plug-in 101 of the acquisition module 100; and performing analog-to-digital conversion on the alternating current signal through the analog-to-digital conversion plug-in 102 of the acquisition module 100.
To verify the effect, an embodiment of the present invention (using the AlexNet pipeline accelerator) was compared in simulation with a prior-art method (not using the AlexNet pipeline accelerator). The memory occupation of the first five layers of operations is shown in Table 1.
TABLE 1
[Table 1 appears as an image (BDA0002187785240000121) in the original publication; its memory-occupation figures are not reproduced here.]
Simulation results show that the computation of the first five AlexNet layers achieves about 43% higher implementation efficiency than the case without the AlexNet pipeline accelerator. Implemented in a 65 nm process, the AlexNet pipeline accelerator reaches a main clock frequency of 200 MHz and delivers 24 GFLOPS of computing capability at a power consumption of 350 mW.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (9)

1. A neural network-based power system fault diagnosis apparatus, comprising:
the acquisition module is used for acquiring data of the power system;
the FPGA chip is connected with the acquisition module and is provided with a pipeline accelerator based on an AlexNet neural network model;
a singlechip connected with the FPGA chip and used for processing the acquired data in cooperation with the FPGA chip,
wherein the pipeline accelerator comprises:
the off-chip memory is used for storing the collected power system data and the AlexNet neural network model;
a multiplication matrix unit for performing a convolution operation on the input data;
the correction linear unit is coupled with the multiplication matrix unit and is used for performing activation operation on the data output by the multiplication matrix unit;
the pooling unit is coupled with the correction linear unit and is used for normalizing and pooling data output by the correction linear unit;
a complementary multiplier, coupled to the pooling unit, for multiplying data output by the pooling unit;
an on-chip memory coupled to the complementary multiplier for storing data output by the complementary multiplier;
a feeding unit, coupled to the off-chip memory, the on-chip memory and the multiplication matrix unit, for distributing data in the off-chip memory and the on-chip memory to the multiplication matrix unit;
and the controller is coupled with the feeding unit and used for controlling the data processing process of each unit according to the neural network model.
2. The neural network-based power system fault diagnosis device according to claim 1, wherein the AlexNet neural network model has an eight-layer structure, each of the first, second, third, fourth and fifth layers includes convolution operations, wherein the first, second and fifth layers further include activation, normalization and pooling operations, and the sixth to eighth layers are fully connected layers, wherein in the neural network model a weight-first reordering method is used between the first and second layers, an image-first reordering method is used between the third and fourth layers, and a layer-by-layer calculation method is used for the fourth, fifth, sixth, seventh and eighth layers,
the weight-first reordering method takes the upper-left pixel of the image and the first group of weights as input, then fixes the weights and moves the image to carry out the convolution calculation; the image-first reordering method fixes the input image and traverses all the weights to carry out the convolution calculation; and the layer-by-layer calculation method takes the intermediate result of the previous layer as the input of the current layer's convolution calculation.
3. The neural network-based power system fault diagnosis device of claim 1, wherein the acquisition module comprises:
the alternating current input and filtering plug-in is used for collecting alternating current signals of the power system;
and the analog-to-digital conversion plug-in is connected with the alternating current input and filtering plug-in and is used for performing analog-to-digital conversion on the alternating current signal.
4. A power system fault diagnosis method based on a neural network is characterized by comprising the following steps:
the acquisition module acquires data of the power system;
the off-chip memory stores the acquired data of the power system and the AlexNet neural network model;
the multiplication matrix unit executes convolution operation on input data according to a control signal of a controller, wherein the controller sends the control signal according to the AlexNet neural network model;
the correction linear unit carries out activation operation on the data output by the multiplication matrix unit according to the control signal of the controller;
the pooling unit is used for carrying out normalization and pooling operation on the data output by the correction linear unit according to the control signal of the controller;
the complementary multiplier multiplies the data output by the pooling unit according to the control signal of the controller;
the on-chip memory stores the data output by the complementary multiplier;
a feeding unit assigns data in the off-chip memory and the on-chip memory to the multiplication matrix unit.
5. The neural network-based power system fault diagnosis method according to claim 4, wherein the AlexNet neural network model is an eight-layer structure, each of first, second, third, fourth and fifth layers includes convolution operations, wherein the first, second and fifth layers further include activation, normalization and pooling operations, and sixth to eighth layers are fully-connected layers, wherein a reordering method of weight preference is selected between the first and second layers, a reordering method of image preference is selected between the third and fourth layers, and the fourth, fifth, sixth, seventh and eighth layers employ a layer-by-layer calculation method,
the weight-first reordering method takes the upper-left pixel of the image and the first group of weights as input, then fixes the weights and moves the image to carry out the convolution calculation; the image-first reordering method fixes the input image and traverses all the weights to carry out the convolution calculation; and the layer-by-layer calculation method takes the intermediate result of the previous layer as the input of the current layer's convolution calculation.
6. The neural network-based power system fault diagnosis method of claim 4, wherein the collecting module collecting data of the power system comprises:
collecting an alternating current signal of the power system through an alternating current input of the collection module and the filter plug-in;
and performing analog-to-digital conversion on the alternating current signal through an analog-to-digital conversion plug-in of the acquisition module.
7. A pipeline accelerator, comprising:
the off-chip memory is used for storing the collected power system data and the AlexNet neural network model;
a multiplication matrix unit for performing a convolution operation on the input data;
the correction linear unit is coupled with the multiplication matrix unit and is used for performing activation operation on the data output by the multiplication matrix unit;
the pooling unit is coupled with the correction linear unit and is used for normalizing and pooling data output by the correction linear unit;
a complementary multiplier, coupled to the pooling unit, for multiplying data output by the pooling unit;
an on-chip memory coupled to the complementary multiplier for storing data output by the complementary multiplier;
a feeding unit, coupled to the off-chip memory, the on-chip memory and the multiplication matrix unit, for distributing data in the off-chip memory and the on-chip memory to the multiplication matrix unit;
and the controller is coupled with the feeding unit and used for controlling the data processing process of each unit according to the neural network model.
8. The pipeline accelerator of claim 7, wherein the AlexNet neural network model is an eight-layer structure, each of the first, second, third, fourth, and fifth layers comprises convolution operations, wherein the first, second, and fifth layers further comprise activation, normalization, and pooling operations, and the sixth through eighth layers are fully-connected layers, wherein weight-first reordering is selected between the first and second layers, image-first reordering is selected between the third and fourth layers, and layer-by-layer calculation is used for the fourth, fifth, sixth, seventh, and eighth layers in the neural network model,
the weight-first reordering method takes the upper-left pixel of the image and the first group of weights as input, then fixes the weights and moves the image to carry out the convolution calculation; the image-first reordering method fixes the input image and traverses all the weights to carry out the convolution calculation; and the layer-by-layer calculation method takes the intermediate result of the previous layer as the input of the current layer's convolution calculation.
9. An FPGA chip comprising the pipeline accelerator of claim 7 or 8.
CN201910821925.5A 2019-09-02 Power system fault diagnosis device and method based on neural network Active CN110687392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910821925.5A CN110687392B (en) 2019-09-02 Power system fault diagnosis device and method based on neural network

Publications (2)

Publication Number Publication Date
CN110687392A true CN110687392A (en) 2020-01-14
CN110687392B CN110687392B (en) 2024-05-31

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220628A (en) * 2021-04-29 2021-08-06 深圳供电局有限公司 Processor and edge computing device for power grid anomaly detection
CN114282608A (en) * 2021-12-22 2022-04-05 国网安徽省电力有限公司 Hidden fault diagnosis and early warning method and system for current transformer

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102065A (en) * 2018-06-28 2018-12-28 广东工业大学 A kind of convolutional neural networks accelerator based on PSoC
CN211453834U (en) * 2019-09-02 2020-09-08 北京智芯微电子科技有限公司 Assembly line accelerator, power system fault diagnosis device and FPGA chip


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant