CN115546224B - Automatic fault identification and control method for motor operation process - Google Patents


Info

Publication number
CN115546224B
CN115546224B (application CN202211553078.7A)
Authority
CN
China
Prior art keywords
image
spark
motor
matrix
visible light
Prior art date
Legal status
Active
Application number
CN202211553078.7A
Other languages
Chinese (zh)
Other versions
CN115546224A (en)
Inventor
杜川 (Du Chuan)
张清枝 (Zhang Qingzhi)
郭晋飞 (Guo Jinfei)
Current Assignee
Xinxiang University
Original Assignee
Xinxiang University
Priority date
Filing date
Publication date
Application filed by Xinxiang University
Priority to CN202211553078.7A
Publication of CN115546224A
Application granted
Publication of CN115546224B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for automatically identifying and controlling faults during motor operation. An image sensor continuously captures dynamic image signals in real time while the motor runs; an intelligent image-recognition method identifies the image signals of faults generated by the motor; and the motor is controlled to stop according to these signals, thereby protecting the motor.

Description

Automatic fault identification and control method for motor operation process
Technical Field
The invention belongs to the field of motor control and detection, and particularly relates to a method for automatically identifying and controlling a fault in a motor operation process.
Background
The most common fault of a DC motor is a commutation fault, whose most visible symptom is excessive commutation sparking. Since no practical instrument can accurately grade the commutation spark, monitoring is typically done by inspection workers in production practice. Manual observation relies on the worker's subjective impression of the spark characteristics and accumulated personal experience, so the observation results are inevitably affected by subjective factors.
Research on automatic spark monitoring began abroad in the mid-1980s and was developed into instruments. For example, a paper entitled "Spark monitoring system and data processing principle", published at the 1992 international motor conference, discloses a commutation-spark monitoring system for motor operation based on photoelectric devices. Its problems are that the optical fibers are fragile and easily broken by the high-speed airflow inside a running motor, and that a large motor requires hundreds of fibers, which seriously reduces the reliability of the whole detection system. Methods improved in recent years, such as high-frequency disturbance detection based on the main-pole magnetic field, electric-spectrum monitoring, and radio-wave monitoring, cannot capture the color characteristics of the spark, which reduces detection accuracy.
With recent advances in image-sensor technology, industrial detection methods based on image signals have become popular. For motor monitoring, an image-sensor-based method can capture the color characteristics of the sparks while also acquiring their directional distribution; moreover, an image sensor is easy to fix in the test environment, is non-contact and therefore unaffected by airflow, and offers good reliability.
However, conventional image-processing methods such as edge extraction generally lack sufficient spark-recognition accuracy and are easily disturbed by the environment. Neural networks have also been proposed for identification, but conventional networks such as CNNs and FNNs, or simple variants of them, are usually adopted, with no network model structure specially designed for motor sparks; the result is low detection accuracy, a large computational burden, and a high demand for training samples. In particular, existing neural-network recognition feeds the collected images into the network directly or after only simple processing, which places high demands on the network structure and yields poor results: the input data are not optimized for the network structure, nor is the network structure optimized for the input data.
Disclosure of Invention
The invention provides a method for automatically identifying and controlling faults during motor operation, based on image-sensor signal collection and intelligent automatic image processing. An image sensor continuously captures dynamic image signals in real time while the motor runs; an intelligent image-recognition method identifies the image signals of faults generated by the motor; and the motor is controlled to stop according to these signals, thereby protecting the motor.
A method for automatically identifying and controlling faults during motor operation includes the following steps. Spark images of the motor are collected to form a visible-light digital image I_v and an ultraviolet digital image I_u. Φ(p, q) denotes an element of the matrix Φ, (p, q) a coordinate in the matrix Φ, and Y the total number of rows of the matrix I_u. The elements of Φ are assigned by a rule (shown only as an image in the original) in which % denotes the remainder operation and χ is a matrix formed from 4×4 orthogonal bases (also shown only as an image). H_u is then defined as the product of the matrix Φ and the matrix I_u:

H_u = Φ × I_u

The number of pixels of H_u whose value exceeds a threshold is counted; if that count exceeds a second threshold, a spark is present in the image, otherwise it is not.
If a spark is present in the collected image, spark pre-positioning is performed on the ultraviolet digital image I_u:

R_u(x, y) = Σ_{i,j} T_u(i, j) · I_u(x + i, y + j)

where (i, j) denotes a position coordinate in the template T_u, and (x + i, y + j) denotes a convolution offset coordinate centred on the coordinate (x, y) in the original image.

The position (x₀, y₀) of the maximum of R_u is taken as the preset spark-centre point of the ultraviolet image I_u, and the initial coordinates of all visible-light images are normalized to the spark centre position.
Spark detection is then performed on the coordinate-normalized visible-light image with a neural network structured as: a convolution feature-extraction layer, a multi-scale pooling layer, an orthogonal optimization layer, and an output layer.

The multi-scale pooling layer output (the formula appears only as an image in the original) combines the previous layer's convolution feature-extraction results Θ_{4m}, Θ_{4m+1}, Θ_{4m+2}, Θ_{4m+3} with pooling parameters κ₁, κ₂, κ₃, κ₄, a linear offset β₁, and an excitation function λ.
After a spark is detected, the motor is judged faulty, a stop signal is sent to the motor, and the motor is controlled to stop.
The model is trained using the back-propagation (BP) algorithm.
The values of the convolution kernels, the pooling parameters, and the linear-mapping parameters are determined through training, completing the training.
Before training, a number of visible-light images with sparks are prepared and, after translation normalization, used as positive samples.
A number of visible-light images without sparks, after translation normalization, are used as negative samples.
Positive samples are given the truth value 1 and negative samples the truth value 0.
The samples are fed into the model for iterative computation; in each round the sample truth value is compared with the model output, and iteration continues until convergence, completing the training.
The method is implemented with a field processor and a server.
The field processor performs acquisition-image preprocessing; the server detects and identifies the preprocessed images with the neural network and sends a control signal to the controller.
The invention has the advantages that:
1. The invention captures images with two sensors sensitive to different optical bands, the visible-light band and the ultraviolet band. Sparks generated by the motor produce a stronger response in the ultraviolet band than in the visible band, improving detection accuracy. The collected images are also processed with a special template and normalized in position, which removes the motor's environmental noise in a targeted way; feeding the resulting data into the neural network improves detection accuracy, reduces the required structural complexity of the network, and improves computational efficiency.
2. The neural-network structure is optimized: a dedicated excitation function is used, multi-dimensional pooling is performed within one layer, and orthogonal optimization is applied, reducing the correlation of the parameters. The network structure thus better suits the input data produced by the preceding steps, achieving high-accuracy recognition with a simpler network structure and a lower computational burden.
Detailed Description
Step 1: acquisition of dynamic image signals in motor operation process
A camera with an image sensor is mounted at a position from which the spark region inside the motor can be fully captured, with its optical lens aimed at the position of the spark line, and is used to collect dynamic image signals during motor operation.
The dynamic image signals are captured by two sensors sensitive to different optical bands, one in the visible-light band and one in the ultraviolet band. Sparks generated by the motor produce a stronger response in the ultraviolet band than in the visible band, improving detection accuracy; collecting the visible band at the same time allows the distribution position of the sparks to be identified better. Compared with existing approaches that use visible-light detection only, this configuration improves detection accuracy and reduces the fault false-alarm rate.
The image signals collected in the visible-light and ultraviolet bands are collected in pairs in time sequence; the two images collected at the same instant are quantized and encoded to form a visible-light digital image and an ultraviolet digital image, denoted I_v and I_u respectively. Each image is represented by a two-dimensional matrix, one element of which is one pixel of the image. (x, y) denotes the coordinates of a pixel in an image, i.e. the coordinates of a matrix element, where x and y are the subscripts in the row and column directions of the matrix. I_v(x, y) and I_u(x, y) denote the pixel values at coordinate (x, y) in I_v and I_u respectively.
The ultraviolet digital image I_u is preprocessed to detect the probability that a motor spark is present in it.

A linear transformation is applied to I_u. For convenience of mathematical expression, subscripts of matrix and vector elements in the present invention are counted from 0. Let X denote the total number of columns of the matrix I_u, i.e. the width of the image, and Y the total number of rows, i.e. the height of the image. Define a Y × Y square matrix Φ.
Φ(p, q) is an element of the matrix Φ, and (p, q) is a coordinate in Φ; Y is the total number of rows of I_u. The elements of Φ are assigned by a rule (shown only as an image in the original) in which % denotes the remainder operation and χ is a matrix formed from 4×4 orthogonal bases (also shown only as an image). H_u is then defined as the product of the matrices Φ and I_u:

H_u = Φ × I_u … (1)
The matrix χ is designed according to the acquisition characteristics of the ultraviolet image and the image characteristics of the spark. After the linear transformation of the original image I_u by Φ, the original features of the spark portion are retained while sensor noise in the ultraviolet image is suppressed, improving the signal-to-noise ratio of the ultraviolet image.
The number of pixels of H_u whose value exceeds a threshold (denoted θ₁) is counted; if this count exceeds a second threshold (denoted θ₂), a spark is deemed present in the image, otherwise absent. The preferred values of θ₁ (with image pixel values normalized to the range 0-1) and θ₂, determined by testing, appear only as images in the original. This preliminary spark filtering through the ultraviolet image in step 1 further reduces the false-alarm rate and improves detection accuracy.
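The step-1 pre-detection can be sketched numerically. The patent gives the element-assignment rule for Φ, the 4×4 orthogonal basis χ, and the preferred thresholds only as formula images, so the Hadamard-based `H4`, the block-diagonal tiling in `build_phi`, and the values `theta1`/`theta2` below are illustrative assumptions, not the patent's actual choices:

```python
import numpy as np

# Hypothetical stand-in for the 4x4 orthogonal basis chi: a normalized
# Hadamard matrix (H4 @ H4.T == identity). The patent gives chi only as an image.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]]) / 2.0

def build_phi(Y):
    """Y x Y transform Phi, here tiled block-diagonally from chi via the
    p % 4, q % 4 indices. The patent's actual element-assignment rule is
    given only as an image, so this tiling is an assumption."""
    assert Y % 4 == 0
    Phi = np.zeros((Y, Y))
    for b in range(Y // 4):
        Phi[4 * b:4 * b + 4, 4 * b:4 * b + 4] = H4
    return Phi

def spark_present(I_u, theta1=0.2, theta2=10):
    """Step-1 pre-detection: H_u = Phi x I_u (eq. 1), then count the pixels
    above theta1; a spark is deemed present if the count exceeds theta2.
    Both threshold values here are illustrative, not the patent's."""
    H_u = build_phi(I_u.shape[0]) @ I_u
    return int((H_u > theta1).sum()) > theta2

rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.05, size=(64, 64))   # spark-free UV frame
bright = dark.copy()
bright[30:34, 30:34] = 1.0                     # simulated spark blob
print(spark_present(dark), spark_present(bright))   # False True
```

Because χ has zero-mean rows apart from the first of each block, the transform suppresses locally uniform sensor background while a bright spark blob still produces rows of H_u well above the threshold.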
Step 2: Spark pre-positioning and visible-light image coordinate normalization based on ultraviolet-image motor spark pre-detection
If a spark is present in the collected image, spark pre-positioning is performed on the ultraviolet digital image I_u, i.e. a template T_u is convolved with the ultraviolet image I_u to obtain a convolution map R_u:

R_u(x, y) = Σ_{i,j} T_u(i, j) · I_u(x + i, y + j) … (2)

where (i, j) denotes a position coordinate in the template T_u, and (x + i, y + j) denotes a convolution offset coordinate centred on the coordinate (x, y) in the original image.

The template T_u (its values are given only as an image in the original) was learned from a large amount of ultraviolet spark image data by a Bayesian learning method; for convenience of use, 3×3 median filtering was further applied to the learning result to smooth the template, which was then re-quantized with a step of 0.05 to obtain the final template.
For a collected ultraviolet image I_u, R_u is calculated according to equation 2, and the position of its maximum, denoted (x₀, y₀), is taken as the preset spark-centre point of the ultraviolet image I_u.

The visible-light image I_v paired with I_u is spatially translated according to (x₀, y₀), normalizing the initial coordinates of all visible-light images to the spark centre position; the translated visible-light image is denoted accordingly.

The translation normalizes the initial coordinates of all visible images to the spark centre, which moves the image portions of the environment not associated with motor sparks to the periphery, where they receive a lower detection weight in the subsequent steps; this better removes environmental noise from the image.
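The pre-positioning and translation steps above reduce to a template correlation, an argmax, and a roll of the paired visible frame. The learned template is given only as an image in the patent, so the uniform 3×3 `T_u` and the synthetic UV frame below are stand-ins:

```python
import numpy as np

def prelocate_spark(I_u, T_u):
    """Step-2 pre-positioning (eq. 2): slide the template T_u over the UV
    image, accumulate R_u(x, y) = sum_{i,j} T_u(i,j) * I_u(x+i, y+j), and
    take the argmax as the spark-centre estimate (x0, y0)."""
    th, tw = T_u.shape
    H, W = I_u.shape
    R = np.empty((H - th + 1, W - tw + 1))
    for x in range(R.shape[0]):
        for y in range(R.shape[1]):
            R[x, y] = np.sum(T_u * I_u[x:x + th, y:y + tw])
    x0, y0 = np.unravel_index(np.argmax(R), R.shape)
    return int(x0) + th // 2, int(y0) + tw // 2   # centre of best window

def normalize_to_center(I_v, center):
    """Translate the paired visible image so the spark centre lands at the
    image centre; environment pixels drift toward the periphery."""
    H, W = I_v.shape
    return np.roll(np.roll(I_v, H // 2 - center[0], axis=0),
                   W // 2 - center[1], axis=1)

I_u = np.zeros((32, 32))
I_u[20:23, 5:8] = 1.0            # synthetic 3x3 UV spark
T_u = np.ones((3, 3))            # toy template (the learned one is an image)
cx, cy = prelocate_spark(I_u, T_u)
I_v_norm = normalize_to_center(I_u.copy(), (cx, cy))
```

For this synthetic frame the maximum of R_u sits over the 3×3 blob, so the estimated centre is (21, 6) and the translated image carries the spark at its centre pixel.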
Step 3: Spark detection using a neural network based on the coordinate-normalized visible-light image
At each acquisition instant, a visible-light image and an ultraviolet image are collected according to step 1, the ultraviolet image is preprocessed, and the probability that a spark is present in it is judged. If the preprocessing result satisfies the thresholds of step 1, processing continues; otherwise the system waits for the next acquisition instant.
The spark in the ultraviolet image is pre-positioned according to step 2 to obtain the spark-centre coordinates of the ultraviolet image I_u, and the visible-light image is translated by these coordinates to obtain the coordinate-normalized visible-light image.
Spark recognition is performed on the coordinate normalized visible light image according to the following steps.
Firstly, a convolution feature extraction layer is established, and local feature extraction modeling is carried out on the visible light image.
Namely (formula 4; the formula itself appears only as an image in the original): each convolution kernel K_n is convolved with the coordinate-normalized visible-light image, where (i, j) is a relative coordinate within the convolution kernel and n is the kernel subscript. In the present invention n preferably takes the values 0-31, i.e. there are 32 convolution kernels K_0, K_1, …, K_31, corresponding to the 32 matrix results of the convolution feature-extraction layer, Θ_0, Θ_1, …, Θ_31. A linear offset is added, and the excitation function λ nonlinearizes the linear convolution to realize nonlinear sample classification.

The nonlinear activation function λ (formula 5, shown only as an image) is designed according to the data characteristics of the practical application, and a coefficient is introduced to adjust the convergence speed of the function, thereby improving the model's spark-recognition rate; the preferred coefficient value is likewise given only as an image.
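A minimal sketch of the convolution feature-extraction layer with 32 kernels follows. The patent's excitation function and its convergence coefficient appear only as images, so `lam` below substitutes an ordinary logistic sigmoid, and the random kernels are placeholders for the trained ones:

```python
import numpy as np

def lam(u, a=1.0):
    """Stand-in excitation function: the patent's activation and its
    convergence coefficient appear only as images, so an ordinary logistic
    sigmoid with slope a is used here."""
    return 1.0 / (1.0 + np.exp(-a * u))

def conv_features(I, kernels, bias=0.0):
    """Convolution feature-extraction layer (formula 4, sketched): each of
    the 32 kernels K_n slides over the coordinate-normalized visible image,
    producing feature maps Theta_0..Theta_31."""
    kh, kw = kernels.shape[1:]
    H, W = I.shape
    out = np.empty((len(kernels), H - kh + 1, W - kw + 1))
    for n, K in enumerate(kernels):
        for x in range(out.shape[1]):
            for y in range(out.shape[2]):
                out[n, x, y] = lam(np.sum(K * I[x:x + kh, y:y + kw]) + bias)
    return out

rng = np.random.default_rng(1)
kernels = rng.normal(0.0, 0.1, size=(32, 3, 3))   # 32 kernels, untrained
Theta = conv_features(rng.uniform(size=(16, 16)), kernels)
print(Theta.shape)   # (32, 14, 14)
```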
Secondly, on the basis of the convolution feature-extraction layer, a multi-scale pooling layer (formula 6, shown only as an image) is established to compress the convolution features and improve computational efficiency.

Unlike common convolution pooling, the multi-scale pooling layer established by the invention also pools between the convolution layers, further reducing the data volume and improving computational efficiency; pooling parameters κ₁, κ₂, κ₃, κ₄ are additionally introduced in the pooling between layers, improving pooling accuracy. β₁ is a linear offset, and λ is as defined above.
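Under one possible reading of formula 6 (the formula itself is an image), the multi-scale pooling combines 2×2 spatial pooling with a κ-weighted mix of each group of four channels, taking the 32 convolution maps Θ down to 8 pooled maps; the pooling scale and combination form below are assumptions:

```python
import numpy as np

def lam(u):
    return 1.0 / (1.0 + np.exp(-u))   # stand-in excitation function

def multiscale_pool(Theta, kappa, beta1=0.0):
    """Assumed reading of formula 6: 2x2 average pooling in space, then a
    kappa-weighted combination of each group of four channels (32 maps ->
    8 maps, m = 0..7), plus the offset beta1 and the excitation lambda."""
    C, H, W = Theta.shape
    sp = Theta[:, :H - H % 2, :W - W % 2]            # crop to even size
    sp = sp.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))
    groups = [sum(kappa[r] * sp[4 * m + r] for r in range(4))
              for m in range(C // 4)]
    return lam(np.stack(groups) + beta1)

Theta = np.random.default_rng(2).uniform(size=(32, 14, 14))
Psi = multiscale_pool(Theta, kappa=[0.4, 0.3, 0.2, 0.1])
print(Psi.shape)   # (8, 7, 7)
```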
Next, an orthogonal optimization layer is established on the basis of the multi-scale pooling layer (formula 7, shown only as an image), with mapping coefficients satisfying the orthogonality constraint of formula 8 (also shown only as an image). The coefficients represent the linear mapping from the 16 groups of spatial coordinates (x, y) of the three-dimensional pooling layer to the k-th element of the orthogonal optimization layer; the constraint of formula 8 gives the elements orthogonal character, reduces the correlation between parameters, and helps improve the model's detection capability.

A further parameter normalizes the data of the three-dimensional pooling layer, which contains 8 matrices (m = 32/4 = 8). A linear offset is added, and λ is as defined above.
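The orthogonality constraint of formula 8 can be illustrated by forcing the rows of the layer's weight matrix to be mutually orthonormal, obtained here via a QR decomposition of a random matrix. The patent's actual mapping coefficients are given only as images, so this is a structural sketch only:

```python
import numpy as np

def orthogonal_layer(Psi, k_out=16, seed=3):
    """Structural sketch of the orthogonal optimization layer: a linear map
    whose weight rows are mutually orthonormal (QR of a random matrix), as
    a stand-in for the constraint of formula 8."""
    v = Psi.reshape(-1)                    # flatten the 8 pooled maps
    A = np.random.default_rng(seed).normal(size=(v.size, k_out))
    Q, _ = np.linalg.qr(A)                 # Q has orthonormal columns
    W = Q.T                                # so W @ W.T == identity
    return W @ v, W

Psi = np.random.default_rng(2).uniform(size=(8, 7, 7))
psi, W = orthogonal_layer(Psi)
print(psi.shape)   # (16,)
```

Orthonormal rows mean each output element projects the pooled features onto a direction uncorrelated with the others, which is the decorrelation effect the patent attributes to formula 8.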
Finally, the output layer is defined as the probability of detecting a motor spark (formula 9, shown only as an image): a linear mapping takes the k-th element of the orthogonal optimization layer to the output layer, a linear offset is added, and λ is as defined above. The output value, denoted P here, indicates the probability of detecting a motor spark: P approaching 0 indicates no spark detected, and P approaching 1 indicates a spark detected.
The model of formulas 4-9 is trained with the back-propagation (BP) algorithm, determining the values of the convolution kernels in formula 4, the pooling parameters in formula 6, the linear-mapping parameters in formulas 7 and 9, and the other unknown parameters, completing the training.
Before training, a number of visible-light images containing sparks are prepared and, after the translation normalization of step 2, used as positive samples; visible-light images without sparks, after the same translation normalization, are used as negative samples. The samples are fed into the model for iterative computation; in each round the sample truth value is compared with the model output, and iteration continues until convergence, completing the training.
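The BP training described above can be sketched for the output stage alone as gradient descent with truth values 1 (positive) and 0 (negative). The stand-in feature vectors and learning rate are hypothetical, and a full implementation would also back-propagate into the convolution kernels, pooling parameters, and orthogonal mapping:

```python
import numpy as np

# Positive (spark) samples carry truth value 1, negative samples 0; the
# linear output stage is trained by gradient descent on the logistic loss,
# iterating until convergence.
rng = np.random.default_rng(4)
pos = rng.normal(2.0, 0.5, size=(40, 16))    # hypothetical spark features
neg = rng.normal(-2.0, 0.5, size=(40, 16))   # hypothetical no-spark features
X = np.vstack([pos, neg])
t = np.concatenate([np.ones(40), np.zeros(40)])   # sample truth values

w, b = np.zeros(16), 0.0
for _ in range(500):
    P = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # model output for this round
    grad = P - t                             # logistic-loss gradient w.r.t. logit
    w -= 0.1 * X.T @ grad / len(t)
    b -= 0.1 * grad.mean()

acc = float(((P > 0.8) == (t == 1)).mean())  # theta_3 = 0.8 decision rule
print(acc)
```

On these well-separated stand-in features the loop converges quickly and the 0.8 decision threshold classifies essentially every sample correctly.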
After training, the coordinate-normalized visible-light image obtained at the beginning of step 3 can be detected by this method: the model yields the probability estimate P, and if P exceeds a threshold, denoted θ₃, a spark is judged detected; otherwise no spark is judged detected. Preferably θ₃ = 0.8.
Step 4: Automatic identification and control of spark faults during motor operation based on image detection
Images are acquired and processed in real time according to steps 1-3, and the visible-light image of each acquired pair is detected with the model and method of step 3. Once a spark is detected, the motor is judged faulty, a stop signal is sent to the motor, and the motor is controlled to stop.
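The step-4 control flow, with hypothetical stand-ins for the cameras, the step-1 pre-detection, and the step-3 network, might be wired together like this:

```python
import numpy as np

# All three functions are hypothetical stand-ins showing the decision flow:
# acquire a UV/visible pair, pre-filter on the UV image, and send the stop
# signal once the network reports a spark.
def acquire_pair(k):
    spark = k >= 3                   # a spark "appears" on the 4th frame
    I = np.full((8, 8), 1.0 if spark else 0.0)
    return I, I                      # (I_u, I_v)

def uv_pre_detect(I_u):              # stand-in for the step-1 filter
    return I_u.sum() > 10

def network_prob(I_v_norm):          # stand-in for the step-3 output P
    return 0.95 if I_v_norm.max() > 0.5 else 0.05

stopped = False
for k in range(10):
    I_u, I_v = acquire_pair(k)
    if not uv_pre_detect(I_u):
        continue                     # wait for the next acquisition instant
    if network_prob(I_v) > 0.8:      # theta_3 = 0.8
        stopped = True               # stop signal to the motor controller
        break
print(stopped, k)                    # True 3
```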
The following table compares detection results obtained with the method of the present invention and with an existing CNN neural-network method. The detection effect is better because the image-preprocessing process of the invention is matched with the neural-network structure, both being specially designed for motor spark detection with the actual application environment in mind.
TABLE 1 (the table content appears only as an image in the original)
The method is implemented in the following system:
the collection equipment: including ultraviolet cameras and visible light cameras.
The pretreatment equipment comprises: comprises a processor connected with the acquisition equipment and is arranged on site.
A server: and storing the trained neural network model, receiving and judging the image sent by the preprocessing equipment, and sending a control signal to the controller.
A controller: for controlling the motor action.
It will be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications can be made, which are consistent with the principles of this invention, and which are directly determined or derived from the disclosure herein, without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.

Claims (10)

1. A method for automatically identifying and controlling faults during motor operation, characterized in that: spark images of the motor are collected to form a visible-light digital image I_v and an ultraviolet digital image I_u, each image being represented by a two-dimensional matrix;
Φ(p, q) is an element of the matrix Φ, and (p, q) is a coordinate in the matrix Φ; X denotes the total number of columns of the matrix I_u, i.e. the width of the image, and Y the total number of rows of I_u, i.e. the height of the image; a Y × Y square matrix Φ is defined, the elements of which are assigned to:
[formula shown only as an image in the original]
where % represents the remainder operation and χ is the matrix of 4×4 orthogonal bases:
[matrix shown only as an image in the original]
H_u is then defined as the product of the matrix Φ and the matrix I_u:
H_u = Φ × I_u … (1)
the number of pixels of H_u whose value exceeds a threshold is counted; if that number exceeds a second threshold, a spark is present in the image, otherwise it is not;
if a spark is present in the collected image, spark pre-positioning is performed on the ultraviolet digital image I_u:
R_u(x, y) = Σ_{i,j} T_u(i, j) · I_u(x + i, y + j)
where (i, j) represents a position coordinate in the template T_u, and (x + i, y + j) represents a convolution offset coordinate centred on the coordinate (x, y) in the original image;
the position (x₀, y₀) of the maximum of R_u is taken as the preset spark-centre point of the ultraviolet image I_u, and the initial coordinates of all visible-light images are normalized to the spark centre position;
spark detection is performed on the coordinate-normalized visible-light image with a neural network structured as: a convolution feature-extraction layer, a multi-scale pooling layer, an orthogonal optimization layer, and an output layer;
wherein the multi-scale pooling layer output is:
[formula shown only as an image in the original]
where κ₁, κ₂, κ₃, κ₄ are pooling parameters, β₁ is a linear offset, λ is an excitation function, and Θ_{4m}, Θ_{4m+1}, Θ_{4m+2}, Θ_{4m+3} are the convolution feature-extraction results of the previous layer.
2. The method of claim 1, wherein: and after the spark is detected, judging that the motor fails, sending a stop signal to the motor, and controlling the motor to stop.
3. The method of claim 1, wherein: the model is trained using a Back Propagation (BP) algorithm.
4. The method of claim 3, wherein: training is completed by determining, through training, the values of the convolution kernels, the pooling parameters and the linear mapping parameters.
5. The method of claim 3, wherein: before training, a plurality of visible light images containing sparks are prepared and, after translation normalization, used as positive samples.
6. The method of claim 3, wherein: a plurality of visible light images without sparks are, after translation normalization, used as negative samples.
7. The method of claim 6, wherein: the positive samples are given a sample truth value of 1, and the negative samples a sample truth value of 0.
8. The method of claim 7, wherein: the samples are input into the model for iterative computation; in each round the sample truth value is compared with the model output value, and iteration continues until convergence, which completes the training.
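Claims 5-8 describe the usual supervised BP loop: normalized positive samples (truth value 1) and negative samples (truth value 0) are fed to the model, the output is compared with the truth value each round, and parameters are updated until convergence. A minimal sketch, with a single logistic unit standing in for the full network and purely illustrative data and learning rate:

```python
import numpy as np

# Synthetic stand-ins for translation-normalized samples: 20 positive
# (spark) and 20 negative (no spark) feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 4)),     # positive samples
               rng.normal(-2.0, 0.5, (20, 4))])   # negative samples
y = np.array([1.0] * 20 + [0.0] * 20)             # sample truth values

w = np.zeros(4)
b = 0.0
for _ in range(500):                               # iterate toward convergence
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))         # model output per sample
    grad = p - y                                   # output vs. truth value
    w -= 0.1 * (X.T @ grad) / len(y)               # back-propagated update
    b -= 0.1 * grad.mean()
```

After convergence the model output agrees with the sample truth values on these well-separated clusters.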
9. The method of any one of claims 1-8, wherein: the method is carried out by a field processor and a server.
10. The method of claim 9, wherein: the field processor performs preprocessing of the acquired images, and the server detects and identifies the preprocessed images using the neural network and sends a control signal to the controller.
CN202211553078.7A 2022-12-06 2022-12-06 Automatic fault identification and control method for motor operation process Active CN115546224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553078.7A CN115546224B (en) 2022-12-06 2022-12-06 Automatic fault identification and control method for motor operation process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211553078.7A CN115546224B (en) 2022-12-06 2022-12-06 Automatic fault identification and control method for motor operation process

Publications (2)

Publication Number Publication Date
CN115546224A CN115546224A (en) 2022-12-30
CN115546224B true CN115546224B (en) 2023-04-07

Family

ID=84722497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553078.7A Active CN115546224B (en) 2022-12-06 2022-12-06 Automatic fault identification and control method for motor operation process

Country Status (1)

Country Link
CN (1) CN115546224B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396109A (en) * 2020-11-19 2021-02-23 天津大学 Motor bearing fault diagnosis method based on recursion graph and multilayer convolution neural network
CN115204234A (en) * 2022-07-22 2022-10-18 福州大学 Fault diagnosis method for wind driven generator

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI121443B (en) * 2009-02-06 2010-11-15 Alte Visetec Oy Method and arrangement for controlling sparking
CN110598655B (en) * 2019-09-18 2023-12-19 东莞德福得精密五金制品有限公司 Artificial intelligent cloud computing multispectral smoke high-temperature spark fire monitoring method
CN113049922B (en) * 2020-04-22 2022-11-15 青岛鼎信通讯股份有限公司 Fault arc signal detection method adopting convolutional neural network
CN113592849A (en) * 2021-08-11 2021-11-02 国网江西省电力有限公司电力科学研究院 External insulation equipment fault diagnosis method based on convolutional neural network and ultraviolet image
CN114062511A (en) * 2021-10-24 2022-02-18 北京化工大学 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396109A (en) * 2020-11-19 2021-02-23 天津大学 Motor bearing fault diagnosis method based on recursion graph and multilayer convolution neural network
CN115204234A (en) * 2022-07-22 2022-10-18 福州大学 Fault diagnosis method for wind driven generator

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Haibin; Bu Minglong; Liu Yuanyuan; Hao Huimin. Research on fault diagnosis of rotating machinery rotors based on SDP images and a VGG network. Journal of Mechanical & Electrical Engineering (机电工程). 2020, (09), full text. *

Also Published As

Publication number Publication date
CN115546224A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN108230237B (en) Multispectral image reconstruction method for electrical equipment online detection
CN111784633B (en) Insulator defect automatic detection algorithm for electric power inspection video
CN109086675B (en) Face recognition and attack detection method and device based on light field imaging technology
CN112634159B (en) Hyperspectral image denoising method based on blind noise estimation
CN112308873A (en) Edge detection method for multi-scale Gabor wavelet PCA fusion image
CN113420614A (en) Method for identifying mildewed peanuts by using near-infrared hyperspectral images based on deep learning algorithm
CN112288682A (en) Electric power equipment defect positioning method based on image registration
CN110751667A (en) Method for detecting infrared dim small target under complex background based on human visual system
CN113221805B (en) Method and device for acquiring image position of power equipment
CN115546224B (en) Automatic fault identification and control method for motor operation process
CN111882554B (en) SK-YOLOv 3-based intelligent power line fault detection method
CN113021355A (en) Agricultural robot operation method for predicting sheltered crop picking point
CN112102379A (en) Unmanned aerial vehicle multispectral image registration method
CN115035168B (en) Multi-constraint-based photovoltaic panel multi-source image registration method, device and system
CN111881922B (en) Insulator image recognition method and system based on salient features
CN116433528A (en) Image detail enhancement display method and system for target area detection
CN115761606A (en) Box electric energy meter identification method and device based on image processing
CN115731456A (en) Target detection method based on snapshot type spectrum polarization camera
CN114821187A (en) Image anomaly detection and positioning method and system based on discriminant learning
CN113723400A (en) Electrolytic cell polar plate fault identification method, system, terminal and readable storage medium based on infrared image
CN112686880A (en) Method for detecting abnormity of railway locomotive component
Wang et al. Research on sugarcane seed-bud location based on anisotropic scaling transformation
CN116930192B (en) High-precision copper pipe defect detection method and system
CN111723709B (en) Fly face recognition method based on deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant