CN117260297A - Full-automatic machining equipment


Info

Publication number: CN117260297A
Application number: CN202311153568.2A
Authority: CN (China)
Prior art keywords: feature, convolution, vibration, sound, neural network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 吴志华 (Wu Zhihua), 万光松 (Wan Guangsong), 王娟 (Wang Juan)
Current assignee: Anhui Jiadun Automation Equipment Co., Ltd.
Original assignee: Anhui Jiadun Automation Equipment Co., Ltd.
Application filed by Anhui Jiadun Automation Equipment Co., Ltd.
Publication of CN117260297A from application CN202311153568.2A
Classifications

  • B23Q1/0009: Energy-transferring means or control lines for movable machine parts; control panels or boxes; control parts
  • B23Q17/00: Arrangements for observing, indicating or measuring on machine tools
  • G06F18/2433: Single-class perspective, e.g. one-against-all classification; novelty detection; outlier detection
  • G06F18/253: Fusion techniques of extracted features
  • G06N3/0464: Convolutional networks [CNN, ConvNet]
  • G10L15/16: Speech classification or search using artificial neural networks

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Data Mining & Analysis
  • Physics & Mathematics
  • Evolutionary Computation
  • Artificial Intelligence
  • Life Sciences & Earth Sciences
  • General Physics & Mathematics
  • General Engineering & Computer Science
  • Health & Medical Sciences
  • Computational Linguistics
  • Computer Vision & Pattern Recognition
  • Bioinformatics & Computational Biology
  • Bioinformatics & Cheminformatics
  • Mechanical Engineering
  • Evolutionary Biology
  • General Health & Medical Sciences
  • Biophysics
  • Biomedical Technology
  • Molecular Biology
  • Computing Systems
  • Mathematical Physics
  • Software Systems
  • Audiology, Speech & Language Pathology
  • Human Computer Interaction
  • Acoustics & Sound
  • Multimedia
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves

Abstract

The present application relates to the field of intelligent monitoring, and specifically discloses fully automatic machining equipment. Adopting an artificial intelligence technique based on a deep neural network model, it acquires the vibration signal and sound signal collected by a vibration sensor and a sound sensor during operation of the equipment, obtains vibration and sound waveform feature vectors respectively through convolutional neural network models serving as feature extractors, and, after combining the two, performs feature extraction through a convolutional neural network containing a mixed convolution layer to obtain a classification result indicating whether the working state of the equipment is normal. In this way, abnormal conditions of the equipment can be discovered in time, and the reliability, machining quality, and production efficiency of the equipment are improved.

Description

Full-automatic machining equipment
Technical Field
The present application relates to the field of intelligent monitoring, and more particularly, to a fully automated machining apparatus.
Background
The quality of machining equipment directly affects the quality of the products it processes. If the working state of the equipment is abnormal, machining quality declines; prolonged abnormal operation damages the equipment and causes failures, the machining schedule slows, and in some cases parts may even fall off and cause personal injury, while the cost of maintaining the equipment rises accordingly. However, the prior art does not monitor machining equipment in real time during operation, so production efficiency is reduced and safety accidents occur.
Thus, an optimized fully automated machining monitoring scheme is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. Embodiments of the present application provide fully automatic machining equipment that adopts an artificial intelligence technique based on a deep neural network model: the vibration signal and sound signal collected by a vibration sensor and a sound sensor during operation of the equipment are acquired; vibration and sound waveform feature vectors are obtained respectively through convolutional neural network models serving as feature extractors; and, after the two are combined, feature extraction is performed through a convolutional neural network containing a mixed convolution layer to obtain a classification result indicating whether the working state of the equipment is normal. In this way, abnormal conditions of the equipment can be discovered in time, and the reliability, machining quality, and production efficiency of the equipment are improved.
According to one aspect of the present application, there is provided fully automatic machining equipment, comprising:
the signal acquisition module is used for acquiring the vibration signal and the sound signal collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment;
the vibration feature extraction module is used for passing the waveform diagram of the vibration signal through a first convolutional neural network model serving as a feature extractor to obtain a vibration waveform feature vector;
the sound feature extraction module is used for passing the waveform diagram of the sound signal through a second convolutional neural network model serving as a feature extractor to obtain a sound waveform feature vector;
the joint module is used for constructing a probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector to obtain a working state feature matrix;
the mixed convolution module is used for passing the working state feature matrix through a third convolutional neural network model containing a mixed convolution layer to obtain a classification feature vector;
and the result generation module is used for passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the working state of the fully automatic machining equipment is normal.
In the above fully automatic machining equipment, the vibration feature extraction module is configured so that each layer of the first convolutional neural network model serving as the feature extractor performs the following on its input data in its forward pass: convolve the input data to obtain a convolution feature map; pool each feature matrix of the convolution feature map along the channel dimension to obtain a pooled feature map; and apply a nonlinear activation to the pooled feature map to obtain an activation feature map. The output of the last layer of the first convolutional neural network model is the vibration waveform feature vector, and the input of its first layer is the waveform diagram of the vibration signal.
In the above fully automatic machining equipment, the sound feature extraction module is configured so that each layer of the second convolutional neural network model serving as the feature extractor performs the following on its input data in its forward pass: convolve the input data with a convolution kernel to generate a convolution feature map; apply global average pooling to each feature matrix of the convolution feature map along the channel dimension to generate a pooled feature map; and apply a nonlinear activation to the feature values at each position of the pooled feature map to generate an activation feature map. The output of the last layer of the second convolutional neural network model is the sound waveform feature vector, the input of each layer from the second onward is the output of the preceding layer, and the input of the first layer is the waveform diagram of the sound signal.
In the above fully automatic machining equipment, the joint module includes: a Gaussian normalization unit for performing Gaussian normalization on the vibration waveform feature vector and the sound waveform feature vector to obtain a normalized vibration waveform feature vector and a normalized sound waveform feature vector; a vibration probability density function calculation unit for calculating the probability density function values of the normalized vibration waveform feature vector to obtain a first feature probability density distribution; a sound probability density function calculation unit for calculating the probability density function values of the normalized sound waveform feature vector to obtain a second feature probability density distribution; and a probability density domain map construction unit for constructing a probability density domain map between the first feature probability density distribution and the second feature probability density distribution to obtain the working state feature matrix, wherein the feature value at each position of the working state feature matrix equals the product of the probability density function values at the corresponding pair of positions in the first and second feature probability density distributions.
In the above fully automatic machining equipment, the mixed convolution module is configured so that each mixed convolution layer of the third convolutional neural network model performs the following on its input data in its forward pass: perform multi-scale convolutional encoding on the input data to obtain a multi-scale convolution feature map; pool the multi-scale convolution feature map to obtain a pooled feature map; and apply an activation to the pooled feature map to obtain an activation feature map. The output of the last mixed convolution layer of the third convolutional neural network model is the classification feature vector.
In the above fully automatic machining equipment, the multi-scale convolutional encoding is configured to: convolve the input data with a first convolution kernel to obtain a first-scale feature map; convolve the input data with a second convolution kernel to obtain a second-scale feature map, wherein the second convolution kernel is a dilated (atrous) convolution kernel with a first dilation rate; convolve the input data with a third convolution kernel to obtain a third-scale feature map, wherein the third convolution kernel is a dilated convolution kernel with a second dilation rate; convolve the input data with a fourth convolution kernel to obtain a fourth-scale feature map, wherein the fourth convolution kernel is a dilated convolution kernel with a third dilation rate; and concatenate the first-scale, second-scale, third-scale, and fourth-scale feature maps to obtain the multi-scale convolution feature map.
In the above fully automatic machining equipment, the result generation module includes: a fully connected encoding unit for performing fully connected encoding on the classification feature vector using the fully connected layer of the classifier to obtain an encoded classification feature vector; and a classification unit for passing the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
Compared with the prior art, the fully automatic machining equipment provided by the present application adopts an artificial intelligence technique based on a deep neural network model: it acquires the vibration signal and sound signal collected by a vibration sensor and a sound sensor during operation of the equipment, obtains vibration and sound waveform feature vectors respectively through convolutional neural network models serving as feature extractors, and, after combining the two, performs feature extraction through a convolutional neural network containing a mixed convolution layer to obtain a classification result indicating whether the working state of the equipment is normal. In this way, abnormal conditions of the equipment can be discovered in time, and the reliability, machining quality, and production efficiency of the equipment are improved.
Drawings
The foregoing and other objects, features, and advantages of the present application will become more apparent from the following more detailed description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and do not constitute a limitation of it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a block diagram of a fully automated machining apparatus according to an embodiment of the present application.
Fig. 2 is a schematic diagram of the architecture of a fully automated machining apparatus according to an embodiment of the present application.
Fig. 3 is a block diagram of a joint module in a fully automated machining apparatus according to an embodiment of the present application.
Fig. 4 is a block diagram of a result generation module in a fully automated machining apparatus according to an embodiment of the present application.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, and not all, of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, the working state of machining equipment affects machining time and machining efficiency. Improper operation of the equipment may increase errors during machining, leaving the size, shape, or surface quality of the machined part unsatisfactory. Abnormal working states may increase wear or damage to components of the equipment, such as overheating of bearings or gear breakage, causing the equipment to operate improperly or stop entirely; machining then slows or yield drops, affecting the production schedule and delivery dates. Abnormal operation can also pose safety hazards: for example, unstable operation or falling parts may cause personal injury or equipment accidents. However, the prior art does not monitor machining equipment in real time during operation, so efficiency drops and dangerous accidents occur. Thus, an optimized fully automated machining monitoring scheme is desired.
In view of this technical problem, the applicant proposes to acquire the vibration signal and sound signal collected by the vibration sensor and sound sensor during operation of the fully automatic machining equipment, obtain vibration and sound waveform feature vectors respectively through convolutional neural network models serving as feature extractors, and, after combining the two, perform feature extraction through a convolutional neural network containing a mixed convolution layer to obtain a classification result indicating whether the working state of the equipment is normal.
Accordingly, the technical solution of the present application recognizes that vibration and sound signals carry important information about the working state of the equipment. In particular, by monitoring and analyzing vibration and sound signals, abnormal conditions in the operation of the equipment, such as bearing failure, gear imbalance, loosening, or wear, can be detected; this helps discover potential problems early and take corresponding repair and maintenance measures, avoiding equipment failures and extended downtime. In addition, vibration and sound signals provide clues to the type and location of an equipment fault. Different fault types often produce specific vibration and sound signatures, so by analyzing and diagnosing these signals, the specific cause and location of a fault can be determined, helping to resolve problems quickly and reduce maintenance time. Vibration and sound signals can also reflect quality problems during machining. For example, tool wear, workpiece misalignment, or cutting instability during machining produces characteristic variations in the vibration and sound signals. By monitoring and analyzing these signals, machining quality problems can be found in time and corresponding corrective measures taken to ensure that product quality meets requirements. Therefore, acquiring the vibration and sound signals collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment provides critical working state information for anomaly detection, fault diagnosis, and quality control, thereby improving the reliability, machining quality, and production efficiency of the equipment.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
Specifically, in the technical scheme of the present application, the vibration signal and the sound signal collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment are first acquired. The waveform of the vibration signal is time-series data that contains a great deal of detail and noise. Using the raw vibration signal directly for analysis and judgment risks interference from noise, making key features difficult to capture. A convolutional neural network, by contrast, can automatically extract discriminative features through the operation of its convolution and pooling layers, and thus better represents the characteristics of the vibration signal. Specifically, through multiple layers of convolution and pooling operations, a convolutional neural network extracts features at progressively different levels: lower convolution layers capture local patterns and detail features of the vibration signal, while higher convolution layers capture more abstract global features. Using a convolutional neural network, the features of the vibration signal can be represented hierarchically from low level to high level, better capturing its structure and characteristics. In addition, a convolutional neural network has adaptive feature learning capability and can automatically learn the most discriminative feature representation from the input vibration signal. Through training, the network adjusts its parameters to the specific vibration signal dataset, so that the extracted vibration waveform feature vector better reflects differences in the working state of the equipment and improves classification accuracy. Therefore, the waveform diagram of the vibration signal is passed through the first convolutional neural network model serving as a feature extractor, effectively extracting an abstract feature representation of the vibration signal to obtain the vibration waveform feature vector.
Next, the waveform of the sound signal is time-series data containing rich frequency and amplitude information, and the spectral features of a sound signal are important for its classification and recognition. A convolutional neural network can capture features at different frequencies through its convolution operations, better representing the spectral information of the sound signal. Through multiple layers of convolution and pooling operations, the network extracts features at progressively different levels: lower convolution layers capture local patterns and detail features of the sound signal, while higher convolution layers capture more abstract global features. Using a convolutional neural network, the features of the sound signal can be represented hierarchically from low level to high level, better capturing its structure and characteristics. Therefore, the waveform diagram of the sound signal is passed through the second convolutional neural network model serving as a feature extractor, effectively extracting an abstract feature representation of the sound signal to obtain the sound waveform feature vector.
Further, the vibration waveform feature vector and the sound waveform feature vector are jointly encoded to obtain a working state feature matrix. Vibration signals and sound signals are two signal types common in the operation of mechanical devices, and they represent different physical characteristics: the vibration signal reflects the structural vibrations and motion state of the equipment, while the sound signal reflects its acoustic characteristics and operating state. By jointly encoding the features of the two signals, richer and more comprehensive working state information can be obtained. In addition, the vibration signal and the sound signal are complementary in capturing the working state of the equipment. For example, for certain fault types the vibration signal more readily detects abnormal vibration patterns, while the sound signal more readily detects abnormal noise or sound patterns. Jointly encoding the features of the two signals makes full use of this complementarity and improves fault detection and diagnosis of the working state of the equipment. Therefore, the vibration waveform feature vector and the sound waveform feature vector are jointly encoded to fuse the features of the vibration and sound signals into a more comprehensive and more accurate working state feature matrix. Such a representation facilitates subsequent working state analysis, fault diagnosis, and predictive maintenance, thereby improving the reliability, machining quality, and production efficiency of the equipment.
Then, a mixed convolution layer is one that combines convolution kernels of different scales. Using a mixed convolution layer, features can be extracted at different scales, better capturing the multi-scale information of the working state features. Such multi-scale feature extraction represents the working state more completely, including both local detail and global overall features. In addition, by combining convolution kernels of different scales, the mixed convolution layer enhances the expressive power of the features: kernels of different scales capture features of different sizes and shapes, providing a richer and more discriminative feature representation. The mixed convolution layer can also reduce the dimensionality of the feature matrix through its convolution and pooling operations. Therefore, the working state feature matrix is passed through a third convolutional neural network model containing a mixed convolution layer, further extracting an abstract representation of the working state features to obtain the classification feature vector. Such a feature representation offers better multi-scale feature extraction, enhanced expressive power, and reduced dimensionality, facilitating the subsequent classification task and working state analysis.
Further, the classification feature vector is passed through a classifier to obtain a classification result indicating whether the working state of the fully automatic machining equipment is normal. Various faults or anomalies may occur during the operation of the equipment, such as component damage, tool wear, and material anomalies. By extracting the working state features as a classification feature vector and classifying it with a classifier, anomaly detection of the working state can be realized. The classifier learns the feature patterns of the normal working state and can identify abnormal patterns inconsistent with it; classifying with it automates the judgment of the working state, reducing the need for manual intervention and subjective judgment and improving the efficiency and accuracy of working state analysis. Therefore, classifying the classification feature vector with the classifier realizes automatic judgment and anomaly detection of the working state of the fully automatic machining equipment, so that the efficiency and accuracy of working state analysis improve and corresponding measures can be taken in time to ensure the normal operation and production efficiency of the equipment.
In particular, jointly encoding the vibration waveform feature vector and the sound waveform feature vector exploits the correlation and complementarity between them, yielding a richer and more accurate working state feature matrix. A straightforward approach would be to multiply the transpose of one feature vector by the other to obtain the working state feature matrix. However, the vibration waveform feature vector and the sound waveform feature vector typically have different dimensions because they capture different information; direct transposed multiplication then suffers from dimension mismatch and cannot yield an effective working state feature matrix. Moreover, simple transposed multiplication may lose information: vibration and sound are two distinct signal sources with possible nonlinear relationships and interactions between them, which direct transposed multiplication may fail to capture, leaving the working state features lacking in accuracy and characterization capability. Therefore, by constructing a probability density domain representation, the relationship and information between vibration and sound can be captured better, providing a working state feature matrix with stronger characterization power and comprehensiveness.
Specifically, constructing the probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector to obtain the working state feature matrix includes: performing Gaussian normalization on the vibration waveform feature vector and the sound waveform feature vector to obtain a normalized vibration waveform feature vector and a normalized sound waveform feature vector; calculating the probability density function values of the normalized vibration waveform feature vector to obtain a first feature probability density distribution; calculating the probability density function values of the normalized sound waveform feature vector to obtain a second feature probability density distribution; and constructing a probability density domain map between the first feature probability density distribution and the second feature probability density distribution to obtain the working state feature matrix, wherein the feature value at each position of the working state feature matrix equals the product of the probability density function values at the corresponding pair of positions in the first and second feature probability density distributions.
By constructing the probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector, the principal mode features of the two vectors can be effectively extracted, and the implicit intersection features across their modal domains can be effectively captured, thereby improving the manifold robustness of their fused feature representation.
Based on this, the present application provides fully automatic machining equipment comprising: a signal acquisition module for acquiring the vibration signal and the sound signal collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment; a vibration feature extraction module for passing the waveform diagram of the vibration signal through a first convolutional neural network model serving as a feature extractor to obtain a vibration waveform feature vector; a sound feature extraction module for passing the waveform diagram of the sound signal through a second convolutional neural network model serving as a feature extractor to obtain a sound waveform feature vector; a joint module for constructing a probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector to obtain a working state feature matrix; a mixed convolution module for passing the working state feature matrix through a third convolutional neural network model containing a mixed convolution layer to obtain a classification feature vector; and a result generation module for passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the working state of the fully automatic machining equipment is normal.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
Fig. 1 is a block diagram of a fully automated machining apparatus according to an embodiment of the present application. As shown in fig. 1, the fully automatic machining apparatus 100 according to the embodiment of the present application includes: a signal acquisition module 110 for acquiring the vibration signal and the sound signal collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment; a vibration feature extraction module 120 for passing the waveform diagram of the vibration signal through a first convolutional neural network model serving as a feature extractor to obtain a vibration waveform feature vector; a sound feature extraction module 130 for passing the waveform diagram of the sound signal through a second convolutional neural network model serving as a feature extractor to obtain a sound waveform feature vector; a joint module 140 for constructing a probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector to obtain a working state feature matrix; a mixed convolution module 150 for passing the working state feature matrix through a third convolutional neural network model containing a mixed convolution layer to obtain a classification feature vector; and a result generation module 160 for passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the working state of the fully automatic machining equipment is normal.
Fig. 2 is a schematic diagram of the architecture of a fully automated machining apparatus according to an embodiment of the present application. As shown in fig. 2, first, vibration signals and sound signals acquired by a vibration sensor and a sound sensor during the operation of the fully automatic machining apparatus are acquired. Then, the waveform diagram of the vibration signal is passed through a first convolutional neural network model as a feature extractor to obtain a vibration waveform feature vector. Then, the waveform diagram of the sound signal is passed through a second convolutional neural network model as a feature extractor to obtain a sound waveform feature vector. Next, a probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector is constructed to obtain an operating state feature matrix. And then, the working state feature matrix passes through a third convolutional neural network model containing a mixed convolutional layer to obtain a classification feature vector. And finally, the classification feature vector passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the working state of the full-automatic machining equipment is normal or not.
In the embodiment of the present application, the signal acquisition module 110 is configured to acquire the vibration signal and the sound signal collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment. Vibration and sound signals carry important information about the working state of the equipment. In particular, by monitoring and analyzing vibration and sound signals, abnormal conditions in the operation of the equipment, such as bearing failure, gear imbalance, loosening, or wear, can be detected; this helps discover potential problems early and take corresponding repair and maintenance measures, avoiding equipment failures and extended downtime. In addition, vibration and sound signals provide clues to the type and location of an equipment fault. Different fault types often produce specific vibration and sound signatures, so by analyzing and diagnosing these signals, the specific cause and location of a fault can be determined, helping to resolve problems quickly and reduce maintenance time. Vibration and sound signals can also reflect quality problems during machining. For example, tool wear, workpiece misalignment, or cutting instability during machining produces characteristic variations in the vibration and sound signals. By monitoring and analyzing these signals, machining quality problems can be found in time and corresponding corrective measures taken to ensure that product quality meets requirements. Therefore, acquiring the vibration and sound signals collected by the vibration sensor and the sound sensor during operation of the fully automatic machining equipment provides critical working state information for anomaly detection, fault diagnosis, and quality control, thereby improving the reliability, machining quality, and production efficiency of the equipment.
In this embodiment of the present application, the vibration feature extraction module 120 is configured to pass the waveform diagram of the vibration signal through a first convolutional neural network model serving as a feature extractor to obtain a vibration waveform feature vector. The waveform of the vibration signal is time-series data that contains a great deal of detail and noise. Using the raw vibration signal directly for analysis and judgment risks interference from noise, making key features difficult to capture. A convolutional neural network, by contrast, can automatically extract discriminative features through the operation of its convolution and pooling layers, and thus better represents the characteristics of the vibration signal. Specifically, through multiple layers of convolution and pooling operations, a convolutional neural network extracts features at progressively different levels: lower convolution layers capture local patterns and detail features of the vibration signal, while higher convolution layers capture more abstract global features. Using a convolutional neural network, the features of the vibration signal can be represented hierarchically from low level to high level, better capturing its structure and characteristics. In addition, a convolutional neural network has adaptive feature learning capability and can automatically learn the most discriminative feature representation from the input vibration signal. Through training, the network adjusts its parameters to the specific vibration signal dataset, so that the extracted vibration waveform feature vector better reflects differences in the working state of the equipment and improves classification accuracy. Therefore, the waveform diagram of the vibration signal is passed through the first convolutional neural network model serving as a feature extractor, effectively extracting an abstract feature representation of the vibration signal to obtain the vibration waveform feature vector.
Specifically, in an embodiment of the present application, the vibration feature extraction module is configured so that each layer of the first convolutional neural network model serving as the feature extractor performs the following on its input data in its forward pass: convolve the input data to obtain a convolution feature map; pool each feature matrix of the convolution feature map along the channel dimension to obtain a pooled feature map; and apply a nonlinear activation to the pooled feature map to obtain an activation feature map. The output of the last layer of the first convolutional neural network model is the vibration waveform feature vector, and the input of its first layer is the waveform diagram of the vibration signal.
In this embodiment, the sound feature extraction module 130 is configured to pass the waveform diagram of the sound signal through a second convolutional neural network model serving as a feature extractor to obtain a sound waveform feature vector. The waveform of the sound signal is time-series data containing rich frequency and amplitude information, and the spectral features of a sound signal are important for its classification and recognition. A convolutional neural network can capture features at different frequencies through its convolution operations, better representing the spectral information of the sound signal. Through multiple layers of convolution and pooling operations, the network extracts features at progressively different levels: lower convolution layers capture local patterns and detail features of the sound signal, while higher convolution layers capture more abstract global features. Using a convolutional neural network, the features of the sound signal can be represented hierarchically from low level to high level, better capturing its structure and characteristics. Therefore, the waveform diagram of the sound signal is passed through the second convolutional neural network model serving as a feature extractor, effectively extracting an abstract feature representation of the sound signal to obtain the sound waveform feature vector.
Specifically, in an embodiment of the present application, the sound feature extraction module is configured so that each layer of the second convolutional neural network model serving as the feature extractor performs the following on its input data in its forward pass: convolve the input data with a convolution kernel to generate a convolution feature map; apply global average pooling to each feature matrix of the convolution feature map along the channel dimension to generate a pooled feature map; and apply a nonlinear activation to the feature values at each position of the pooled feature map to generate an activation feature map. The output of the last layer of the second convolutional neural network model is the sound waveform feature vector, the input of each layer from the second onward is the output of the preceding layer, and the input of the first layer is the waveform diagram of the sound signal.
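To make this layer-wise pattern concrete, the following is a minimal PyTorch sketch of such a feature extractor, covering both the first (vibration) and second (sound) branches. The single-channel 224x224 rendering of the waveform diagram, the layer count, channel widths, kernel sizes, and the 128-dimensional output are illustrative assumptions, as is the use of spatial max pooling to stand in for the per-feature-matrix pooling of the translated description; the application fixes none of these.
```python
import torch
import torch.nn as nn

class WaveformFeatureExtractor(nn.Module):
    """Minimal sketch of the first/second CNN feature extractor.

    Assumptions (not fixed by the application): the waveform diagram is
    rendered as a 1x224x224 grayscale image; three conv layers; spatial
    max pooling stands in for the per-feature-matrix pooling described
    in the text; ReLU is the nonlinear activation.
    """

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution feature map
            nn.MaxPool2d(2),                             # pooled feature map
            nn.ReLU(),                                   # activation feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.MaxPool2d(2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(1),                     # one value per channel
            nn.ReLU(),
        )
        # project the flattened channel vector to the waveform feature vector
        self.proj = nn.Linear(64, out_dim)

    def forward(self, waveform_image: torch.Tensor) -> torch.Tensor:
        x = self.backbone(waveform_image)                # (B, 64, 1, 1)
        return self.proj(x.flatten(1))                   # (B, out_dim)

# usage (one extractor per branch, as in modules 120 and 130):
vib_extractor, snd_extractor = WaveformFeatureExtractor(), WaveformFeatureExtractor()
vib_vec = vib_extractor(torch.randn(1, 1, 224, 224))     # vibration waveform feature vector
snd_vec = snd_extractor(torch.randn(1, 1, 224, 224))     # sound waveform feature vector
```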
In this embodiment, the joint module 140 is configured to construct a probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector to obtain a working state feature matrix.
Further, jointly encoding the vibration waveform feature vector and the sound waveform feature vector exploits the correlation and complementarity between them, yielding a richer and more accurate working state feature matrix. A straightforward approach would be to multiply the transpose of one feature vector by the other to obtain the working state feature matrix. However, the vibration waveform feature vector and the sound waveform feature vector typically have different dimensions because they capture different information; direct transposed multiplication then suffers from dimension mismatch and cannot yield an effective working state feature matrix. Moreover, simple transposed multiplication may lose information: vibration and sound are two distinct signal sources with possible nonlinear relationships and interactions between them, which direct transposed multiplication may fail to capture, leaving the working state features lacking in accuracy and characterization capability. Therefore, by constructing a probability density domain representation, the relationship and information between vibration and sound can be captured better, providing a working state feature matrix with stronger characterization power and comprehensiveness.
Fig. 3 is a block diagram of a joint module in a fully automated machining apparatus according to an embodiment of the present application. Specifically, in the embodiment of the present application, as shown in fig. 3, the joint module 140 includes: a Gaussian normalization unit 141 for performing Gaussian normalization on the vibration waveform feature vector and the sound waveform feature vector to obtain a normalized vibration waveform feature vector and a normalized sound waveform feature vector; a vibration probability density function calculation unit 142 for calculating the probability density function values of the normalized vibration waveform feature vector to obtain a first feature probability density distribution; a sound probability density function calculation unit 143 for calculating the probability density function values of the normalized sound waveform feature vector to obtain a second feature probability density distribution; and a probability density domain map construction unit 144 for constructing a probability density domain map between the first feature probability density distribution and the second feature probability density distribution to obtain the working state feature matrix, wherein the feature value at each position of the working state feature matrix equals the product of the probability density function values at the corresponding pair of positions in the first and second feature probability density distributions.
By constructing the probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector, the principal mode features of the two vectors can be effectively extracted, and the implicit intersection features across their modal domains can be effectively captured, thereby improving the manifold robustness of their fused feature representation.
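This construction can be sketched numerically as follows. The application does not specify which probability density function is evaluated after Gaussian normalization; the sketch assumes a standard normal density N(0, 1), under which the working state feature matrix becomes the outer product of the two per-position density vectors.
```python
import torch

def probability_density_domain(vib_vec: torch.Tensor,
                               snd_vec: torch.Tensor) -> torch.Tensor:
    """Sketch of the joint module 140: Gaussian normalization,
    per-position probability density values, and a position-wise
    product matrix.

    Assumption: densities are evaluated under a standard normal
    N(0, 1); the application does not specify the density function.
    Shapes: vib_vec (N,), snd_vec (M,) -> working state matrix (N, M).
    """
    def gaussian_normalize(v: torch.Tensor) -> torch.Tensor:
        return (v - v.mean()) / (v.std() + 1e-8)

    def std_normal_pdf(v: torch.Tensor) -> torch.Tensor:
        return torch.exp(-0.5 * v ** 2) / (2.0 * torch.pi) ** 0.5

    p_vib = std_normal_pdf(gaussian_normalize(vib_vec))  # first feature probability density distribution
    p_snd = std_normal_pdf(gaussian_normalize(snd_vec))  # second feature probability density distribution
    # entry (i, j) is the product of the two probability density values
    return torch.outer(p_vib, p_snd)

state_matrix = probability_density_domain(torch.randn(128), torch.randn(128))  # (128, 128)
```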
In this embodiment, the mixed convolution module 150 is configured to pass the working state feature matrix through a third convolutional neural network model containing a mixed convolution layer to obtain a classification feature vector. A mixed convolution layer is one that combines convolution kernels of different scales. Using a mixed convolution layer, features can be extracted at different scales, better capturing the multi-scale information of the working state features. Such multi-scale feature extraction represents the working state more completely, including both local detail and global overall features. In addition, by combining convolution kernels of different scales, the mixed convolution layer enhances the expressive power of the features: kernels of different scales capture features of different sizes and shapes, providing a richer and more discriminative feature representation. The mixed convolution layer can also reduce the dimensionality of the feature matrix through its convolution and pooling operations. Therefore, the working state feature matrix is passed through a third convolutional neural network model containing a mixed convolution layer, further extracting an abstract representation of the working state features to obtain the classification feature vector. Such a feature representation offers better multi-scale feature extraction, enhanced expressive power, and reduced dimensionality, facilitating the subsequent classification task and working state analysis.
Specifically, in the embodiment of the present application, the mixed convolution module is configured so that each mixed convolution layer of the third convolutional neural network model performs the following on its input data in its forward pass: perform multi-scale convolutional encoding on the input data to obtain a multi-scale convolution feature map; pool the multi-scale convolution feature map to obtain a pooled feature map; and apply an activation to the pooled feature map to obtain an activation feature map. The output of the last mixed convolution layer of the third convolutional neural network model is the classification feature vector.
More specifically, in embodiments of the present application, the multi-scale convolutional encoding is configured to: convolve the input data with a first convolution kernel to obtain a first-scale feature map; convolve the input data with a second convolution kernel to obtain a second-scale feature map, wherein the second convolution kernel is a dilated (atrous) convolution kernel with a first dilation rate; convolve the input data with a third convolution kernel to obtain a third-scale feature map, wherein the third convolution kernel is a dilated convolution kernel with a second dilation rate; convolve the input data with a fourth convolution kernel to obtain a fourth-scale feature map, wherein the fourth convolution kernel is a dilated convolution kernel with a third dilation rate; and concatenate the first-scale, second-scale, third-scale, and fourth-scale feature maps to obtain the multi-scale convolution feature map.
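A minimal PyTorch sketch of one such mixed convolution layer follows. The 3x3 kernel size, the per-branch channel width, and the dilation rates 2, 3, and 4 standing in for the unspecified first, second, and third dilation rates are assumptions for illustration; padding each branch by its dilation rate keeps the four feature maps spatially aligned so they can be concatenated along the channel dimension.
```python
import torch
import torch.nn as nn

class MixedConvLayer(nn.Module):
    """Sketch of one mixed convolution layer of the third CNN model:
    four parallel 3x3 branches (one ordinary convolution plus three
    dilated convolutions), concatenated along the channel dimension,
    then pooled and activated.

    Assumption: dilation rates 2, 3, and 4 stand in for the unspecified
    first, second, and third dilation rates; widths are illustrative.
    """

    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()

        def branch(dilation: int) -> nn.Conv2d:
            # padding == dilation makes a 3x3 kernel size-preserving
            return nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                             padding=dilation, dilation=dilation)

        self.branches = nn.ModuleList(branch(d) for d in (1, 2, 3, 4))
        self.pool = nn.MaxPool2d(2)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scales = [b(x) for b in self.branches]   # first..fourth scale feature maps
        multi = torch.cat(scales, dim=1)         # concatenated multi-scale feature map
        return self.act(self.pool(multi))        # pooled, then activated

# usage: the working state feature matrix enters as a 1-channel "image"
y = MixedConvLayer(in_ch=1)(torch.randn(1, 1, 128, 128))  # -> (1, 64, 64, 64)
```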
In this embodiment of the present application, the result generation module 160 is configured to pass the classification feature vector through a classifier to obtain a classification result indicating whether the working state of the fully automatic machining equipment is normal. Various faults or anomalies may occur during the operation of the equipment, such as component damage, tool wear, and material anomalies. By extracting the working state features as a classification feature vector and classifying it with a classifier, anomaly detection of the working state can be realized. The classifier learns the feature patterns of the normal working state and can identify abnormal patterns inconsistent with it; classifying with it automates the judgment of the working state, reducing the need for manual intervention and subjective judgment and improving the efficiency and accuracy of working state analysis. Therefore, classifying the classification feature vector with the classifier realizes automatic judgment and anomaly detection of the working state of the fully automatic machining equipment, so that the efficiency and accuracy of working state analysis improve and corresponding measures can be taken in time to ensure the normal operation and production efficiency of the equipment.
Fig. 4 is a block diagram of a result generation module in a fully automated machining apparatus according to an embodiment of the present application. Specifically, in the embodiment of the present application, as shown in fig. 4, the result generation module 160 includes: a fully connected encoding unit 161 for performing fully connected encoding on the classification feature vector using the fully connected layer of the classifier to obtain an encoded classification feature vector; and a classification unit 162 for passing the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
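As a minimal sketch of this two-stage head, assuming a 256-dimensional classification feature vector and two states (normal/abnormal), neither of which is fixed by the application:
```python
import torch
import torch.nn as nn

class StateClassifier(nn.Module):
    """Sketch of the result generation module 160: a fully connected
    layer encodes the classification feature vector, and Softmax turns
    the encoding into probabilities over {normal, abnormal}.

    Assumption: a 256-dimensional classification feature vector and two
    classes; the application fixes neither.
    """

    def __init__(self, feat_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)  # fully connected encoding unit 161
        self.softmax = nn.Softmax(dim=-1)           # classification unit 162

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.softmax(self.fc(feat))          # class probabilities

probs = StateClassifier()(torch.randn(1, 256))
is_normal = probs.argmax(dim=-1).item() == 0        # index 0 assumed to mean "normal"
```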
In summary, the fully automatic machining apparatus 100 according to the embodiment of the present application has been illustrated. It adopts an artificial intelligence technique based on a deep neural network model: it acquires the vibration signal and sound signal collected by a vibration sensor and a sound sensor during operation of the equipment, obtains vibration and sound waveform feature vectors respectively through convolutional neural network models serving as feature extractors, and, after combining the two, performs feature extraction through a convolutional neural network containing a mixed convolution layer to obtain a classification result indicating whether the working state of the equipment is normal. In this way, abnormal conditions of the equipment can be discovered in time, and the reliability, machining quality, and production efficiency of the equipment are improved.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 5. Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the fully automatic machining equipment of the various embodiments of the application described above and/or other desired functions. Various contents, such as the vibration signals and sound signals collected during the operation of the fully automatic machining equipment, may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information including the classification result and the like to the outside. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 5; components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the systems and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the functions of the fully automatic machining equipment according to various embodiments of the present application described in the "exemplary systems" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the functions of the fully automatic machining equipment according to various embodiments of the present application described in the "exemplary systems" section of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments. However, it should be noted that the advantages, benefits, and effects mentioned in the present application are merely examples and not limitations; these advantages, benefits, and effects are not to be considered as necessarily possessed by every embodiment of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not limited to these details.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", and "having" are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It is also noted that in the apparatus, devices, and methods of the present application, components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be considered equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (7)

1. A fully automatic machining apparatus, comprising:
the signal acquisition module is used for acquiring the vibration signal and the sound signal collected by a vibration sensor and a sound sensor during the operation of the fully automatic machining equipment;
the vibration feature extraction module is used for passing the waveform diagram of the vibration signal through a first convolutional neural network model serving as a feature extractor to obtain a vibration waveform feature vector;
the sound feature extraction module is used for passing the waveform diagram of the sound signal through a second convolutional neural network model serving as a feature extractor to obtain a sound waveform feature vector;
the joint module is used for constructing a probability density domain representation of the vibration waveform feature vector and the sound waveform feature vector to obtain a working state feature matrix;
the hybrid convolution module is used for passing the working state feature matrix through a third convolutional neural network model containing a hybrid convolutional layer to obtain a classification feature vector;
and the result generation module is used for passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the working state of the fully automatic machining equipment is normal.
2. The fully automatic machining apparatus according to claim 1, wherein the vibration feature extraction module is configured to:
each layer of the first convolutional neural network model serving as the feature extractor performs the following steps on the input data in its forward pass:
performing convolution processing on the input data to obtain a convolution feature map;
pooling each feature matrix of the convolution feature map along the channel dimension to obtain a pooled feature map;
performing non-linear activation on the pooled feature map to obtain an activation feature map;
wherein the output of the last layer of the first convolutional neural network model serving as the feature extractor is the vibration waveform feature vector, and the input of the first layer of the first convolutional neural network model serving as the feature extractor is the waveform diagram of the vibration signal.
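By way of illustration, one such feature extractor layer might look as follows; the pooling type, kernel size, and channel counts are assumptions where the claim leaves them open, and the same conv-pool-activate pattern applies to the sound feature extractor of claim 3 (which names global average pooling explicitly).

import torch
import torch.nn as nn

layer = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution processing
    nn.MaxPool2d(kernel_size=2),                # pooling of each feature matrix
    nn.ReLU(),                                  # non-linear activation
)
waveform_diagram = torch.randn(1, 1, 64, 64)    # e.g. a vibration waveform diagram
activation_map = layer(waveform_diagram)        # shape: (1, 8, 32, 32)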
3. The fully automatic machining apparatus according to claim 2, wherein the sound feature extraction module is configured to:
each layer of the second convolutional neural network model serving as the feature extractor performs the following steps on the input data in its forward pass:
performing convolution processing on the input data based on a convolution kernel to generate a convolution feature map;
performing global average pooling on each feature matrix of the convolution feature map along the channel dimension to generate a pooled feature map;
performing non-linear activation on the feature values of all positions in the pooled feature map to generate an activation feature map;
wherein the output of the last layer of the second convolutional neural network model serving as the feature extractor is the sound waveform feature vector, the input of each layer from the second layer to the last layer is the output of the preceding layer, and the input of the first layer of the second convolutional neural network model serving as the feature extractor is the waveform diagram of the sound signal.
4. A fully automatic machining apparatus according to claim 3, wherein the joint module comprises:
the Gaussian normalization unit is used for performing Gaussian normalization on the vibration waveform feature vector and the sound waveform feature vector to obtain a normalized vibration waveform feature vector and a normalized sound waveform feature vector;
the vibration probability density function calculation unit is used for calculating probability density function values of the normalized vibration waveform feature vector to obtain a first feature probability density distribution;
the sound probability density function calculation unit is used for calculating probability density function values of the normalized sound waveform feature vector to obtain a second feature probability density distribution;
and the probability density domain diagram construction unit is used for constructing a probability density domain diagram between the first feature probability density distribution and the second feature probability density distribution to obtain the working state feature matrix, wherein the feature value of each position in the working state feature matrix is equal to the product of the probability density function values at the two corresponding positions in the first feature probability density distribution and the second feature probability density distribution.
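A minimal sketch of this joint module follows. The use of a standard normal density is an assumption for the example; the claim fixes only that probability density function values of the two normalized vectors are computed and multiplied position-wise to form the working state feature matrix.

import torch

def gaussian_normalize(v: torch.Tensor) -> torch.Tensor:
    # Zero-mean, unit-variance (Gaussian) normalization of a feature vector.
    return (v - v.mean()) / (v.std() + 1e-8)

def standard_normal_pdf(v: torch.Tensor) -> torch.Tensor:
    # Standard normal probability density, evaluated element-wise.
    return torch.exp(-0.5 * v ** 2) / (2.0 * torch.pi) ** 0.5

vib = gaussian_normalize(torch.randn(128))  # normalized vibration waveform feature vector
snd = gaussian_normalize(torch.randn(128))  # normalized sound waveform feature vector

p_vib = standard_normal_pdf(vib)  # first feature probability density distribution
p_snd = standard_normal_pdf(snd)  # second feature probability density distribution

# Entry (i, j) is the product of the two density values, as recited above.
working_state_matrix = torch.outer(p_vib, p_snd)  # shape: (128, 128)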
5. The fully automatic machining apparatus of claim 4, wherein the hybrid convolution module is configured such that:
each hybrid convolutional layer of the third convolutional neural network model containing hybrid convolutional layers performs the following processing on the input data in its forward pass:
performing multi-scale convolutional encoding on the input data to obtain a multi-scale convolution feature map;
pooling the multi-scale convolution feature map to obtain a pooled feature map;
performing activation processing on the pooled feature map to obtain an activation feature map;
wherein the output of the last hybrid convolutional layer of the third convolutional neural network model containing hybrid convolutional layers is the classification feature vector.
6. The fully automatic machining apparatus of claim 5, wherein the multi-scale convolutional encoding is used to:
perform convolution processing on the input data based on a first convolution kernel to obtain a first scale feature map;
perform convolution processing on the input data based on a second convolution kernel to obtain a second scale feature map, wherein the second convolution kernel is a dilated convolution kernel with a first dilation rate;
perform convolution processing on the input data based on a third convolution kernel to obtain a third scale feature map, wherein the third convolution kernel is a dilated convolution kernel with a second dilation rate;
perform convolution processing on the input data based on a fourth convolution kernel to obtain a fourth scale feature map, wherein the fourth convolution kernel is a dilated convolution kernel with a third dilation rate;
and concatenate the first scale feature map, the second scale feature map, the third scale feature map, and the fourth scale feature map to obtain the multi-scale convolution feature map.
7. The fully automatic machining apparatus according to claim 6, wherein the result generation module includes:
the full-connection encoding unit is used for performing full-connection encoding on the classification feature vector by using the fully connected layer of the classifier to obtain an encoded classification feature vector;
and the classification unit is used for passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
CN202311153568.2A 2023-09-08 Full-automatic machining equipment (Pending) CN117260297A (en)

Priority Applications (1)

Application Number: CN202311153568.2A
Priority Date / Filing Date: 2023-09-08
Title: Full-automatic machining equipment

Publications (1)

Publication Number: CN117260297A
Publication Date: 2023-12-22

Family ID: 89215168

Country Status (1)

Country: CN, Publication: CN117260297A (en)


Legal Events

Code: PB01
Title: Publication