WO2016125256A1 - Abnormal sound diagnosis device, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program - Google Patents
- Publication number
- WO2016125256A1 (application PCT/JP2015/052991, JP2015052991W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- intensity
- vector
- trajectory
- time series
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0259—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the response to fault detection
- G05B23/0275—Fault isolation and identification, e.g. classify fault; estimate cause or root of failure
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0218—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
- G05B23/0224—Process history based detection method, e.g. whereby history implies the availability of large amounts of data
- G05B23/0227—Qualitative history assessment, whereby the type of data acted upon, e.g. waveforms, images or patterns, is not relevant, e.g. rule based assessment; if-then decisions
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01H—MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
- G01H13/00—Measuring resonant frequency
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0218—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
- G05B23/0221—Preprocessing measurements, e.g. data collection rate adjustment; Standardization of measurements; Time series or signal analysis, e.g. frequency analysis or wavelets; Trustworthiness of measurements; Indexes therefor; Measurements using easily measured parameters to estimate parameters difficult to measure; Virtual sensor creation; De-noising; Sensor fusion; Unconventional preprocessing inherently present in specific fault detection methods like PCA-based methods
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
- G10L19/038—Vector quantisation, e.g. TwinVQ audio
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/37—Measurements
- G05B2219/37337—Noise, acoustic emission, sound
Definitions
- The present invention relates to an abnormal sound diagnosis apparatus that analyzes the sound generated from a device to be diagnosed and determines whether an abnormal sound has occurred and, if so, the type of abnormal sound, as well as to a corresponding abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program, none of which require sound collected while the device operates normally.
- Conventionally, an abnormal sound diagnosis device is known that stores, as a reference value, the analysis result of sound data collected while the device to be diagnosed is operating normally, and diagnoses that an abnormality has occurred in the device when the analysis result of sound data collected at the time of diagnosis deviates from that reference value.
- the abnormal sound detection device disclosed in Patent Document 1 detects and stores the frequency band of the sound collected when the elevator is operating normally, and collects the sound during diagnostic operation. The presence or absence of an abnormal sound is diagnosed by excluding the sound in the frequency band stored from the sound.
- The abnormal sound diagnosis apparatus disclosed in Patent Document 2 acquires a normal-time time-frequency distribution serving as a reference, compares it with the diagnosis-time time-frequency distribution acquired in the diagnosis mode to calculate a degree of abnormality, and determines whether an abnormality has occurred by comparing the calculated degree of abnormality with a threshold value.
- When the sound during normal operation cannot be collected and a reference for diagnosis cannot be created, it is in principle possible to create the reference by collecting the sound during normal operation of another device with the same specifications.
- In practice, however, the specifications include the location of the sound collector, the size of the parts that make up the equipment, and the installation conditions of the equipment; if the equipment is an elevator, for example, they include the hoistway size, hoistway material, car loading capacity, and operating speed. Preparing equipment with identical specifications is not practical from a cost standpoint, so there is a problem that it is difficult to create an appropriate reference in this way.
- The present invention has been made to solve the above problems, and its purpose is to diagnose the operating state of a device to be diagnosed without requiring sound to be collected in advance during normal operation of that device.
- An abnormal sound diagnosis apparatus according to the present invention includes: a sound collection unit that collects the sound generated by the diagnosis target device and acquires sound data; an intensity time series acquisition unit that acquires an intensity time series from the time-frequency distribution obtained by analyzing the waveform data of the sound data; a trajectory feature extraction unit that extracts a trajectory vector by converting into a vector the trajectory indicating intensity features over the entire time direction of the intensity time series; an identification parameter storage unit that stores identification parameters learned using, as input, trajectory vectors indicating intensity features over the entire time direction of intensity time series obtained from the time-frequency distributions of sound data generated by reference devices; an identification unit that acquires a score for each state type of the diagnosis target device from the trajectory vector and the identification parameters; and a determination unit that determines, with reference to the scores, whether the sound generated by the diagnosis target device is normal or abnormal and, if abnormal, the type of abnormality.
- According to the present invention, it is possible to diagnose the presence or absence of abnormal sound even for a device for which sound during normal operation cannot be collected in advance to create a reference for diagnosis.
- FIG. 1 is a block diagram illustrating the configuration of the abnormal sound diagnosis apparatus according to Embodiment 1.
- FIG. 2 is a diagram showing the configuration of the identification unit of the abnormal sound diagnosis apparatus according to Embodiment 1.
- FIG. 3 is a block diagram illustrating the configuration of the identification parameter learning device according to Embodiment 1.
- FIG. 4 is a diagram showing an example of data accumulated in the database of the identification parameter learning device according to Embodiment 1.
- FIGS. 5 and 6 are flowcharts showing the operation of the abnormal sound diagnosis apparatus according to Embodiment 1.
- FIG. 13 is a flowchart showing the operation of the abnormal sound diagnosis apparatus according to Embodiment 2.
- Embodiment 1. The abnormal sound diagnosis apparatus according to Embodiment 1 diagnoses the sound generated from a device to be diagnosed (for example, an elevator), determines whether the generated sound is a normal sound or an abnormal sound, and, when the sound is abnormal, determines the type of abnormality.
- the device to be diagnosed is a device composed of a plurality of operating parts such as an elevator, for example, and by attaching sound collecting means for collecting the generated sound in the elevator car or outside the car. The sound generated when the car is reciprocated is collected, and the working sound of the working parts is diagnosed by determining whether the collected sound is normal or abnormal.
- the abnormal sound diagnosis apparatus of the present invention can be applied to devices other than elevators.
- the abnormal sound diagnosis apparatus is implemented as software on a personal computer (hereinafter referred to as a PC)
- the PC includes a USB terminal and a LAN terminal.
- a microphone is connected to the USB terminal via an audio interface circuit, and a diagnosis target device is connected to the LAN terminal via a LAN cable.
- the device to be diagnosed is configured to perform a predetermined driving operation according to a control instruction output from the PC.
- the abnormal sound diagnosis apparatus 100 is not limited to being implemented as software, and can be changed as appropriate.
- FIG. 1 is a block diagram illustrating a configuration of an abnormal sound diagnosis apparatus 100 according to the first embodiment.
- FIG. 1A is a diagram illustrating the functional blocks of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
- The apparatus comprises a sound collection unit 1, a waveform acquisition unit 2, a time-frequency analysis unit 3, an intensity time series acquisition unit 4, a trajectory feature extraction unit 5, an identification parameter storage unit 6, an identification unit 7, and a determination unit 8.
- the sound collection unit 1 is configured by a sound collector such as a microphone, for example, collects sound generated from the diagnosis target device, and outputs sound data 11 in synchronization with the operation of the diagnosis target device.
- the sound collection unit 1 is arranged in the passenger car or outside the passenger car.
- the waveform acquisition unit 2 includes, for example, an amplifier and an A / D converter, samples the waveform of the sound data 11 collected by the sound collection unit 1, and outputs the waveform data 12 converted into a digital signal.
- The time-frequency analysis unit 3 multiplies the waveform data 12 output from the waveform acquisition unit 2 by a time window and performs a fast Fourier transform (hereinafter referred to as FFT) while shifting the time window in the time direction, thereby performing a time-frequency analysis of the waveform data 12 to obtain a time-frequency distribution 13.
- the intensity time series acquisition unit 4 obtains an intensity time series 14 indicating the intensity with respect to time and frequency from the time frequency distribution 13 output from the time frequency analysis unit 3.
- the trajectory feature extraction unit 5 smoothes the intensity time series 14 output from the intensity time series acquisition unit 4 in the time direction, and extracts a trajectory vector 15 over the entire time axis.
- The identification parameter storage unit 6 is a storage area for identification parameters learned in advance: an identification parameter for identifying whether the operating state of the device is normal or abnormal and, when the operating state is abnormal, an identification parameter for identifying the type of abnormality. Details of the learning of the identification parameter 16 stored in the identification parameter storage unit 6 will be described later.
- the identification unit 7 collates the identification parameter 16 stored in the identification parameter storage unit 6 with the trajectory vector 15 extracted by the trajectory feature extraction unit 5, and acquires scores for a plurality of abnormal types. It is assumed that K types of abnormalities such as a normal operating state and an abnormal operating state at a specific location are set as the abnormal type.
- the score for the K types of abnormal types is hereinafter referred to as a K-dimensional score vector 17.
- the determination unit 8 determines whether the operation state of the device is normal or abnormal based on the K-dimensional score vector 17 of the identification unit 7, and if abnormal, also determines the type of abnormality, The result 18 is output.
- FIG. 1B is a block diagram illustrating a hardware configuration of the abnormal sound diagnosis apparatus 100 according to the first embodiment, and includes a processor 100a and a memory 100b.
- The sound collection unit 1, the waveform acquisition unit 2, the time-frequency analysis unit 3, the intensity time series acquisition unit 4, the trajectory feature extraction unit 5, the identification unit 7, and the determination unit 8 are realized by the processor 100a executing a program stored in the memory 100b. The identification parameter storage unit 6 is assumed to be stored in the memory 100b.
- FIG. 2 is an explanatory diagram showing the configuration of the identification unit 7 of the abnormal sound diagnosis apparatus 100 according to the first embodiment, and shows the configuration of the neural network in the identification unit 7.
- The neural network shown in the example of FIG. 2 has a hierarchical structure consisting of one input layer 71 and two hidden layers, a first hidden layer 72 and a second hidden layer 73.
- The input layer 71, the first hidden layer 72, and the second hidden layer 73 each comprise units that simulate the function of synapses in a neural circuit. There are no connections between units within a layer; only units in adjacent layers are connected, so this type of network is known to achieve stable, good performance with the learning method known in the machine learning field as deep learning.
- The last hidden layer also serves as the output layer; in the example of FIG. 2, the second hidden layer 73 serves as the output layer.
- The number M of hidden layers may be any integer of one or more (M ≥ 1).
- the input layer 71 has the same number of units as the number of dimensions (for example, L ⁇ B) of the trajectory vector 15 input from the trajectory feature extraction unit 5.
- the second hidden layer 73 that is, the output layer has K number of nonlinear units equal to the number K of abnormal types.
- the number of hidden layer units excluding the output layer is set to a predetermined number in view of the discrimination performance of the neural network.
- Taking the input layer as the 0th layer and letting U(m) (m = 0, 1, 2, ..., M) be the number of units in the m-th layer, the numbers of units are constrained as in the following equation (1):
- U(0) = L × B
- U(m) = any natural number (m = 1, 2, ..., M−1)   (1)
- U(M) = K
- In equation (1), U(m) denotes the number of units in the m-th layer.
- the load and bias necessary for calculating the response of the hidden layer are supplied from the identification parameter 16 stored in the identification parameter storage unit 6.
- the load and bias supplied to the mth hidden layer are w (i, j, m-1) and c (j, m-1), respectively.
- FIG. 3A is a diagram illustrating functional blocks of the identification parameter learning apparatus 200 according to the first embodiment.
- The identification parameter learning apparatus 200 comprises a sound data generation unit 21, a sound database 22, a waveform acquisition unit 23, a time-frequency analysis unit 24, an intensity time series acquisition unit 25, a trajectory feature extraction unit 26, a teacher vector creation unit 27, and an identification learning unit 28.
- the sound data generation unit 21 collects sound data using a plurality of devices with different specifications and operations as reference devices, or generates sound data by computer simulation. In the example of the first embodiment, a plurality of elevators having different specifications and operations are reference devices.
- the sound database 22 stores sound data 22a and abnormality type data 22b.
- The sound data 22a consists of sound data generated by the sound data generation unit 21 and sound data obtained by superimposing abnormal sounds on that generated sound data.
- The abnormality type data 22b records the abnormality type of the device associated with each item of sound data 22a; specifically, it accumulates a label indicating whether the operating state of the device is normal or abnormal and, when the operating state is abnormal, a label indicating the type of abnormality.
- An example of the sound data 22a and the abnormality type data 22b stored in the sound database 22 is shown in FIG. 4.
- As shown in FIG. 4, the sound data 22a consists of a "serial number", an "individual name", and a "sound data file name", and the abnormality type data 22b consists of an "abnormality type C(v)" corresponding to the "serial number".
- As examples of the abnormality type C(v), types such as "normal", "top abnormality", and "middle floor abnormality" are associated with the sound data, and K abnormality types are set in total, including "normal".
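- To make the layout of FIG. 4 concrete, the following is a minimal sketch (in Python) of how one record of the sound database 22 could be represented. The field names, label strings, and file names are illustrative assumptions, not values taken from the patent.

```python
from dataclasses import dataclass

# Illustrative abnormality-type labels; the patent names only "normal",
# "top abnormality" and "middle floor abnormality" as examples (K types in total).
ABNORMALITY_TYPES = ["normal", "top abnormality", "middle floor abnormality"]

@dataclass
class SoundRecord:
    serial_number: int        # "serial number" linking sound data 22a and type data 22b
    individual_name: str      # name of the reference elevator the sound came from
    sound_file: str           # "sound data file name" (a WAV file or similar)
    abnormality_type: int     # C(v): index into ABNORMALITY_TYPES

# A toy database mirroring the layout of FIG. 4 (values are made up).
sound_database = [
    SoundRecord(0, "elevator_A", "a_normal_001.wav", 0),
    SoundRecord(1, "elevator_A", "a_top_001.wav", 1),
    SoundRecord(2, "elevator_B", "b_midfloor_001.wav", 2),
]
```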
- the waveform acquisition unit 23 samples the waveform of the sound data 22a accumulated in the sound database 22, and outputs the waveform data 31 converted into a digital signal.
- The time-frequency analysis unit 24, the intensity time series acquisition unit (parameter intensity time series acquisition unit) 25, and the trajectory feature extraction unit (parameter trajectory feature extraction unit) 26 perform the same operations as the time-frequency analysis unit 3, the intensity time series acquisition unit 4, and the trajectory feature extraction unit 5 of the abnormal sound diagnosis apparatus 100 of FIG. 1, and output a time-frequency distribution 32, an intensity time series 33, and a trajectory vector 34, respectively.
- the teacher vector creation unit 27 creates a teacher vector 35 using the abnormality type data 22b accumulated in the sound database 22.
- the identification learning unit 28 creates learning data for learning the neural network.
- the learning data of a neural network generally consists of input data and output data expected to be output from the neural network when the input data is given.
- the input data is a trajectory vector 34 input from the trajectory feature extraction unit 26, and the output data is a teacher vector 35 input from the teacher vector creation unit 27.
- the input data is V trajectory vectors 34 and the output data is V teacher vectors 35.
- Let K be the number of abnormality types, y(k, v) the k-th element of the v-th teacher vector, and C(v) the abnormality type of the v-th sound data. Each of the V teacher vectors 35 created by the teacher vector creation unit 27 is then the vector whose C(v)-th element is 1 and whose other elements are 0, as given by the following equation (3).
- The identification learning unit 28 learns the neural network using the trajectory vectors 34 as input data and the teacher vectors 35 as output data, and stores the loads and biases obtained as a result of the learning in the identification parameter storage unit 6 as the identification parameter 36.
- The loads and biases constituting the identification parameter 36 correspond to the load w(i, j, m−1) and the bias c(j, m−1) used when calculating the responses of the first hidden layer 72 and the second hidden layer 73 of the identification unit 7 described above.
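- A minimal sketch of the teacher-vector construction of equation (3) and of how the learning pairs could be assembled. The array sizes and the random stand-in data are assumptions for illustration; any neural-network trainer could then be applied to (X, Y), and the resulting loads and biases would be stored as identification parameter 36.

```python
import numpy as np

def teacher_vector(c_v: int, K: int) -> np.ndarray:
    """Equation (3): one-hot vector whose C(v)-th element is 1 and all others 0."""
    y = np.zeros(K)
    y[c_v] = 1.0
    return y

# V training pairs: trajectory vectors (input) and teacher vectors (output).
L, B, K, V = 64, 5, 3, 200                            # placeholder dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(V, L * B))                       # stand-in for trajectory vectors 34
C = rng.integers(0, K, size=V)                        # stand-in for abnormality types C(v)
Y = np.stack([teacher_vector(c, K) for c in C])       # teacher vectors 35
# X and Y would be fed to a neural-network trainer (e.g. backpropagation); the learned
# weights w(i, j, m-1) and biases c(j, m-1) become the identification parameter 36.
```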
- FIG. 3B is a block diagram showing a hardware configuration of the identification parameter learning apparatus 200 according to Embodiment 1, and includes a processor 200a and a memory 200b.
- Each unit of the identification parameter learning apparatus 200 is realized by the processor 200a executing a program stored in the memory 200b.
- the sound database 22 is stored in the memory 200b.
- FIGS. 5 and 6 are flowcharts showing the operation of the abnormal sound diagnosis apparatus 100 according to Embodiment 1.
- FIG. 5 shows the operation of the sound collection unit 1 and the waveform acquisition unit 2.
- FIG. 6 shows the operation of each component from the time-frequency analysis unit 3 onward.
- a device to be diagnosed by the abnormal sound diagnosis apparatus 100 is simply referred to as a device.
- the abnormal sound diagnosis apparatus 100 detects the start of operation of the device (step ST1)
- the sound collecting unit 1 collects sound generated from the device (step ST2).
- The waveform acquisition unit 2 acquires and amplifies the sound data 11 collected in step ST2 and samples the sound waveform by A/D conversion (step ST3), converting it into waveform data of a 16-bit linear PCM (pulse code modulation) digital signal with a sampling frequency of, for example, 48 kHz (step ST4).
- the abnormal sound diagnosis apparatus 100 determines whether or not the operation of the device is finished (step ST5). If the operation of the device has not ended (step ST5; NO), the process returns to step ST2 and the above-described process is repeated. On the other hand, when the operation of the device is completed (step ST5; YES), the waveform acquisition unit 2 connects the waveform data acquired in step ST4 and outputs it as a series of waveform data 12 (step ST6). This completes the sound collection and waveform data acquisition process. Next, proceeding to the flowchart of FIG. 6, an abnormal sound diagnosis process using the acquired waveform data 12 is performed.
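- The following sketch mimics steps ST2 to ST6 with synthetic data: chunks of "collected" sound are quantised to 16-bit linear PCM at 48 kHz and concatenated into one continuous series of waveform data. Real capture hardware and the device start/stop detection are outside the scope of this sketch.

```python
import numpy as np

FS = 48_000  # sampling frequency assumed in step ST4 (48 kHz, 16-bit linear PCM)

def to_16bit_pcm(analog_chunk: np.ndarray) -> np.ndarray:
    """Quantise a float chunk in [-1, 1] to 16-bit linear PCM (step ST4)."""
    return np.clip(np.round(analog_chunk * 32767.0), -32768, 32767).astype(np.int16)

chunks = []
# Stand-in for the ST2-ST5 loop: while the device is running, keep collecting chunks.
for _ in range(10):                                   # pretend the run lasts 10 chunks
    analog = 0.01 * np.random.default_rng().normal(size=FS // 10)   # 100 ms of "sound"
    chunks.append(to_16bit_pcm(analog))
waveform_data_12 = np.concatenate(chunks)             # step ST6: one continuous series
```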
- the time-frequency analysis unit 3 acquires the waveform data 12 output from the waveform acquisition unit 2, and extracts a frame from the waveform data 12 while shifting a time window of, for example, 1024 points in the time direction at intervals of 16 milliseconds. Then, a time frequency distribution g (t, f), which is a frequency spectrum series, is obtained by FFT calculation for each frame to obtain a time frequency distribution 13 (step ST11).
- t is a time index corresponding to the shift interval for shifting the time window
- f is an index indicating the frequency of the result of the FFT operation.
- the time t and the frequency f are integers that satisfy 0 ⁇ t ⁇ T and 0 ⁇ f ⁇ F, respectively.
- T is the number of frames in the time direction of the time-frequency distribution 13, and F is the index corresponding to the Nyquist frequency, which is half the sampling frequency fs of the waveform data 12 (F = fs/2).
- The intensity time series acquisition unit 4 takes, in the time-frequency distribution 13 obtained in step ST11, bands of one octave width centered on, for example, the five frequencies 0.5 kHz, 1 kHz, 2 kHz, 4 kHz, and 8 kHz.
- It obtains the sum of the frequency components contained in each of the five frequency bands, yielding the intensity time series 14 of each band (step ST12).
- G (t, b) is given by the following equation (4).
- ⁇ (b) represents a set of frequencies f for which the sum is obtained in the time frequency distribution g (t, f) with respect to the band b.
- the trajectory feature extraction unit 5 smoothes the intensity time series 14 in the time direction for each band (step ST13), obtains a smoothing intensity at a point that equally divides the entire time axis into L, and creates an L-dimensional intensity vector ( Step ST14).
- the intensity time series 14 is smoothed in the time direction in five bands.
- the intensity of the created L-dimensional intensity vector is normalized (step ST15), and the L-dimensional intensity vector of each normalized band is connected to create an L ⁇ B-dimensional trajectory vector 15 (step ST16).
- In equation (5), smooth_t(x(t)) is a function that outputs a new time series obtained by smoothing the series x(t) in the direction of the subscript t.
- In equation (6), τ(l) is a real-valued function representing the interpolation position with respect to the subscript t in G~(t, b), and w(l) is a function giving the weighting coefficients for interpolation; they are given by equations (7) and (8).
- Int (x) in Expression (8) is a function for obtaining the integer part of the argument x.
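- A sketch of steps ST13 to ST16: the smoothing function smooth_t() of equation (5) is realised here as a moving average and the normalisation of step ST15 as peak normalisation (both concrete choices are assumptions, since the patent leaves them open), while the L-point resampling of equations (6) to (8) uses linear interpolation, as described for the trajectory vector.

```python
import numpy as np

def trajectory_vector(G: np.ndarray, L: int = 64, smooth_len: int = 9) -> np.ndarray:
    """Steps ST13-ST16: smooth each band in time, resample to L points by linear
    interpolation (equations (6)-(8)), normalise, and concatenate the B bands."""
    T, B = G.shape
    pieces = []
    for b in range(B):
        # Step ST13 / eq (5): smooth_t() -- a moving average is one plausible choice.
        kernel = np.ones(smooth_len) / smooth_len
        g_smooth = np.convolve(G[:, b], kernel, mode="same")
        # Step ST14 / eqs (6)-(8): smoothed intensity at L points that equally divide
        # the whole time axis, obtained here with linear interpolation.
        tau = np.linspace(0, T - 1, L)
        h = np.interp(tau, np.arange(T), g_smooth)
        # Step ST15: normalisation (exact scheme not spelled out; peak normalisation
        # is used here as an assumption).
        pieces.append(h / (np.max(h) + 1e-12))
    return np.concatenate(pieces)        # step ST16: L x B dimensional trajectory vector 15

rho = trajectory_vector(np.abs(np.random.default_rng(2).normal(size=(300, 5))))
print(rho.shape)                          # (L * B,) = (320,)
```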
- In the generation of the K-dimensional score vector 17, the identification unit 7 inputs the trajectory vector 15 from the trajectory feature extraction unit 5 to the input layer 71 of the neural network and, using the identification parameters stored in the identification parameter storage unit 6, computes the activations up to the output units to generate the K-dimensional score vector 17 (step ST17).
- the processing in step ST17 will be described with reference to a specific configuration example of the identification unit 7 in FIG.
- the i-th element in the trajectory vector 15 is copied to the i-th unit of the input layer. If the value of the i-th unit in the input layer is x (i, 0), x (i, 0) is given by the following equation (10).
- x(i, 0) = ρ(i)   (10)
- ⁇ (i) represents the value of the i-th element of the trajectory vector 15.
- the output of each unit is calculated in order from the first hidden layer 72 to the second hidden layer 73.
- the output of each unit is obtained by applying a load to the output from all units in the previous layer to obtain the sum, subtracting the bias, and performing non-linear conversion using a sigmoid function.
- x (j, m) is calculated by the following equation (11).
- ⁇ (x) is a sigmoid function having a nonlinear input / output characteristic indicating a soft threshold characteristic, and is given by the following Expression (12).
- In equation (11), x(i, 0) is required when m = 1; as shown in equation (10) above, it is equal to the i-th element ρ(i) of the trajectory vector 15.
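- The forward pass of equations (10) to (12) can be written compactly as below; the softmax normalisation follows equation (14) of the full description, and the final argmax corresponds to equation (15) described next. The layer sizes and random parameters are placeholders, not learned values.

```python
import numpy as np

def sigmoid(x):
    """Equation (12): soft-threshold nonlinearity sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def identify(rho, weights, biases):
    """Equations (10)-(12): copy the trajectory vector into the input layer, then
    propagate layer by layer; weights[m-1][i, j] is w(i, j, m-1) and biases[m-1][j]
    is c(j, m-1).  Returns the K-dimensional score vector 17, normalised with the
    softmax of equation (14)."""
    x = np.asarray(rho)                      # eq (10): x(i, 0) = rho(i)
    for W, c in zip(weights, biases):
        x = sigmoid(x @ W - c)               # eq (11): weighted sum minus bias, then sigmoid
    o = x                                    # eq (13): outputs of the last hidden layer
    return np.exp(o) / np.exp(o).sum()       # eq (14): softmax, scores sum to 1

# Toy shapes: L*B = 320 inputs, one hidden layer of 32 units, K = 3 output units.
rng = np.random.default_rng(3)
weights = [rng.normal(size=(320, 32)), rng.normal(size=(32, 3))]
biases = [rng.normal(size=32), rng.normal(size=3)]
score = identify(rng.normal(size=320), weights, biases)
k_star = int(np.argmax(score))               # eq (15): most likely abnormality type
```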
- Returning to the flowchart of FIG. 6, the determination unit 8 compares the elements of the K-dimensional score vector 17 generated in step ST17, determines the most likely abnormality type from the index of the largest element (step ST18), outputs the determination result (step ST19), and ends the processing. If the most likely abnormality type is k*, k* is given by the following equation (15). Although a configuration that outputs the single element with the largest score of the K-dimensional score vector 17 is shown here, a plurality of elements may be output together with their scores.
- FIG. 7 is a diagram illustrating an example of an abnormality type and a K-dimensional score vector referred to by the determination unit 8 of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
- K-dimensional score vectors are associated with K “abnormality types”, respectively.
- the K-dimensional score vector becomes “1” when all the values of the K score vectors constituting the K-dimensional score vector are added.
- In the example of FIG. 7, the score of the abnormality type "top abnormality" takes the maximum value of 0.64, so the determination unit 8 determines that the most likely abnormality type is "top abnormality".
- FIG. 8 is an explanatory diagram showing the effect of abnormal sound diagnosis performed by the abnormal sound diagnosis apparatus 100 according to the first embodiment.
- FIG. 9 shows a result of abnormal sound diagnosis by a conventional abnormal sound diagnosis apparatus.
- In the conventional abnormal sound diagnosis apparatus, the traveling section 301 of the car 300 is divided, and the signal intensity of the sound generated during normal operation in each divided section is stored as a reference value.
- In the example of FIG. 9A, the traveling section is divided into six, and the first reference value, the second reference value, ..., the sixth reference value are acquired and stored.
- An abnormality was detected in each section by comparing the stored reference values with the intensity time series of the sound data acquired at the time of diagnosis. However, since the signal strength of the normal sound in each section differs depending on the usage and operating environment of each elevator, a reference value acquired for one elevator either cannot be applied to the abnormal sound diagnosis of a different elevator or, even if it can be applied, lowers the accuracy of the abnormal sound diagnosis. For this reason, the conventional abnormal sound diagnosis apparatus must perform a learning operation and store reference values in advance for each elevator.
- FIG. 9B shows the result of comparing a reference value created for one elevator with the sound signal intensity at the time of diagnosis of another elevator. No matter how the reference value 305 is set, there is the problem that the normal operating state and the abnormal operating state of the device cannot be clearly separated on the basis of the sound signal intensity at the time of diagnosis.
- the effect of the abnormal sound diagnosis performed by the abnormal sound diagnosis apparatus 100 according to Embodiment 1 will be described with reference to FIG.
- In the abnormal sound diagnosis apparatus 100 according to the first embodiment, as shown in FIG. 8A, the sound generated while the car 300 reciprocates between the lowermost floor and the uppermost floor is collected, the obtained sound data is subjected to time-frequency analysis to obtain an intensity time series, and the trajectory over the entire length in the time direction of the intensity time series is converted into a vector as a whole to extract a trajectory vector.
- In the example of FIG. 8A, for simplicity of explanation, there are two abnormality types, "normal" and "abnormal", a single band is used (B = 1), and L × 1-dimensional trajectory vectors 306 and 307 are extracted. The trajectory vector 306 is a vector for the abnormality type "1: abnormal", and the trajectory vector 307 is a vector for the abnormality type "0: normal".
- FIG. 8B shows the result of plotting the positions in space of the trajectory vectors 306 and 307 when they are input to the identification unit 7.
- In FIG. 8B, a first feature axis (principal axis) and a second feature axis (an axis orthogonal to the principal axis) are obtained, for example, by principal component analysis applied to the set of vectors of normal individuals and abnormal individuals, and the arrangement of each vector in the L × 1-dimensional space is displayed in the plane spanned by these feature axes.
- the principal component analysis is a process for displaying the mutual positional relationship of vectors in a multidimensional space, and is not a process constituting the present invention.
- the first feature axis and the second feature axis are not calculated by the configuration of the present invention, but are described to show that the trajectory vectors are classified in space.
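- As the text notes, principal component analysis is used only to visualise how the trajectory vectors separate, not as part of the diagnosis itself. A minimal numpy-only sketch of such a 2-D projection, with made-up vectors, is:

```python
import numpy as np

def pca_2d(vectors: np.ndarray) -> np.ndarray:
    """Project trajectory vectors onto the first two principal axes, purely to
    visualise how normal and abnormal vectors separate (as in FIG. 8B)."""
    centred = vectors - vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T                 # first and second feature axes

rng = np.random.default_rng(4)
normal_vecs = rng.normal(loc=0.0, size=(30, 64))          # made-up "normal" vectors
abnormal_vecs = rng.normal(loc=1.0, size=(30, 64))         # made-up "abnormal" vectors
coords = pca_2d(np.vstack([normal_vecs, abnormal_vecs]))   # plot coords[:, 0] vs coords[:, 1]
```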
- As shown in the plot of FIG. 8B, the group 308 of vectors indicating that the device is normal and the group 309 of vectors indicating that the device is abnormal form separate clusters, and based on the abnormality type of each trajectory vector and its position in the space, a hyperplane (here a straight line) orthogonal to the straight line connecting the center of gravity of the group 308 and the center of gravity of the group 309 is obtained as the boundary 310.
- Although FIG. 8B shows an example in which a straight line is obtained as the boundary 310, in actual diagnosis processing a curved boundary with a complicated shape is assumed to be obtained. In this way, general features that appear in the intensity time series regardless of elevator specifications and operating environment can be captured, so there is no need to learn reference values for each individual in advance, and a diagnosis that is robust to differences in elevator specifications and operating environments can be made.
- As described above, according to Embodiment 1, the apparatus includes the sound collection unit 1 that collects the sound generated from the device, the waveform acquisition unit 2 that acquires waveform data obtained by sampling and converting the waveform of the collected sound data, the time-frequency analysis unit 3 that performs time-frequency analysis of the acquired waveform data, the intensity time series acquisition unit 4 that obtains from the time-frequency distribution an intensity time series indicating intensity with respect to time and frequency, the trajectory feature extraction unit 5 that smooths the acquired intensity time series in the time direction and extracts a trajectory vector over the entire time axis, and the identification parameter storage unit 6 that stores identification parameters learned using such trajectory vectors as input data and abnormality types as output data; therefore, the presence or absence of abnormal sound can be diagnosed even for a device for which sound during normal operation cannot be collected in advance.
- the sound collecting unit 1 is configured by one sound collector and arranged in a device to be diagnosed.
- the sound collecting unit 1 includes a plurality of sound collectors. And may be arranged at a plurality of locations of the diagnosis target device.
- multi-channel sound collection is performed simultaneously with the operation of the diagnosis target device, and multi-channel sound data 11 is obtained.
- the waveform acquisition unit 2, the time frequency analysis unit 3, and the intensity time series acquisition unit 4 acquire the waveform data 12, the time frequency distribution 13, and the intensity time series 14 for the multichannel signals, respectively.
- the trajectory feature extraction unit 5 acquires a multi-channel intensity vector from the multi-channel intensity time series 14 input from the intensity time-series acquisition unit 4. Further, the intensity vectors of the respective channels are connected in the time axis direction.
- FIG. 10 is an explanatory diagram illustrating connection of multi-channel intensity vectors in the trajectory feature extraction unit 5 of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
- FIG. 10 shows the case where the intensity vectors of three channels are connected: the first channel vector 15a, the second channel vector 15b, and the third channel vector 15c are connected in the time axis direction to generate an L × B × 3-dimensional trajectory vector 15 (the "× 3" arises from connecting the intensity vectors of the three channels). Since connections spanning the channels exist in the intermediate layers of the neural network, the synchronicity between channels can be learned.
- In the preceding description the number of dimensions of the trajectory vector was L × B; here it is read as L × B × 3. By using sound data collected by a plurality of sound collectors in this way, the separation in the identification space between vectors of different abnormality types is improved and the diagnosis accuracy can be raised.
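- Concatenating the per-channel trajectory vectors, as in FIG. 10, is a one-line operation; the placeholder arrays below stand in for the L × B-dimensional vectors of each channel.

```python
import numpy as np

# Trajectory vectors of the three channels (each L x B dimensional), as in FIG. 10;
# the random arrays here are placeholders only.
ch1, ch2, ch3 = (np.random.default_rng(c).random(320) for c in range(3))
multi_channel_trajectory = np.concatenate([ch1, ch2, ch3])   # L x B x 3 dimensional vector 15
```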
- FIG. 11 is an explanatory diagram showing an effect when an abnormal sound diagnosis is performed based on a trajectory vector obtained by connecting multi-channel intensity vectors.
- intensity time series 311, 312, and 313 indicate intensity time series obtained in the first frequency band, the second frequency band, and the third frequency band, respectively.
- The vectors obtained from the intensity time series 311, 312, and 313 are connected in the time axis direction and shown as the L × 1 × 3-dimensional trajectory vector 314 and trajectory vector 315.
- The trajectory vector 314 is a vector for the abnormality type "1: abnormal", and the trajectory vector 315 is a vector for the abnormality type "0: normal". FIG. 11B plots the positions in space of the trajectory vectors 314 and 315 when they are input to the identification unit 7, and a result equivalent to that shown in FIG. 8B is obtained.
- Embodiment 2. In the first embodiment, the case where the identification unit 7 is configured as a neural network was described; in the second embodiment, the case where a support vector machine (hereinafter referred to as SVM) is applied as the identification unit is described. Since the overall configuration of the abnormal sound diagnosis apparatus 100 of the second embodiment is the same as that of the first embodiment, the block diagram is omitted and only the identification unit, whose configuration differs, is described in detail below.
- FIG. 12 is a diagram illustrating a configuration of the identification unit 7a of the abnormal sound diagnosis apparatus 100 according to the second embodiment.
- The identification unit 7a has (K−1)K/2 SVMs in total, where K is the number of abnormality types.
- Each SVM is trained to classify and discriminate between the vectors of any two of the K abnormality types, including normal. Each SVM has as parameters the number n of support vectors, the n support vectors xi (i = 0, 1, 2, ..., n−1), the n coefficients αi (i = 0, 1, 2, ..., n−1), a bias b, and the kernel function definition k(x1, x2) described later.
- Hereinafter, the SVM that discriminates between normal or abnormality type i and abnormality type j (where i < j) is written SVM[i, j] (0 ≤ i < j < K).
- FIG. 13 is a flowchart showing the operation of the abnormal sound diagnosis apparatus according to the second embodiment.
- the same steps as those in the abnormal sound diagnosis apparatus according to the first embodiment are denoted by the same reference numerals as those used in FIG. 6, and the description thereof is omitted or simplified.
- the operations of the sound collection unit 1 and the waveform acquisition unit 2 are the same as those in the flowchart shown in FIG.
- The identification unit 7a inputs the trajectory vector 15 to each SVM and, using the identification parameters stored in the identification parameter storage unit 6, calculates the output value y(ρ) of the discriminant function of each SVM based on the following equation (16) (step ST21).
- In equation (16), k(x1, x2) is the kernel function: the inner product ⟨Φ(x1), Φ(x2)⟩ between the mapping Φ(x1) of the vector x1 into a higher-dimensional space and the mapping Φ(x2) of the vector x2 into that space (Φ(x) is a nonlinear function of the vector x that cannot be written as an explicit expression).
- As the kernel function, for example, a Gaussian kernel expressed by the following equation (17) can be used, where σ is the Gaussian kernel parameter.
- The identification unit 7a calculates the classification output of each class from the output values of the discriminant functions of the SVMs calculated in step ST21, obtains the score vector values s(k) indicating the score of each of the K abnormality types, and outputs the calculated score vector values s(k) to the determination unit 8 as the K-dimensional score vector 17 (step ST22).
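- A sketch of the pairwise-SVM scoring of steps ST21 and ST22 under stated assumptions: equation (16) is implemented as the usual kernel decision function, equation (17) as a Gaussian kernel (its exact scaling by σ is assumed), and the per-class score is obtained by simple voting, which is one plausible reading of "calculates the classification output of each class". The SVM parameters below are made up for illustration.

```python
import numpy as np

def gauss_kernel(x1, x2, sigma=1.0):
    """Equation (17): Gaussian kernel (the exact scaling by sigma is an assumption)."""
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))

def svm_output(rho, support_vectors, alphas, bias, sigma=1.0):
    """Equation (16): y(rho) = sum_i alpha_i * k(x_i, rho) + b for one SVM[i, j]."""
    return sum(a * gauss_kernel(sv, rho, sigma)
               for a, sv in zip(alphas, support_vectors)) + bias

def pairwise_scores(rho, svms, K):
    """Step ST22 as a voting scheme: SVM[i, j] votes for class i if its output is
    positive, otherwise for class j; votes are normalised into the score vector 17."""
    votes = np.zeros(K)
    for (i, j), (sv, al, b) in svms.items():
        votes[i if svm_output(rho, sv, al, b) > 0 else j] += 1
    return votes / votes.sum()

# Toy model: K = 3 classes, hence (K-1)K/2 = 3 pairwise SVMs with made-up parameters.
rng = np.random.default_rng(5)
svms = {(i, j): (rng.normal(size=(4, 320)), rng.normal(size=4), 0.0)
        for i in range(3) for j in range(i + 1, 3)}
score_vector = pairwise_scores(rng.normal(size=320), svms, K=3)
```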
- The determination unit 8 compares the elements of the K-dimensional score vector 17 generated in step ST22, determines the most likely abnormality type from the index of the largest element (step ST18), outputs the determination result (step ST19), and ends the processing.
- As described above, according to Embodiment 2 as well, the trajectory feature extraction unit 5 creates the trajectory vector 15 over the entire length in the time direction of the intensity time series 14, so generalized features that appear in the intensity time series and do not depend on device specifications or operating environments can be captured. It is therefore unnecessary to learn diagnosis criteria for each individual, robust diagnosis is possible even with respect to differences in device specifications and operating environments, and an abnormal sound diagnosis apparatus in which the drop in diagnostic accuracy caused by differences between devices is suppressed can be provided.
- In the above description, the trajectory vector 15 output from the trajectory feature extraction unit 5 expresses the trajectory features over the entire length in the time direction of the intensity time series 14 as an L-dimensional vector by linear interpolation; however, other conversions may be used.
- a trajectory of the intensity time series 14 over the entire length in the time direction may be Fourier-transformed to form an L-dimensional vector from low-order Fourier coefficients.
- a compressed feature may be output as an L-dimensional vector by principal component analysis.
- The above-mentioned conversion without loss means that the vector indicating the features over the entire length in the time direction of the intensity time series 14 is used as the feature as it is, without further processing.
- A conversion that allows loss is a process that reduces the number of dimensions, for example by multiplying the vector indicating the features over the entire length in the time direction of the intensity time series 14 by a matrix obtained by principal component analysis, and uses the reduced vector as the feature. Part of the information contained in the raw feature vector is considered to be lost by this reduction of the number of dimensions.
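- Two minimal sketches of the lossy alternatives mentioned above: keeping low-order Fourier coefficients of the full-length trajectory, and compressing raw trajectory vectors with a projection matrix obtained by principal component analysis. Taking coefficient magnitudes and the specific dimension counts are assumptions.

```python
import numpy as np

def fourier_trajectory(G_band: np.ndarray, L: int = 32) -> np.ndarray:
    """Lossy alternative: Fourier-transform the full-length trajectory of one band
    and keep only the L lowest-order coefficients as the feature vector."""
    coeffs = np.fft.rfft(G_band)
    return np.abs(coeffs[:L])               # using magnitudes is an assumption

def pca_compress(trajectories: np.ndarray, L: int = 32) -> np.ndarray:
    """Lossy alternative: multiply raw trajectory vectors by a matrix obtained from
    principal component analysis to reduce the number of dimensions."""
    centred = trajectories - trajectories.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:L].T               # L-dimensional compressed features
```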
- In the above description, the trajectory over the entire length in the time direction is converted into a vector to extract the trajectory vector; however, for a reciprocating operation of the operating section, the travel may be divided into one-way sections such as an ascending section and a descending section, the trajectory over the entire time length of each divided section may be converted into a vector to extract a trajectory vector, and an identification unit 7 may be prepared for each divided section to perform the identification processing.
- the sections to be divided are not limited to the ascending section and the descending section.
- the ascending section may be further divided into smaller sections such as a lower section, a middle section, and a higher section.
- As described above, since the abnormal sound diagnosis apparatus according to the present invention can diagnose abnormal sound with high accuracy despite differences in equipment specifications and operations, it can be applied to devices for which a reference value for determining abnormal sound cannot be created for each individual, and is suitable for diagnosing abnormal sounds of such devices.
Abstract
Description
Conventionally, as an abnormal sound diagnosis device, a device is known in which the analysis result of sound data collected while the device to be diagnosed is operating normally is stored as a reference value, and an abnormality is diagnosed when the analysis result of sound data collected at the time of diagnosis deviates from that reference value.
For example, the abnormal sound detection device disclosed in Patent Document 1 detects and stores the frequency band of the sound collected when the elevator is operating normally, and diagnoses the presence or absence of an abnormal sound by excluding the stored frequency band from the sound collected during diagnostic operation.
Further, the abnormal sound diagnosis apparatus disclosed in Patent Document 2 acquires a normal-time time-frequency distribution serving as a reference, compares it with the diagnosis-time time-frequency distribution acquired in the diagnosis mode to calculate a degree of abnormality, and determines whether an abnormality has occurred by comparing the calculated degree of abnormality with a threshold value.
However, the techniques of Patent Documents 1 and 2 both require that the sound during normal operation be collected before the device is diagnosed. Therefore, when the sound during normal operation cannot be collected before diagnosing the device, for example in the case of an existing elevator whose maintenance contract was taken over partway through, a reference for diagnosis cannot be created and there was a problem that the abnormal sound diagnosis device could not be applied.
Hereinafter, in order to explain the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.
Embodiment 1.
The abnormal sound diagnosis apparatus according to the first embodiment diagnoses the sound generated from a device to be diagnosed (for example, an elevator), determines whether the generated sound is a normal sound or an abnormal sound, and, when the sound is abnormal, determines the type of abnormality. The device to be diagnosed is, for example, a device composed of a plurality of operating parts, such as an elevator; sound collecting means for collecting the generated sound is attached inside or outside the elevator car, the sound generated while the car reciprocates is collected, and the operating sound of the operating parts is diagnosed by determining whether the collected sound is normal or abnormal. The abnormal sound diagnosis apparatus of the present invention can also be applied to devices other than elevators.
FIG. 1 is a block diagram illustrating the configuration of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
FIG. 1A is a diagram illustrating the functional blocks of the abnormal sound diagnosis apparatus 100 according to the first embodiment, which comprises a sound collection unit 1, a waveform acquisition unit 2, a time-frequency analysis unit 3, an intensity time series acquisition unit 4, a trajectory feature extraction unit 5, an identification parameter storage unit 6, an identification unit 7, and a determination unit 8.
The sound collection unit 1 is constituted by a sound collector such as a microphone, operates in synchronization with the operation of the device to be diagnosed, collects the sound generated from the device, and outputs sound data 11. When the device to be diagnosed is an elevator, the sound collection unit 1 is arranged inside or outside the car. The waveform acquisition unit 2 is constituted by, for example, an amplifier and an A/D converter, samples the waveform of the sound data 11 collected by the sound collection unit 1, and outputs waveform data 12 converted into a digital signal.
Next, the detailed configuration of the identification unit 7 will be described.
FIG. 2 is an explanatory diagram showing the configuration of the identification unit 7 of the abnormal sound diagnosis apparatus 100 according to the first embodiment, and shows the configuration of the neural network in the identification unit 7.
The neural network shown in the example of FIG. 2 has a hierarchical structure consisting of one input layer 71 and two hidden layers, a first hidden layer 72 and a second hidden layer 73. The input layer 71, the first hidden layer 72, and the second hidden layer 73 each comprise units that simulate the function of synapses in a neural circuit. There are no connections between units within a layer; only units in adjacent layers are connected. For this reason, it is known that the neural network of the first embodiment can stably achieve good performance with the learning method known in the field of machine learning as deep learning.
The last hidden layer also serves as the output layer; in the example of FIG. 2, the second hidden layer 73 serves as the output layer.
The input layer 71 has the same number of units as the number of dimensions (for example, L × B) of the trajectory vector 15 input from the trajectory feature extraction unit 5. The second hidden layer 73, that is, the output layer, has K nonlinear units, equal to the number K of abnormality types. The number of units in the hidden layers other than the output layer is set to a predetermined number in view of the discrimination performance of the neural network. Taking the input layer as the 0th layer and letting U(m) (m = 0, 1, 2, ..., M) be the number of units in the m-th layer, the numbers of units are constrained as in the following equation (1):
U(0) = L × B
U(m) = any natural number (m = 1, 2, ..., M−1)   (1)
U(M) = K
In equation (1), U(m) denotes the number of units in the m-th layer.
Next, the learning of the identification parameter 16 will be described.
FIG. 3A is a diagram illustrating the functional blocks of the identification parameter learning apparatus 200 according to the first embodiment, which comprises a sound data generation unit 21, a sound database 22, a waveform acquisition unit 23, a time-frequency analysis unit 24, an intensity time series acquisition unit 25, a trajectory feature extraction unit 26, a teacher vector creation unit 27, and an identification learning unit 28.
An example of the sound data 22a and the abnormality type data 22b stored in the sound database 22 is shown in FIG. 4. As shown in FIG. 4, the sound data 22a consists of a "serial number", an "individual name", and a "sound data file name", and the abnormality type data 22b consists of an "abnormality type C(v)" corresponding to the "serial number". As examples of the abnormality type C(v), types such as "normal", "top abnormality", and "middle floor abnormality" are associated with the sound data, and K abnormality types are set in total, including "normal".
When the total number of sound data items used for learning the neural network is V, the input data consists of V trajectory vectors 34 and the output data of V teacher vectors 35.
If the trajectory vector 34 extracted from the v-th item of sound data in the sound database 22 is written ρ(k, v) and the input data is written x(k, v), the input data is given by the following equation (2):
x(k, v) = ρ(k, v)   (2)
That is, the input data x(k, v) is identical to the trajectory vector 34.
The V teacher vectors 35 created by the teacher vector creation unit 27 are defined as follows: with K the number of abnormality types, y(k, v) the k-th element of the v-th teacher vector, and C(v) the abnormality type of the v-th sound data, each teacher vector is the vector whose C(v)-th element is 1 and whose other elements are 0, as given by equation (3).
Next, the operation of the abnormal sound diagnosis apparatus 100 according to the first embodiment will be described.
FIGS. 5 and 6 are flowcharts showing the operation of the abnormal sound diagnosis apparatus 100 according to the first embodiment; FIG. 5 shows the operation of the sound collection unit 1 and the waveform acquisition unit 2, and FIG. 6 shows the operation of each component from the time-frequency analysis unit 3 onward. In the following, the device to be diagnosed by the abnormal sound diagnosis apparatus 100 is simply referred to as the device.
When the abnormal sound diagnosis apparatus 100 detects the start of operation of the device (step ST1), the sound collection unit 1 collects the sound generated from the device (step ST2). The waveform acquisition unit 2 acquires and amplifies the sound data 11 collected in step ST2, samples the sound waveform by A/D conversion (step ST3), and converts it into waveform data of a 16-bit linear PCM (pulse code modulation) digital signal with a sampling frequency of, for example, 48 kHz (step ST4).
The time-frequency analysis unit 3 extracts frames from the waveform data 12 while shifting a time window in the time direction and obtains the time-frequency distribution g(t, f) by FFT for each frame (step ST11). Here, t is the time index corresponding to the shift interval by which the time window is shifted, and f is the index indicating the frequency of the FFT result. The time t and the frequency f are integers satisfying 0 ≤ t ≤ T and 0 ≤ f ≤ F, respectively. T is the number of frames in the time direction of the time-frequency distribution 13, and F is the index corresponding to the Nyquist frequency, which is half the sampling frequency fs of the waveform data 12 (F = fs/2).
Next, the intensity time series acquisition unit 4 obtains the intensity time series 14 of each band (step ST12). In equation (4), b is the band index, an integer satisfying 0 ≤ b < B (B is the number of bands; B = 5 in this example), and Ω(b) represents the set of frequencies f over which the sum is taken in the time-frequency distribution g(t, f) for band b.
The intensity time series G~(t, b) (t = 0, 1, ..., T−1; b = 0, 1, ..., B−1) after the smoothing of step ST13 is calculated based on equation (5). In equation (5), smooth_t(x(t)) is a function that outputs a new time series obtained by smoothing the series x(t) in the direction of the subscript t.
Further, the smoothed intensity H(l, b) (l = 0, 1, ..., L−1; b = 0, 1, ..., B−1) at the L points that equally divide the time axis, obtained in step ST14, is calculated based on the following equation (6). In equation (6), τ(l) is a real-valued function representing the interpolation position with respect to the subscript t in G~(t, b), and w(l) is a function giving the weighting coefficients for interpolation; they are given by equations (7) and (8). In equation (8), int(x) is a function that returns the integer part of the argument x.
In the generation of the K-dimensional score vector 17, the identification unit 7 inputs the trajectory vector 15 to the input layer 71 of the neural network and computes the activations using the identification parameters stored in the identification parameter storage unit 6 (step ST17).
The processing in step ST17 will be described with reference to the specific configuration example of the identification unit 7 in FIG. 2. First, the i-th element of the trajectory vector 15 is copied to the i-th unit of the input layer. If the value of the i-th unit of the input layer is x(i, 0), x(i, 0) is given by the following equation (10):
x(i, 0) = ρ(i)   (10)
In equation (10), ρ(i) denotes the value of the i-th element of the trajectory vector 15.
Next, the output of each unit is calculated in order from the first hidden layer 72 to the second hidden layer 73. The output of each unit is obtained by applying the loads to the outputs of all units in the previous layer to obtain their sum, subtracting the bias, and applying the nonlinear conversion of the sigmoid function; x(j, m) is calculated by equation (11). In equation (11), σ(x) is the sigmoid function, a nonlinear input-output characteristic with a soft threshold, given by equation (12). In equation (11), x(i, 0) is required when m = 1; as shown in equation (10), it is equal to the i-th element ρ(i) of the trajectory vector 15.
The calculation based on equation (11) is repeated for m = 1, ..., M to obtain the output x(k, M) of the last hidden layer. In the example of FIG. 2, the output x(k, 2) of the second hidden layer 73 corresponds to the output o(k) of the output layer, as in equation (13):
o(k) = x(k, M)   (13)
Finally, the K outputs of the output layer are normalized so that their sum becomes 1. If the result of normalization is the score vector value s(k), s(k) is given by equation (14), known as the softmax operation. The K-dimensional score vector 17 obtained by the above processing is output to the determination unit 8.
Returning to the flowchart of FIG. 6, the determination unit 8 compares the elements of the K-dimensional score vector 17 generated in step ST17, determines the most likely abnormality type from the index of the largest element (step ST18), outputs the determination result (step ST19), and ends the processing. If the most likely abnormality type is k*, k* is given by equation (15). Although a configuration that outputs the single element with the largest score is shown here, a plurality of elements may be output together with their scores.
FIG. 7 is a diagram illustrating an example of the abnormality types and the K-dimensional score vector referred to by the determination unit 8 of the abnormal sound diagnosis apparatus 100 according to the first embodiment. As shown in FIG. 7, a K-dimensional score vector is associated with the K abnormality types, and the values of its K elements sum to 1. In the example of FIG. 7, the score of the abnormality type "top abnormality" takes the maximum value of 0.64, so the determination unit 8 determines that the most likely abnormality type is "top abnormality".
図8は、実施の形態1の異常音診断装置100による異常音診断の効果を示す説明図である。また、比較として図9には従来の異常音診断装置による異常音診断の結果を示している。
まず、従来の異常音診断装置による異常音診断の方法および得られる結果について図9を参照して説明する。従来の異常音診断装置では、かご300の走行区間301を分割し、分割した区間ごとに正常時に発生する音の信号強度を基準値として記憶する。図9(a)の例では、走行区間を6分割し、第1の基準値、第2の基準値、・・・、第6の基準値を取得して記憶する。 Next, the effect when the abnormal
FIG. 8 is an explanatory diagram showing the effect of abnormal sound diagnosis performed by the abnormal
First, an abnormal sound diagnosis method using a conventional abnormal sound diagnosis apparatus and results obtained will be described with reference to FIG. In the conventional abnormal sound diagnosis apparatus, the traveling
In the abnormal sound diagnosis apparatus 100 of Embodiment 1, as shown in FIG. 8(a), the sound generated while the car 300 travels back and forth between the lowest floor and the highest floor is collected, time-frequency analysis is performed on the obtained sound data to obtain an intensity time series, and the trajectory over the entire length of the intensity time series in the time direction is converted into a vector as a single whole to extract a trajectory vector. In the example of FIG. 8(a), for simplicity of explanation, there are two abnormality types, "normal" and "abnormal" (K = 0 to 1), the number of bands is one (B = 1), and L×1-dimensional trajectory vectors 306 and 307 are extracted. The trajectory vector 306 is the vector obtained when the abnormality type is "1: abnormal", and the trajectory vector 307 is the vector obtained when the abnormality type is "0: normal". FIG. 8(b) shows the result of plotting the positions of the trajectory vectors 306 and 307 in space when they are input to the identification unit 7.
FIG. 8(b) shows a first feature axis (principal axis) and a second feature axis (axis orthogonal to the principal axis) obtained by, for example, principal component analysis of the set of vectors of normal individuals and vectors of abnormal individuals, and shows the arrangement of the vectors in the L×1-dimensional space spanned by these feature axes. Note that principal component analysis is used here only to display the mutual positional relationship of the vectors in the multidimensional space and is not a process constituting the present invention; likewise, the first and second feature axes are not calculated by the configuration of the present invention but are shown only to illustrate that the trajectory vectors are separated in space.
As shown in the plot of FIG. 8(b), based on the abnormality type of each trajectory vector and its position in the space, a boundary 310 separating the normal vectors from the abnormal vectors can be obtained. Although FIG. 8(b) shows an example in which a straight line is obtained as the boundary 310, in actual diagnosis processing a hypersurface (curve) having a complicated shape is assumed to be obtained.
In this way, general features that appear in the intensity time series can be captured regardless of the elevator specifications and operating environment; there is no need to learn reference values for each individual unit in advance, and a diagnosis that is robust to differences in elevator specifications and operating environments can be performed.
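The trajectory-vector extraction underlying this comparison (time-frequency analysis of the whole run, an intensity time series per frequency band, and conversion of the full-length trajectory into a single vector) can be sketched as follows. The spectrogram settings, the grouping of frequency bins into B bands, and the resampling of each band's intensity curve to L points are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np
from scipy import signal

def trajectory_vector(sound, fs, n_bands=1, length=256):
    """Convert one full run of sound data into an L x B dimensional trajectory vector.

    A spectrogram gives the time-frequency distribution; its bins are grouped into
    n_bands frequency bands, the intensity time series of each band is resampled to
    a common length L, and the band-wise vectors are concatenated."""
    _, _, sxx = signal.spectrogram(sound, fs=fs)              # time-frequency distribution
    band_vectors = []
    for band in np.array_split(sxx, n_bands, axis=0):         # B frequency bands
        intensity = band.mean(axis=0)                          # intensity time series of the band
        band_vectors.append(signal.resample(intensity, length))  # fix the length to L samples
    return np.concatenate(band_vectors)                        # trajectory vector (L x B elements)
```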
FIG. 10 is an explanatory diagram illustrating the concatenation of multi-channel intensity vectors in the trajectory feature extraction unit 5. FIG. 10 shows a case in which the intensity vectors of three channels are concatenated: the vector 15a of the first channel, the vector 15b of the second channel, and the vector 15c of the third channel are joined in the time-axis direction of the vectors to generate an L×B×3-dimensional trajectory vector 15 (the "×3" results from concatenating the intensity vectors of the three channels). Since connections spanning the channels exist in the hidden layers of the neural network, the synchrony between channels can be learned. In the description up to the preceding paragraph the number of dimensions of the trajectory vector was L×B; here it should be read as L×B×3.
In this way, by using sound data collected by a plurality of sound collectors, the separability in the identification space between vectors of different abnormality types is improved, and the diagnostic accuracy can be improved.
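A minimal sketch of the concatenation of FIG. 10, taking already-extracted per-channel trajectory vectors as input (their extraction itself is not repeated here), might look like this:

```python
import numpy as np

def concat_channels(channel_vectors):
    """Join the L x B dimensional trajectory vector of each channel end to end along
    the time-axis direction, giving an L x B x (number of channels) dimensional
    trajectory vector, as in FIG. 10."""
    return np.concatenate([np.ravel(v) for v in channel_vectors])
```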
FIG. 11 is an explanatory diagram showing the effect when abnormal sound diagnosis is performed based on a trajectory vector obtained by concatenating multi-channel intensity vectors.
In FIG. 11(a), the intensity time series 311, 312, and 313 are the intensity time series obtained in the first, second, and third frequency bands, respectively, and the vectors obtained from these intensity time series are concatenated in the time-axis direction to form the L×1×3-dimensional trajectory vectors 314 and 315. The trajectory vector 314 is the vector obtained when the abnormality type is "1: abnormal", and the trajectory vector 315 is the vector obtained when the abnormality type is "0: normal". FIG. 11(b) shows the result of plotting the positions of the trajectory vectors 314 and 315 in space when they are input to the identification unit 7; a result equivalent to that shown in FIG. 8(b) is obtained.
In the first embodiment described above, the case where the identification unit 7 is configured as a neural network was described. In this second embodiment, a case where a support vector machine (hereinafter referred to as SVM) is applied as the identification unit will be described.
Since the overall configuration of the abnormal sound diagnosis apparatus 100 of the second embodiment is the same as that of the first embodiment, the block diagram is omitted, and only the identification unit, whose configuration differs, is described in detail below.
FIG. 12 is a diagram illustrating a configuration of the identification unit 7a according to the second embodiment. When the number of abnormality types is K, the identification unit 7a has (K-1)K/2 SVMs in total. Each SVM is trained so as to classify and identify the vectors of any two of the K abnormality types, including the normal type. Each SVM has, as parameters, the number of support vectors n, the n support vectors x_i (i = 0, 1, 2, ..., n-1), the n coefficients α_i (i = 0, 1, 2, ..., n-1), a bias b, and the definition k(x1, x2) of the kernel function described later. Hereinafter, the SVM that discriminates between the normal or abnormality type i and the abnormality type j (where i < j) is denoted SVM[i, j] (0 ≤ i < j < K).
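The pairwise structure of the identification unit 7a can be illustrated as below; the helper name is, of course, not from the patent.

```python
from itertools import combinations

def svm_pairs(num_types):
    """Enumerate the index pairs (i, j) with 0 <= i < j < K for which one SVM[i, j]
    is held; there are (K-1)K/2 such pairs in total."""
    return list(combinations(range(num_types), 2))

# svm_pairs(4) -> [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], i.e. 3*4/2 = 6 SVMs
```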
Next, the operation of the abnormal sound diagnosis apparatus according to the second embodiment will be described.
FIG. 13 is a flowchart showing the operation of the abnormal sound diagnosis apparatus according to the second embodiment. In the following, steps identical to those of the abnormal sound diagnosis apparatus according to the first embodiment are given the same reference signs as those used in FIG. 6, and their description is omitted or simplified. The operations of the sound collection unit 1 and the waveform acquisition unit 2 are the same as in the flowchart of FIG. 5 of the first embodiment, and their description is also omitted.
When the trajectory vector 15 created by the trajectory feature extraction unit 5 in step ST16 is input, the identification unit 7a inputs the trajectory vector 15 to each SVM and, using the identification parameters stored in the identification parameter storage unit 6, calculates the output value y(ρ) of the discriminant function of each SVM based on Equation (16) (step ST21).
Here, k(x1, x2) is the inner product <φ(x1), φ(x2)> between the mapping φ(x1) of the vector x1 into a multidimensional space and the mapping φ(x2) of the vector x2 into that space (φ(x) is a nonlinear function of the vector x that cannot be written in an explicit form). As the kernel function, for example, the Gaussian kernel expressed by Equation (17) can be used, where σ is the parameter of the Gaussian kernel.
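Equations (16) and (17) are not reproduced in this text; the sketch below uses the standard kernel expansion over the n support vectors with coefficients α_i and bias b, and one common parameterization of the Gaussian kernel, so it should be read as an assumption-laden illustration rather than the patent's exact expressions.

```python
import numpy as np

def gauss_kernel(x1, x2, sigma):
    """One common form of the Gaussian kernel; sigma is the kernel parameter of Eq. (17)."""
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return float(np.exp(-np.dot(d, d) / (2.0 * sigma ** 2)))

def svm_output(rho, support_vectors, alphas, bias, sigma):
    """Discriminant output y(rho) of one SVM[i, j], assuming the usual expansion over
    the n support vectors (cf. Eq. (16)); in a standard SVM the sign of y(rho)
    indicates on which side of the decision boundary the trajectory vector lies."""
    return sum(a * gauss_kernel(sv, rho, sigma)
               for a, sv in zip(alphas, support_vectors)) + bias
```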
By dividing the intensity time series in correspondence with the operating sections of the diagnosis target device and extracting a trajectory vector for each section, diagnosis becomes possible for each section. In the case of an elevator, for example, even when there is no abnormality during ascent but an abnormality occurs during descent, a section-by-section diagnosis can still detect it.
The sections are not limited to an ascending section and a descending section; for example, the ascending section may be further divided into finer sections such as a lower-floor section, a middle-floor section, and an upper-floor section.
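A small sketch of this per-section handling, assuming the section boundaries are available as (start, end) index pairs (an illustrative input format, not specified by the patent):

```python
def split_by_sections(intensity_series, boundaries):
    """Cut the intensity time series of a whole run at the operating-section
    boundaries (e.g. ascent / descent, or finer floor ranges); each piece is then
    vector-converted and scored on its own, as described above."""
    return [intensity_series[start:end] for start, end in boundaries]

# e.g. boundaries = [(0, 1200), (1200, 2400)]  # ascent then descent (illustrative indices)
```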
Claims (10)
- An abnormal sound diagnosis apparatus that diagnoses whether a sound generated in a diagnosis target device is abnormal, comprising:
a sound collection unit that collects the sound generated by the diagnosis target device and acquires sound data;
an intensity time series acquisition unit that acquires an intensity time series from a time-frequency distribution obtained by analyzing waveform data of the sound data acquired by the sound collection unit;
a trajectory feature extraction unit that converts a trajectory indicating the intensity features, over the entire time direction, of the intensity time series acquired by the intensity time series acquisition unit into a vector and extracts a trajectory vector;
an identification parameter storage unit that stores identification parameters learned by taking as input a vector that is a trajectory indicating the intensity features, over the entire time direction, of an intensity time series acquired from a time-frequency distribution obtained by analyzing waveform data of sound data generated by a reference device, and by taking as output information indicating a state type of the diagnosis target device;
an identification unit that acquires a score for each state type of the diagnosis target device from the trajectory vector extracted by the trajectory feature extraction unit and the identification parameters stored in the identification parameter storage unit; and
a determination unit that refers to the scores acquired by the identification unit and determines whether the sound generated in the diagnosis target device is normal or abnormal, and the type of abnormality.
- The abnormal sound diagnosis apparatus according to claim 1, wherein the intensity time series acquisition unit acquires, as the intensity time series, the intensity with respect to time and frequency from the time-frequency distribution, and
the trajectory feature extraction unit converts, in the two-dimensional space of time and intensity for each frequency, the trajectory indicated by the intensity time series acquired by the intensity time series acquisition unit into vectors, and concatenates the converted vectors to extract the trajectory vector.
- The abnormal sound diagnosis apparatus according to claim 1, wherein the trajectory feature extraction unit performs lossless vector conversion or lossy vector conversion on the intensity time series acquired by the intensity time series acquisition unit.
- The abnormal sound diagnosis apparatus according to claim 1, wherein the identification unit acquires the scores using a neural network technique.
- The abnormal sound diagnosis apparatus according to claim 1, wherein the identification unit acquires the scores using a support vector machine technique.
- The abnormal sound diagnosis apparatus according to claim 1, wherein a plurality of the sound collection units are arranged on the diagnosis target device and collect the sound generated in the diagnosis target device to obtain sound data of a plurality of channels,
the intensity time series acquisition unit acquires the intensity time series of the plurality of channels from the time-frequency distributions obtained by analyzing the waveform data of the sound data of each of the plurality of channels collected by the sound collection units, and
the trajectory feature extraction unit converts the trajectories indicating the intensity features, over the entire time direction, of the intensity time series of the plurality of channels acquired by the intensity time series acquisition unit into vectors, concatenates the converted vectors of the plurality of channels in the time direction, and extracts the trajectory vector.
- The abnormal sound diagnosis apparatus according to claim 1, wherein the intensity time series acquisition unit acquires the intensity time series of each operating section from the time-frequency distribution divided in correspondence with the operating sections of the diagnosis target device,
the trajectory feature extraction unit divides the intensity time series acquired by the intensity time series acquisition unit in correspondence with the operating sections of the diagnosis target device, converts the trajectory indicating the intensity features, over the entire time direction, of each divided intensity time series into a vector, and extracts the trajectory vectors, and
the identification unit acquires a score for each state type for each operating section of the diagnosis target device from the trajectory vector corresponding to each operating section extracted by the trajectory feature extraction unit and the identification parameters stored in the identification parameter storage unit.
- An abnormal sound diagnosis system comprising:
an identification parameter learning device including
a sound database that stores sound data generated by the reference device, abnormal-sound superimposed data obtained by superimposing abnormal sounds on the sound data, and abnormality type information of the device associated with the sound data and the abnormal-sound superimposed data,
a parameter intensity time series acquisition unit that acquires intensity time series from time-frequency distributions obtained by analyzing the waveform data of the sound data and the abnormal-sound superimposed data stored in the sound database,
a parameter trajectory feature extraction unit that converts, from the intensity time series acquired by the parameter intensity time series acquisition unit, the trajectories indicating the intensity features of the intensity time series over the entire time direction into vectors,
a teacher vector creation unit that creates teacher vectors from the abnormality type information stored in the sound database, and
an identification learning unit that performs learning with the trajectory vectors converted by the parameter trajectory feature extraction unit as input and the teacher vectors created by the teacher vector creation unit as output, and stores the learning result in the identification parameter storage unit as the identification parameters; and
the abnormal sound diagnosis apparatus according to claim 1.
- An abnormal sound diagnosis method for diagnosing whether a sound generated in a diagnosis target device is abnormal, comprising the steps of:
a sound collection unit collecting the sound generated by the diagnosis target device and acquiring sound data;
an intensity time series acquisition unit acquiring an intensity time series from a time-frequency distribution obtained by analyzing waveform data of the sound data;
a trajectory feature extraction unit converting a trajectory indicating the intensity features of the intensity time series over the entire time direction into a vector and extracting a trajectory vector;
an identification unit acquiring a score for each state type of the diagnosis target device from the trajectory vector and from identification parameters learned by taking as input a vector that is a trajectory indicating the intensity features, over the entire time direction, of an intensity time series acquired from a time-frequency distribution obtained by analyzing waveform data of sound data generated by a reference device, and by taking as output information indicating a state type of the diagnosis target device; and
a determination unit referring to the scores and determining whether the sound generated in the diagnosis target device is normal or abnormal, and the type of abnormality.
- An abnormal sound diagnosis program for causing a computer to execute:
a sound collection processing procedure of collecting sound generated by a diagnosis target device and acquiring sound data;
an intensity time series acquisition processing procedure of acquiring an intensity time series from a time-frequency distribution obtained by analyzing waveform data of the sound data;
a trajectory feature extraction processing procedure of converting a trajectory indicating the intensity features of the intensity time series over the entire time direction into a vector and extracting a trajectory vector;
an identification processing procedure of acquiring a score for each state type of the diagnosis target device from the trajectory vector and from identification parameters learned by taking as input a vector that is a trajectory indicating the intensity features, over the entire time direction, of an intensity time series acquired from a time-frequency distribution obtained by analyzing waveform data of sound data generated by a reference device, and by taking as output information indicating a state type of the diagnosis target device; and
a determination processing procedure of referring to the score and determining whether the sound generated in the diagnosis target device is normal or abnormal and, if abnormal, the type of abnormality.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016572982A JP6250198B2 (en) | 2015-02-03 | 2015-02-03 | Abnormal sound diagnosis apparatus, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program |
KR1020177023765A KR101962558B1 (en) | 2015-02-03 | 2015-02-03 | Abnormal sound diagnostic apparatus, abnormal sound diagnostic system, abnormal sound diagnostic method, and abnormal sound diagnostic program |
CN201580075167.7A CN107209509B (en) | 2015-02-03 | 2015-02-03 | Abnormal sound diagnostic device, abnormal sound diagnostic system, abnormal sound diagnostic method and abnormal sound diagnostic program |
PCT/JP2015/052991 WO2016125256A1 (en) | 2015-02-03 | 2015-02-03 | Abnormal sound diagnosis device, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program |
DE112015006099.5T DE112015006099B4 (en) | 2015-02-03 | 2015-02-03 | Abnormal noise diagnostic device, abnormal noise diagnostic system, abnormal noise diagnostic method and abnormal noise diagnostic program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/052991 WO2016125256A1 (en) | 2015-02-03 | 2015-02-03 | Abnormal sound diagnosis device, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016125256A1 true WO2016125256A1 (en) | 2016-08-11 |
Family
ID=56563621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/052991 WO2016125256A1 (en) | 2015-02-03 | 2015-02-03 | Abnormal sound diagnosis device, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP6250198B2 (en) |
KR (1) | KR101962558B1 (en) |
CN (1) | CN107209509B (en) |
DE (1) | DE112015006099B4 (en) |
WO (1) | WO2016125256A1 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019160143A (en) * | 2018-03-16 | 2019-09-19 | 三菱重工業株式会社 | Apparatus for estimating parameter for servo mechanism, and parameter estimating method and parameter estimating program |
WO2020158398A1 (en) * | 2019-01-30 | 2020-08-06 | 日本電信電話株式会社 | Sound generation device, data generation device, abnormality degree calculation device, index value calculation device, and program |
JP2021032714A (en) * | 2019-08-26 | 2021-03-01 | 株式会社日立ビルシステム | Inspection equipment for machine facility |
CN112770012A (en) * | 2019-11-01 | 2021-05-07 | 中移物联网有限公司 | Information prompting method, device, system and storage medium |
CN112960506A (en) * | 2021-03-29 | 2021-06-15 | 浙江新再灵科技股份有限公司 | Elevator warning sound detection system based on audio features |
CN113447274A (en) * | 2020-03-24 | 2021-09-28 | 本田技研工业株式会社 | Abnormal sound determination device and abnormal sound determination method |
JP2021151902A (en) * | 2020-03-24 | 2021-09-30 | 株式会社日立ビルシステム | Inspection device and inspection method of elevator |
KR20210122839A (en) | 2019-06-06 | 2021-10-12 | 미쓰비시 덴키 빌딩 테크노 서비스 가부시키 가이샤 | analysis device |
JP2022082208A (en) * | 2020-11-20 | 2022-06-01 | 株式会社日立ビルシステム | Pattern classification device, elevator sound diagnosis system, pattern classification method, elevator sound diagnosis device, and elevator sound diagnosis method |
JP7367226B2 (en) | 2019-10-17 | 2023-10-23 | 三菱電機株式会社 | Manufacturing automation using sound wave separation neural network |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6777686B2 (en) * | 2018-05-29 | 2020-10-28 | ファナック株式会社 | Diagnostic equipment, diagnostic methods and diagnostic programs |
US11966218B2 (en) * | 2018-06-15 | 2024-04-23 | Mitsubishi Electric Corporation | Diagnosis device, diagnosis method and program |
JP7126256B2 (en) * | 2018-10-30 | 2022-08-26 | 国立研究開発法人宇宙航空研究開発機構 | Abnormality diagnosis device, abnormality diagnosis method, and program |
KR102240775B1 (en) * | 2019-10-08 | 2021-04-16 | 한국콘베어공업주식회사 | Deep learning-based apparatus and method for determining breakdown of power transfer device using noise data |
JP7222939B2 (en) * | 2020-02-03 | 2023-02-15 | 株式会社日立製作所 | Explanatory information generation device for time-series patterns |
CN112183647B (en) * | 2020-09-30 | 2024-07-26 | 国网山西省电力公司大同供电公司 | Method for detecting and positioning sound faults of substation equipment based on deep learning |
CN114486254B (en) * | 2022-02-09 | 2024-10-22 | 青岛迈金智能科技股份有限公司 | Bicycle bearing detection method based on time/frequency double-domain analysis |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012166935A (en) * | 2011-02-16 | 2012-09-06 | Mitsubishi Electric Building Techno Service Co Ltd | Abnormal sound detection device for elevator |
JP2013200143A (en) * | 2012-03-23 | 2013-10-03 | Mitsubishi Electric Corp | Abnormal sound diagnosis device and abnormal sound diagnosis system |
JP2014105075A (en) * | 2012-11-28 | 2014-06-09 | Mitsubishi Electric Corp | Failure part estimation device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0692914B2 (en) | 1989-04-14 | 1994-11-16 | 株式会社日立製作所 | Equipment / facility condition diagnosis system |
JP3785703B2 (en) | 1996-10-31 | 2006-06-14 | 株式会社明電舎 | Time series data identification method and identification apparatus |
JP2003334679A (en) * | 2002-05-16 | 2003-11-25 | Mitsubishi Electric Corp | Diagnosis system for laser welding |
CN101753992A (en) * | 2008-12-17 | 2010-06-23 | 深圳市先进智能技术研究所 | Multi-mode intelligent monitoring system and method |
CN102348101A (en) * | 2010-07-30 | 2012-02-08 | 深圳市先进智能技术研究所 | Examination room intelligence monitoring system and method thereof |
JP5783808B2 (en) * | 2011-06-02 | 2015-09-24 | 三菱電機株式会社 | Abnormal sound diagnosis device |
JP5930789B2 (en) * | 2012-03-23 | 2016-06-08 | 三菱電機株式会社 | Abnormal sound diagnosis device |
- 2015-02-03 WO PCT/JP2015/052991 patent/WO2016125256A1/en active Application Filing
- 2015-02-03 DE DE112015006099.5T patent/DE112015006099B4/en active Active
- 2015-02-03 KR KR1020177023765A patent/KR101962558B1/en active IP Right Grant
- 2015-02-03 CN CN201580075167.7A patent/CN107209509B/en active Active
- 2015-02-03 JP JP2016572982A patent/JP6250198B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012166935A (en) * | 2011-02-16 | 2012-09-06 | Mitsubishi Electric Building Techno Service Co Ltd | Abnormal sound detection device for elevator |
JP2013200143A (en) * | 2012-03-23 | 2013-10-03 | Mitsubishi Electric Corp | Abnormal sound diagnosis device and abnormal sound diagnosis system |
JP2014105075A (en) * | 2012-11-28 | 2014-06-09 | Mitsubishi Electric Corp | Failure part estimation device |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019160143A (en) * | 2018-03-16 | 2019-09-19 | 三菱重工業株式会社 | Apparatus for estimating parameter for servo mechanism, and parameter estimating method and parameter estimating program |
WO2020158398A1 (en) * | 2019-01-30 | 2020-08-06 | 日本電信電話株式会社 | Sound generation device, data generation device, abnormality degree calculation device, index value calculation device, and program |
CN113767267A (en) * | 2019-06-06 | 2021-12-07 | 三菱电机大楼技术服务株式会社 | Analysis device |
KR20210122839A (en) | 2019-06-06 | 2021-10-12 | 미쓰비시 덴키 빌딩 테크노 서비스 가부시키 가이샤 | analysis device |
JP2021032714A (en) * | 2019-08-26 | 2021-03-01 | 株式会社日立ビルシステム | Inspection equipment for machine facility |
JP7105745B2 (en) | 2019-08-26 | 2022-07-25 | 株式会社日立ビルシステム | Mechanical equipment inspection device |
JP7367226B2 (en) | 2019-10-17 | 2023-10-23 | 三菱電機株式会社 | Manufacturing automation using sound wave separation neural network |
CN112770012A (en) * | 2019-11-01 | 2021-05-07 | 中移物联网有限公司 | Information prompting method, device, system and storage medium |
JP2021151902A (en) * | 2020-03-24 | 2021-09-30 | 株式会社日立ビルシステム | Inspection device and inspection method of elevator |
CN113447274A (en) * | 2020-03-24 | 2021-09-28 | 本田技研工业株式会社 | Abnormal sound determination device and abnormal sound determination method |
JP7142662B2 (en) | 2020-03-24 | 2022-09-27 | 株式会社日立ビルシステム | Elevator inspection device and inspection method |
CN113447274B (en) * | 2020-03-24 | 2023-08-25 | 本田技研工业株式会社 | Abnormal sound determination device and abnormal sound determination method |
JP2022082208A (en) * | 2020-11-20 | 2022-06-01 | 株式会社日立ビルシステム | Pattern classification device, elevator sound diagnosis system, pattern classification method, elevator sound diagnosis device, and elevator sound diagnosis method |
JP7492443B2 (en) | 2020-11-20 | 2024-05-29 | 株式会社日立ビルシステム | Pattern classification device, elevator sound diagnostic system, and pattern classification method Elevator sound diagnostic device and elevator sound diagnostic method |
CN112960506A (en) * | 2021-03-29 | 2021-06-15 | 浙江新再灵科技股份有限公司 | Elevator warning sound detection system based on audio features |
Also Published As
Publication number | Publication date |
---|---|
JP6250198B2 (en) | 2017-12-20 |
KR20170108085A (en) | 2017-09-26 |
DE112015006099B4 (en) | 2024-08-01 |
JPWO2016125256A1 (en) | 2017-08-03 |
CN107209509A (en) | 2017-09-26 |
CN107209509B (en) | 2019-05-28 |
KR101962558B1 (en) | 2019-03-26 |
DE112015006099T5 (en) | 2017-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6250198B2 (en) | Abnormal sound diagnosis apparatus, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program | |
Khan et al. | Automatic heart sound classification from segmented/unsegmented phonocardiogram signals using time and frequency features | |
Dhar et al. | Cross-wavelet assisted convolution neural network (AlexNet) approach for phonocardiogram signals classification | |
CN105841961A (en) | Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network | |
CN108291837B (en) | Degraded portion estimation device, degraded portion estimation method, and mobile body diagnosis system | |
CN109512423A (en) | A kind of myocardial ischemia Risk Stratification Methods based on determining study and deep learning | |
JP6828807B2 (en) | Data analysis device, data analysis method and data analysis program | |
CN112036467A (en) | Abnormal heart sound identification method and device based on multi-scale attention neural network | |
CN111956208B (en) | ECG signal classification method based on ultra-lightweight convolutional neural network | |
Islam et al. | Motor bearing fault diagnosis using deep convolutional neural networks with 2d analysis of vibration signal | |
CN107301409A (en) | Learn the system and method for processing electrocardiogram based on Wrapper feature selectings Bagging | |
KR20220036292A (en) | Deep neural network pre-training method for electrocardiogram data classification | |
Gupta et al. | Segmentation and classification of heart sounds | |
CN111476339A (en) | Rolling bearing fault feature extraction method, intelligent diagnosis method and system | |
CN114564990A (en) | Electroencephalogram signal classification method based on multi-channel feedback capsule network | |
CN115530788A (en) | Arrhythmia classification method based on self-attention mechanism | |
CN112257741A (en) | Method for detecting generative anti-false picture based on complex neural network | |
CN112381895A (en) | Method and device for calculating cardiac ejection fraction | |
CN108647584A (en) | Cardiac arrhythmia method for identifying and classifying based on rarefaction representation and neural network | |
CN116864140A (en) | Intracardiac branch of academic or vocational study postoperative care monitoring data processing method and system thereof | |
CN113627391B (en) | Cross-mode electroencephalogram signal identification method considering individual difference | |
CN113639985B (en) | Mechanical fault diagnosis and state monitoring method based on optimized fault characteristic frequency spectrum | |
CN112336369A (en) | Coronary heart disease risk index evaluation system of multichannel heart sound signals | |
CN114383846B (en) | Bearing composite fault diagnosis method based on fault label information vector | |
CN116644273A (en) | Fault diagnosis method and system based on interpretability multiplication convolution network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15881073; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2016572982; Country of ref document: JP; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 112015006099; Country of ref document: DE |
 | ENP | Entry into the national phase | Ref document number: 20177023765; Country of ref document: KR; Kind code of ref document: A |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 15881073; Country of ref document: EP; Kind code of ref document: A1 |