WO2016125256A1 - 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム (Abnormal sound diagnosis device, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program) - Google Patents

異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム (Abnormal sound diagnosis device, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program)

Info

Publication number
WO2016125256A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
intensity
vector
trajectory
time series
Prior art date
Application number
PCT/JP2015/052991
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
阿部 芳春
寛 福永
Original Assignee
三菱電機株式会社
三菱電機ビルテクノサービス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 and 三菱電機ビルテクノサービス株式会社
Priority to CN201580075167.7A priority Critical patent/CN107209509B/zh
Priority to DE112015006099.5T priority patent/DE112015006099T5/de
Priority to PCT/JP2015/052991 priority patent/WO2016125256A1/ja
Priority to JP2016572982A priority patent/JP6250198B2/ja
Priority to KR1020177023765A priority patent/KR101962558B1/ko
Publication of WO2016125256A1 publication Critical patent/WO2016125256A1/ja

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00Testing or monitoring of control systems or parts thereof
    • G05B23/02Electric testing or monitoring
    • G05B23/0205Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0259Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the response to fault detection
    • G05B23/0275Fault isolation and identification, e.g. classify fault; estimate cause or root of failure
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00Testing or monitoring of control systems or parts thereof
    • G05B23/02Electric testing or monitoring
    • G05B23/0205Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0224Process history based detection method, e.g. whereby history implies the availability of large amounts of data
    • G05B23/0227Qualitative history assessment, whereby the type of data acted upon, e.g. waveforms, images or patterns, is not relevant, e.g. rule based assessment; if-then decisions
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01HMEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H13/00Measuring resonant frequency
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00Testing or monitoring of control systems or parts thereof
    • G05B23/02Electric testing or monitoring
    • G05B23/0205Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0221Preprocessing measurements, e.g. data collection rate adjustment; Standardization of measurements; Time series or signal analysis, e.g. frequency analysis or wavelets; Trustworthiness of measurements; Indexes therefor; Measurements using easily measured parameters to estimate parameters difficult to measure; Virtual sensor creation; De-noising; Sensor fusion; Unconventional preprocessing inherently present in specific fault detection methods like PCA-based methods
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/032Quantisation or dequantisation of spectral components
    • G10L19/038Vector quantisation, e.g. TwinVQ audio
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/37Measurements
    • G05B2219/37337Noise, acoustic emission, sound

Definitions

  • The present invention relates to an abnormal sound diagnosis apparatus, abnormal sound diagnosis system, abnormal sound diagnosis method, and abnormal sound diagnosis program that analyze the sound generated from a device to be diagnosed and determine whether an abnormal sound has occurred and, if so, its type, without requiring sound to be collected while the device is operating normally.
  • As an abnormal sound diagnosis device, a device is known in which the analysis result of sound data collected while the device to be diagnosed is operating normally is stored as a reference value, and an abnormality is diagnosed when the analysis result of sound data collected at the time of diagnosis deviates from that reference value.
  • For example, the abnormal sound detection device disclosed in Patent Document 1 detects and stores the frequency bands of the sound collected while the elevator operates normally, and diagnoses the presence or absence of an abnormal sound by excluding the stored frequency bands from the sound collected during a diagnostic run.
  • The abnormal sound diagnosis apparatus disclosed in Patent Document 2 acquires a normal-time time-frequency distribution as a reference for diagnosis, compares it with the time-frequency distribution acquired in diagnosis mode to calculate a degree of abnormality, and determines whether an abnormality has occurred by comparing the calculated degree of abnormality with a threshold value.
  • When the sound during normal operation cannot be collected and a reference for diagnosis therefore cannot be created, it is also possible to create the reference by collecting the sound of another device with the same specifications during its normal operation.
  • However, specifications such as the location of the sound collector, the size of the parts that make up the equipment, and the installation conditions differ between devices. If the equipment is an elevator, for example, it is not practical from a cost standpoint to prepare another elevator with the same hoistway size, hoistway material, car load capacity, and operating speed, so there is a problem that it is difficult to create an appropriate reference using another device.
  • The present invention has been made to solve the above-described problems, and its purpose is to diagnose the operating state of a device without requiring sound to be collected in advance during normal operation of the device to be diagnosed.
  • An abnormal sound diagnosis apparatus according to the present invention includes: a sound collection unit that collects the sound generated by a diagnosis target device and acquires sound data; an intensity time series acquisition unit that acquires an intensity time series from the time-frequency distribution obtained by analyzing the waveform data of the sound data; a trajectory feature extraction unit that extracts a trajectory vector by converting into a vector the trajectory indicating the intensity features over the entire time direction of the intensity time series; an identification parameter storage unit that stores identification parameters learned using, as input, trajectory vectors obtained in the same way from the waveform data of sound generated by reference devices; an identification unit that acquires a score for each state type of the diagnosis target device from the trajectory vector and the identification parameters; and a determination unit that, referring to the scores, determines whether the sound generated in the diagnosis target device is normal or abnormal and, if abnormal, the type of abnormality.
  • According to the present invention, it is possible to diagnose the presence or absence of abnormal sound even for a device for which the sound during normal operation cannot be collected in advance to create a reference for diagnosis.
  • FIG. 1 is a block diagram illustrating the configuration of the abnormal sound diagnosis apparatus according to Embodiment 1.
  • FIG. 2 is a diagram showing the configuration of the identification unit of the abnormal sound diagnosis apparatus according to Embodiment 1.
  • FIG. 3 is a block diagram illustrating the configuration of the identification parameter learning device according to Embodiment 1.
  • FIG. 4 is a diagram showing an example of the data accumulated in the database of the identification parameter learning device according to Embodiment 1.
  • FIGS. 5 and 6 are flowcharts showing the operation of the abnormal sound diagnosis apparatus according to Embodiment 1.
  • FIG. 13 is a flowchart showing the operation of the abnormal sound diagnosis apparatus according to Embodiment 2.
  • Embodiment 1. The abnormal sound diagnosis apparatus analyzes the sound generated from a device to be diagnosed (for example, an elevator), determines whether the generated sound is normal or abnormal, and, when the sound is abnormal, determines the type of abnormality.
  • The device to be diagnosed is, for example, a device composed of a plurality of working parts, such as an elevator. Sound collecting means attached inside or outside the elevator car collects the sound generated while the car reciprocates, and the working sound of the working parts is diagnosed by determining whether the collected sound is normal or abnormal.
  • the abnormal sound diagnosis apparatus of the present invention can be applied to devices other than elevators.
  • the abnormal sound diagnosis apparatus is implemented as software on a personal computer (hereinafter referred to as a PC)
  • the PC includes a USB terminal and a LAN terminal.
  • a microphone is connected to the USB terminal via an audio interface circuit, and a diagnosis target device is connected to the LAN terminal via a LAN cable.
  • the device to be diagnosed is configured to perform a predetermined driving operation according to a control instruction output from the PC.
  • the abnormal sound diagnosis apparatus 100 is not limited to being implemented as software, and can be changed as appropriate.
  • FIG. 1 is a block diagram illustrating a configuration of an abnormal sound diagnosis apparatus 100 according to the first embodiment.
  • FIG. 1A is a diagram illustrating functional blocks of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
  • The abnormal sound diagnosis apparatus comprises a sound collection unit 1, a waveform acquisition unit 2, a time frequency analysis unit 3, an intensity time series acquisition unit 4, a trajectory feature extraction unit 5, an identification parameter storage unit 6, an identification unit 7, and a determination unit 8.
  • the sound collection unit 1 is configured by a sound collector such as a microphone, for example, collects sound generated from the diagnosis target device, and outputs sound data 11 in synchronization with the operation of the diagnosis target device.
  • the sound collection unit 1 is arranged in the passenger car or outside the passenger car.
  • the waveform acquisition unit 2 includes, for example, an amplifier and an A / D converter, samples the waveform of the sound data 11 collected by the sound collection unit 1, and outputs the waveform data 12 converted into a digital signal.
  • The time frequency analysis unit 3 multiplies the waveform data 12 output from the waveform acquisition unit 2 by a time window and performs a fast Fourier transform (hereinafter, FFT) while shifting the time window in the time direction, thereby performing time-frequency analysis of the waveform data 12 and obtaining a time-frequency distribution 13.
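  • As an illustration of this time-frequency analysis step, the following is a minimal Python sketch assuming NumPy, a Hann window, a 1024-point frame, a 768-sample hop (16 ms at 48 kHz), and magnitude spectra; the function name, the window type, and the use of magnitudes are illustrative choices rather than details specified here.

```python
import numpy as np

def time_frequency_distribution(waveform, n_window=1024, hop=768):
    """Compute g(t, f): magnitude spectra of windowed frames.
    Rows are time frames t, columns are frequency bins f."""
    window = np.hanning(n_window)
    n_frames = 1 + (len(waveform) - n_window) // hop
    frames = np.stack([waveform[t * hop:t * hop + n_window] * window
                       for t in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape (T, n_window // 2 + 1)
```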
  • the intensity time series acquisition unit 4 obtains an intensity time series 14 indicating the intensity with respect to time and frequency from the time frequency distribution 13 output from the time frequency analysis unit 3.
  • the trajectory feature extraction unit 5 smoothes the intensity time series 14 output from the intensity time series acquisition unit 4 in the time direction, and extracts a trajectory vector 15 over the entire time axis.
  • The identification parameter storage unit 6 is a storage area for identification parameters learned in advance; it stores parameters for identifying whether the operating state of the device is normal or abnormal and, when the operating state is abnormal, for identifying the type of abnormality. Details of the learning of the identification parameters 16 stored in the identification parameter storage unit 6 will be described later.
  • The identification unit 7 collates the identification parameters 16 stored in the identification parameter storage unit 6 with the trajectory vector 15 extracted by the trajectory feature extraction unit 5, and acquires scores for a plurality of abnormality types. It is assumed that K abnormality types are defined, such as the normal operating state and abnormal operating states at specific locations.
  • The set of scores for the K abnormality types is hereinafter referred to as the K-dimensional score vector 17.
  • The determination unit 8 determines from the K-dimensional score vector 17 of the identification unit 7 whether the operating state of the device is normal or abnormal and, if abnormal, also determines the type of abnormality, and outputs the determination result 18.
  • FIG. 1B is a block diagram illustrating a hardware configuration of the abnormal sound diagnosis apparatus 100 according to the first embodiment, and includes a processor 100a and a memory 100b.
  • The sound collection unit 1, the waveform acquisition unit 2, the time frequency analysis unit 3, the intensity time series acquisition unit 4, the trajectory feature extraction unit 5, the identification unit 7, and the determination unit 8 are realized by the processor 100a executing a program stored in the memory 100b. The identification parameter storage unit 6 is held in the memory 100b.
  • FIG. 2 is an explanatory diagram showing the configuration of the identification unit 7 of the abnormal sound diagnosis apparatus 100 according to the first embodiment, and shows the configuration of the neural network in the identification unit 7.
  • The neural network shown in the example of FIG. 2 is a hierarchical network consisting of one input layer 71 and two hidden layers, a first hidden layer 72 and a second hidden layer 73.
  • The input layer 71, the first hidden layer 72, and the second hidden layer 73 are composed of units that simulate the function of synapses in a neural circuit.
  • the last hidden layer also serves as the output layer.
  • the second hidden layer 73 also serves as the output layer.
  • The number M of hidden layers may be any integer greater than or equal to one (M ≥ 1).
  • The input layer 71 has the same number of units as the number of dimensions of the trajectory vector 15 input from the trajectory feature extraction unit 5 (for example, L × B).
  • The second hidden layer 73, that is, the output layer, has K nonlinear units, equal to the number K of abnormality types.
  • the number of hidden layer units excluding the output layer is set to a predetermined number in view of the discrimination performance of the neural network.
  • U(m) represents the number of units in the m-th layer; the input layer has U(0) = L × B units and the output layer has U(M) = K units.
  • The weights and biases necessary for calculating the response of each hidden layer are supplied from the identification parameters 16 stored in the identification parameter storage unit 6.
  • The weights and biases supplied to the m-th hidden layer are denoted w(i, j, m-1) and c(j, m-1), respectively.
  • FIG. 3A is a diagram illustrating functional blocks of the identification parameter learning apparatus 200 according to the first embodiment.
  • The identification parameter learning apparatus 200 comprises a sound data generation unit 21, a sound database 22, a waveform acquisition unit 23, a time frequency analysis unit 24, an intensity time series acquisition unit 25, a trajectory feature extraction unit 26, a teacher vector creation unit 27, and an identification learning unit 28.
  • the sound data generation unit 21 collects sound data using a plurality of devices with different specifications and operations as reference devices, or generates sound data by computer simulation. In the example of the first embodiment, a plurality of elevators having different specifications and operations are reference devices.
  • the sound database 22 stores sound data 22a and abnormality type data 22b.
  • The sound data 22a consists of the sound data generated by the sound data generation unit 21 and sound data obtained by superimposing abnormal sounds on that generated sound data.
  • The abnormality type data 22b is the abnormality type of the device associated with the sound data 22a; specifically, it accumulates a label indicating whether the operating state of the device is normal or abnormal and, when the operating state is abnormal, a label indicating the type of abnormality.
  • FIG. 4 shows an example of the sound data 22a and the abnormality type data 22b stored in the sound database 22.
  • The sound data 22a is composed of a "serial number", an "individual name", and a "sound data file name".
  • The abnormality type data 22b is composed of an "abnormality type C(v)" corresponding to the above "serial number".
  • Types such as "normal", "top abnormality", and "middle floor abnormality" are associated with the abnormality type C(v), and K abnormality types are set in total, including "normal".
  • the waveform acquisition unit 23 samples the waveform of the sound data 22a accumulated in the sound database 22, and outputs the waveform data 31 converted into a digital signal.
  • The time frequency analysis unit 24, the intensity time series acquisition unit (parameter intensity time series acquisition unit) 25, and the trajectory feature extraction unit (parameter trajectory feature extraction unit) 26 operate in the same way as the time frequency analysis unit 3, the intensity time series acquisition unit 4, and the trajectory feature extraction unit 5 of the abnormal sound diagnosis apparatus 100 in FIG. 1, and output a time-frequency distribution 32, an intensity time series 33, and a trajectory vector 34, respectively.
  • the teacher vector creation unit 27 creates a teacher vector 35 using the abnormality type data 22b accumulated in the sound database 22.
  • the identification learning unit 28 creates learning data for learning the neural network.
  • the learning data of a neural network generally consists of input data and output data expected to be output from the neural network when the input data is given.
  • the input data is a trajectory vector 34 input from the trajectory feature extraction unit 26, and the output data is a teacher vector 35 input from the teacher vector creation unit 27.
  • the input data is V trajectory vectors 34 and the output data is V teacher vectors 35.
  • Let K be the number of abnormality types, let y(k, v) be the k-th element of the v-th of the V teacher vectors 35 created by the teacher vector creation unit 27, and let C(v) be the abnormality type of the v-th sound data. Then y(k, v) is given by the following equation (3) as a vector whose C(v)-th element is 1 and whose other elements are 0: y(k, v) = 1 if k = C(v), and y(k, v) = 0 otherwise ... (3)
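  • A direct rendering of equation (3) as code, assuming the abnormality types C(v) are 0-based indices (an indexing convention chosen here for illustration):

```python
import numpy as np

def teacher_vector(c_v, K):
    """Equation (3): the C(v)-th element is 1, all other elements are 0."""
    y = np.zeros(K)
    y[c_v] = 1.0
    return y

# teacher_vector(2, 5) -> array([0., 0., 1., 0., 0.])
```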
  • The identification learning unit 28 learns the neural network using the trajectory vectors 34 as the input data and the teacher vectors 35 as the output data obtained as described above, and stores the weights and biases obtained as a result of the learning in the identification parameter storage unit 6 as the identification parameters 36.
  • The weights and biases constituting the identification parameters 36 correspond to the weights w(i, j, m-1) and biases c(j, m-1) used when calculating the responses of the first hidden layer 72 and the second hidden layer 73 of the identification unit 7 described above.
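  • A minimal training sketch is shown below, using scikit-learn's MLPClassifier with logistic (sigmoid) units as a stand-in for the identification learning unit 28; the hidden-layer size and other hyperparameters are illustrative, and MLPClassifier appends its own softmax output layer, so the topology only approximates FIG. 2, where the last hidden layer doubles as the output layer.

```python
from sklearn.neural_network import MLPClassifier

def learn_identification_parameters(trajectory_vectors, abnormality_types):
    """trajectory_vectors: (V, L*B) array of trajectory vectors 34.
    abnormality_types: length-V array of labels C(v).
    Returns the learned weights and biases (identification parameters 36)."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                        max_iter=2000, random_state=0)
    clf.fit(trajectory_vectors, abnormality_types)
    # clf.coefs_ and clf.intercepts_ play the role of w(i, j, m-1) and c(j, m-1)
    return clf.coefs_, clf.intercepts_
```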
  • FIG. 3B is a block diagram showing a hardware configuration of the identification parameter learning apparatus 200 according to Embodiment 1, and includes a processor 200a and a memory 200b.
  • Each of these functional blocks is realized by the processor 200a executing a program stored in the memory 200b.
  • the sound database 22 is stored in the memory 200b.
  • FIGS. 5 and 6 are flowcharts showing the operation of the abnormal sound diagnosis apparatus 100 according to Embodiment 1
  • FIG. 5 shows the operation of the sound collection unit 1 and the waveform acquisition unit 2
  • FIG. 6 shows the operation of the time frequency analysis unit 3 and the subsequent processing units.
  • a device to be diagnosed by the abnormal sound diagnosis apparatus 100 is simply referred to as a device.
  • the abnormal sound diagnosis apparatus 100 detects the start of operation of the device (step ST1)
  • the sound collecting unit 1 collects sound generated from the device (step ST2).
  • The waveform acquisition unit 2 acquires and amplifies the sound data 11 collected in step ST2 and samples its waveform by A/D conversion (step ST3).
  • The waveform acquisition unit 2 converts the sampled waveform into 16-bit linear PCM (pulse code modulation) digital waveform data with a sampling frequency of 48 kHz (step ST4).
  • the abnormal sound diagnosis apparatus 100 determines whether or not the operation of the device is finished (step ST5). If the operation of the device has not ended (step ST5; NO), the process returns to step ST2 and the above-described process is repeated. On the other hand, when the operation of the device is completed (step ST5; YES), the waveform acquisition unit 2 connects the waveform data acquired in step ST4 and outputs it as a series of waveform data 12 (step ST6). This completes the sound collection and waveform data acquisition process. Next, proceeding to the flowchart of FIG. 6, an abnormal sound diagnosis process using the acquired waveform data 12 is performed.
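  • A minimal sketch of the concatenation in step ST6, assuming the per-interval captures arrive as raw little-endian 16-bit PCM byte buffers and that the later analysis expects floating-point samples; both assumptions go beyond what is stated here.

```python
import numpy as np

def concatenate_waveform(pcm_chunks):
    """Join the 16-bit linear PCM buffers captured while the device operates
    into one continuous waveform, scaled to [-1, 1] floats."""
    samples = np.concatenate([np.frombuffer(chunk, dtype=np.int16)
                              for chunk in pcm_chunks])
    return samples.astype(np.float32) / 32768.0
```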
  • the time-frequency analysis unit 3 acquires the waveform data 12 output from the waveform acquisition unit 2, and extracts a frame from the waveform data 12 while shifting a time window of, for example, 1024 points in the time direction at intervals of 16 milliseconds. Then, a time frequency distribution g (t, f), which is a frequency spectrum series, is obtained by FFT calculation for each frame to obtain a time frequency distribution 13 (step ST11).
  • t is a time index corresponding to the shift interval for shifting the time window
  • f is an index indicating the frequency of the result of the FFT operation.
  • The time t and the frequency f are integers satisfying 0 ≤ t < T and 0 ≤ f < F, respectively.
  • T is the number of frames in the time direction of the time-frequency distribution 13, and F is the number of frequency bins.
  • The intensity time series acquisition unit 4 defines, in the time-frequency distribution 13 obtained in step ST11, bands one octave wide centered, for example, on the five frequencies 0.5 kHz, 1 kHz, 2 kHz, 4 kHz, and 8 kHz.
  • It takes the sum of the frequency components contained in each of the five frequency bands and thereby obtains the intensity time series 14 of each band (step ST12).
  • The intensity G(t, b) of band b is given by the following equation (4): G(t, b) = Σ_{f ∈ Ω(b)} g(t, f) ... (4)
  • Ω(b) represents the set of frequencies f over which the sum is taken in the time-frequency distribution g(t, f) for the band b.
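  • The band summation of equation (4) can be sketched as follows, assuming each one-octave band Ω(b) spans fc/√2 to fc·√2 around its center frequency fc and that g is the (T × F) array produced by the earlier time-frequency sketch; the exact band edges are an assumption.

```python
import numpy as np

def band_intensity_time_series(g, sample_rate=48000, n_window=1024,
                               centers_hz=(500, 1000, 2000, 4000, 8000)):
    """G(t, b) = sum of g(t, f) over f in Omega(b)  (equation (4))."""
    freqs = np.fft.rfftfreq(n_window, d=1.0 / sample_rate)
    bands = []
    for fc in centers_hz:
        lo, hi = fc / np.sqrt(2.0), fc * np.sqrt(2.0)
        omega = (freqs >= lo) & (freqs < hi)        # the frequency set Omega(b)
        bands.append(g[:, omega].sum(axis=1))
    return np.stack(bands, axis=1)                  # shape (T, B), here B = 5
```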
  • The trajectory feature extraction unit 5 smooths the intensity time series 14 in the time direction for each band (step ST13), obtains the smoothed intensity at points that divide the entire time axis into L equal parts, and creates an L-dimensional intensity vector (step ST14).
  • The intensity time series 14 is smoothed in the time direction in each of the five bands.
  • The created L-dimensional intensity vectors are normalized in intensity (step ST15), and the normalized L-dimensional intensity vectors of the bands are connected to create an L × B-dimensional trajectory vector 15 (step ST16).
  • smooth_t(x(t)) is a function that outputs a new time series obtained by smoothing the series x(t) in the direction of the subscript t.
  • In equation (6), τ(l) is a real-valued function representing the interpolation position with respect to the subscript t of the smoothed series G̃(t, b), and w(l) is a function that gives the weighting coefficient for the interpolation; they are given by equations (7) and (8).
  • Int(x) in equation (8) is a function that returns the integer part of the argument x.
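  • Steps ST13 to ST16 can be sketched as follows; the moving-average smoother standing in for smooth_t, the value of L, and the L2 normalisation are illustrative assumptions, while np.interp performs the linear interpolation described by equations (6) to (8).

```python
import numpy as np

def trajectory_vector(G, L=32, smooth_len=9):
    """Smooth each band in time (ST13), sample the smoothed trajectory at L
    equally spaced points (ST14), normalise (ST15), and concatenate the B
    bands into an L*B-dimensional trajectory vector 15 (ST16)."""
    T, B = G.shape
    kernel = np.ones(smooth_len) / smooth_len
    parts = []
    for b in range(B):
        smoothed = np.convolve(G[:, b], kernel, mode="same")       # smooth_t, eq. (5)
        tau = np.linspace(0.0, T - 1.0, L)                         # interpolation positions
        sampled = np.interp(tau, np.arange(T), smoothed)           # eqs. (6)-(8)
        parts.append(sampled / (np.linalg.norm(sampled) + 1e-12))  # normalisation
    return np.concatenate(parts)                                   # L*B dimensions
```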
  • The identification unit 7 inputs the trajectory vector 15 received from the trajectory feature extraction unit 5 to the input layer 71 of the neural network and, using the identification parameters stored in the identification parameter storage unit 6, computes the activations up to the output units, generating the K-dimensional score vector 17 (step ST17).
  • The processing in step ST17 will be described with reference to the specific configuration example of the identification unit 7 in FIG. 2.
  • The i-th element of the trajectory vector 15 is copied to the i-th unit of the input layer. If the value of the i-th unit of the input layer is x(i, 0), then x(i, 0) is given by the following equation (10):
  • x(i, 0) = ξ(i) ... (10)
  • ξ(i) represents the value of the i-th element of the trajectory vector 15.
  • the output of each unit is calculated in order from the first hidden layer 72 to the second hidden layer 73.
  • The output of each unit is obtained by applying a weight to the outputs of all units in the previous layer, taking the sum, subtracting the bias, and applying a nonlinear transformation with a sigmoid function.
  • x(j, m) is calculated by the following equation (11): x(j, m) = σ( Σ_i w(i, j, m-1) x(i, m-1) - c(j, m-1) ) ... (11)
  • σ(x) is a sigmoid function with a nonlinear input/output characteristic exhibiting a soft threshold, given by the following equation (12): σ(x) = 1 / (1 + e^(-x)) ... (12)
  • When m = 1, x(i, 0) is required; as shown in equation (10) above, it is equal to the i-th element ξ(i) of the trajectory vector 15.
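  • The forward pass of equations (10) to (12) can be written directly as below; packaging the weights and biases as lists of per-layer arrays is an illustrative arrangement of w(i, j, m-1) and c(j, m-1). A call such as score = score_vector(trajectory, weights, biases) yields the vector that the determination unit 8 examines in step ST18.

```python
import numpy as np

def sigmoid(x):
    """Equation (12): sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def score_vector(xi, weights, biases):
    """Equations (10)-(11): propagate the trajectory vector 15 through the
    layers; the outputs of the last layer form the K-dimensional score vector 17.
    weights[m-1] has shape (U(m-1), U(m)); biases[m-1] has shape (U(m),)."""
    x = np.asarray(xi, dtype=float)        # x(i, 0) = xi(i), equation (10)
    for w, c in zip(weights, biases):
        x = sigmoid(x @ w - c)             # equation (11)
    return x
```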
  • The determination unit 8 compares the elements of the K-dimensional score vector 17 generated in step ST17, determines the possible abnormality type from the index of the largest element (step ST18), outputs the determination result (step ST19), and ends the process. If the possible abnormality type is k*, k* is given by the following equation (15): k* = argmax_k s(k) ... (15). Although the configuration described here outputs the single element of the K-dimensional score vector 17 with the largest score, it may be configured to output a plurality of elements together with their scores.
  • FIG. 7 is a diagram illustrating an example of an abnormality type and a K-dimensional score vector referred to by the determination unit 8 of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
  • The K elements of the K-dimensional score vector are each associated with one of the K "abnormality types".
  • The values of the K scores constituting the K-dimensional score vector sum to 1.
  • In the example of FIG. 7, the score for "top abnormality" is the largest, so the determination unit 8 determines that the possible abnormality type is "top abnormality".
  • FIG. 8 is an explanatory diagram showing the effect of abnormal sound diagnosis performed by the abnormal sound diagnosis apparatus 100 according to the first embodiment.
  • FIG. 9 shows a result of abnormal sound diagnosis by a conventional abnormal sound diagnosis apparatus.
  • the traveling section 301 of the car 300 is divided, and the signal intensity of the sound generated at normal time for each divided section is stored as a reference value.
  • the travel section is divided into six, and the first reference value, the second reference value,..., The sixth reference value are acquired and stored.
  • An abnormality was detected in each section by comparing the stored reference values with the intensity time series of the sound data acquired at the time of diagnosis. Since the signal intensity of the normal sound in each section differs depending on the use and operating environment of each elevator, the reference values acquired for one elevator cannot be applied to the abnormal sound diagnosis of a different elevator, or, even if they can be applied, the accuracy of the abnormal sound diagnosis is lowered. For this reason, the conventional abnormal sound diagnosis apparatus needs to perform a learning operation in advance for each elevator and store its reference values.
  • FIG. 9B shows the result of comparing the reference value created for one elevator with the sound signal intensity at the time of diagnosis of another elevator.
  • No matter how the reference value 305 is set, there is the problem that the normal and abnormal operating states of the device cannot be clearly separated based on the sound signal intensity at the time of diagnosis.
  • The effect of the abnormal sound diagnosis performed by the abnormal sound diagnosis apparatus 100 according to Embodiment 1 will now be described with reference to FIG. 8.
  • In the abnormal sound diagnosis apparatus 100 according to Embodiment 1, the sound generated while the car 300 reciprocates between the lowermost floor and the uppermost floor is collected, the obtained sound data is subjected to time-frequency analysis to obtain an intensity time series, and the trajectory over the entire length of the intensity time series in the time direction is converted into a vector as a whole to extract a trajectory vector.
  • FIG. 8A shows the result of plotting the positions in space of the trajectory vector 306 and the trajectory vector 307 when they are input to the identification unit 7.
  • FIG. 8B is a diagram showing the arrangement of the vectors in the space spanned by a first feature axis (the principal axis) and a second feature axis (an axis orthogonal to the principal axis), obtained by, for example, principal component analysis on the set of vectors of normal individuals and vectors of abnormal individuals.
  • the principal component analysis is a process for displaying the mutual positional relationship of vectors in a multidimensional space, and is not a process constituting the present invention.
  • the first feature axis and the second feature axis are not calculated by the configuration of the present invention, but are described to show that the trajectory vectors are classified in space.
  • In this space, the vectors form a group 308 indicating normal devices and a group 309 indicating abnormal devices.
  • a hyperplane (straight line) orthogonal to a straight line connecting the center of gravity of the group 308 and the center of gravity of the group 309 is obtained as the boundary 310.
  • Although FIG. 8B shows an example in which a straight line is obtained as the boundary 310, a hypersurface (curve) of complicated shape is assumed to be obtained in the actual diagnosis processing. In this way, general characteristics that appear in the intensity time series can be captured regardless of the elevator specifications and operating environment; there is no need to learn reference values for each individual in advance, and a robust diagnosis can be made despite differences in elevator specifications and operating environments.
  • As described above, according to Embodiment 1, the apparatus is configured to include the sound collection unit 1 that collects the sound generated from the device, the waveform acquisition unit 2 that acquires the waveform data obtained by sampling and converting the waveform of the collected sound data, the time frequency analysis unit 3 that performs time-frequency analysis of the acquired waveform data, the intensity time series acquisition unit 4 that obtains from the time-frequency distribution an intensity time series indicating the intensity with respect to time and frequency, the trajectory feature extraction unit 5 that smooths the acquired intensity time series in the time direction and extracts a trajectory vector over the entire time axis, and the identification parameter storage unit 6 that stores the identification parameters learned using such trajectory vectors as input data and the abnormality types as output data.
  • In the description above, the sound collection unit 1 was configured with a single sound collector arranged in the diagnosis target device; however, the sound collection unit 1 may include a plurality of sound collectors arranged at a plurality of locations of the diagnosis target device.
  • multi-channel sound collection is performed simultaneously with the operation of the diagnosis target device, and multi-channel sound data 11 is obtained.
  • the waveform acquisition unit 2, the time frequency analysis unit 3, and the intensity time series acquisition unit 4 acquire the waveform data 12, the time frequency distribution 13, and the intensity time series 14 for the multichannel signals, respectively.
  • the trajectory feature extraction unit 5 acquires a multi-channel intensity vector from the multi-channel intensity time series 14 input from the intensity time-series acquisition unit 4. Further, the intensity vectors of the respective channels are connected in the time axis direction.
  • FIG. 10 is an explanatory diagram illustrating connection of multi-channel intensity vectors in the trajectory feature extraction unit 5 of the abnormal sound diagnosis apparatus 100 according to the first embodiment.
  • FIG. 10 shows a case where the intensity vectors of three channels are connected: the first channel vector 15a, the second channel vector 15b, and the third channel vector 15c are connected in the time-axis direction of the vector to generate an L × B × 3-dimensional trajectory vector 15 (the "× 3" results from connecting the intensity vectors of the three channels). Since the intermediate layers of the neural network connect the channels, the synchrony between the channels can be learned.
  • In the description above, the dimension of the trajectory vector was L × B; here it should be read as L × B × 3.
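  • Reusing the single-channel sketches above, the multi-channel trajectory vector could be assembled as follows; the per-channel processing and the channel ordering are assumptions consistent with FIG. 10.

```python
import numpy as np

def multichannel_trajectory_vector(channel_waveforms):
    """Connect the per-channel L*B-dimensional vectors end to end, giving an
    L*B*C-dimensional trajectory vector (C = 3 in the FIG. 10 example)."""
    return np.concatenate([
        trajectory_vector(band_intensity_time_series(
            time_frequency_distribution(waveform)))
        for waveform in channel_waveforms])
```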
  • FIG. 11 is an explanatory diagram showing an effect when an abnormal sound diagnosis is performed based on a trajectory vector obtained by connecting multi-channel intensity vectors.
  • The intensity time series 311, 312, and 313 indicate the intensity time series obtained in the first frequency band, the second frequency band, and the third frequency band, respectively.
  • The intensity time series 311, 312, and 313 are shown as an L × 1 × 3-dimensional trajectory vector 314 and trajectory vector 315 connected in the time-axis direction.
  • the trajectory vector 314 indicates a vector when the abnormality type is “1: abnormal”, and the trajectory vector 315 indicates a vector when the abnormal type is “0: normal”.
  • Embodiment 2. In Embodiment 1, the case where the identification unit 7 is configured as a neural network was described; in Embodiment 2, a case where a support vector machine (hereinafter, SVM) is applied as the identification unit is described. Since the overall configuration of the abnormal sound diagnosis apparatus 100 of Embodiment 2 is the same as that of Embodiment 1, the description of the block diagram is omitted, and the identification unit, whose configuration differs, is described in detail below.
  • FIG. 12 is a diagram illustrating a configuration of the identification unit 7a of the abnormal sound diagnosis apparatus 100 according to the second embodiment.
  • The identification unit 7a has (K-1)K/2 SVMs in total, where K is the number of abnormality types.
  • Each SVM is trained to classify and discriminate between the vectors of two abnormality types among the K abnormality types including normal.
  • The SVM that discriminates between abnormality types i and j is denoted SVM[i, j] (0 ≤ i < j < K).
  • FIG. 13 is a flowchart showing the operation of the abnormal sound diagnosis apparatus according to the second embodiment.
  • the same steps as those in the abnormal sound diagnosis apparatus according to the first embodiment are denoted by the same reference numerals as those used in FIG. 6, and the description thereof is omitted or simplified.
  • the operations of the sound collection unit 1 and the waveform acquisition unit 2 are the same as those in the flowchart shown in FIG.
  • The identification unit 7a inputs the trajectory vector 15 to each SVM and uses the identification parameters stored in the identification parameter storage unit 6.
  • The output value y(ξ) of the discriminant function of each SVM is calculated based on the following equation (16) (step ST21).
  • k(x1, x2) is the inner product <φ(x1), φ(x2)> between the mapping φ(x1) of the vector x1 into a high-dimensional space and the mapping φ(x2) of the vector x2 into that space (φ(x) is a nonlinear function of the vector x that cannot be written in an explicit form).
  • As the kernel function, for example, a Gaussian kernel given by the following equation (17) can be used: k(x1, x2) = exp(-‖x1 - x2‖² / (2σ²)) ... (17), where σ is the Gaussian kernel parameter.
  • The identification unit 7a calculates the classification output of each class from the output values of the discriminant functions of the SVMs computed in step ST21, obtains the score vector values s(k) (k = 1 to K) corresponding to the abnormality types, and outputs the calculated s(k) to the determination unit 8 as the K-dimensional score vector 17 (step ST22).
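  • A minimal sketch of the pairwise-SVM identification unit 7a, using scikit-learn's SVC with an RBF kernel for each pair (the Gaussian kernel of equation (17), with gamma = 1/(2σ²)); the simple voting used to build the score vector is only one possible aggregation rule, since the equation-level details of step ST22 are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def learn_pairwise_svms(X, c, K, sigma=1.0):
    """Train one SVM[i, j] per pair of abnormality types (0 <= i < j < K),
    as in FIG. 12. X: (V, L*B) trajectory vectors, c: length-V labels."""
    X, c = np.asarray(X), np.asarray(c)
    gamma = 1.0 / (2.0 * sigma ** 2)     # Gaussian kernel, equation (17)
    svms = {}
    for i in range(K):
        for j in range(i + 1, K):
            mask = (c == i) | (c == j)
            svms[(i, j)] = SVC(kernel="rbf", gamma=gamma).fit(X[mask], c[mask])
    return svms

def svm_score_vector(svms, xi, K):
    """Aggregate the pairwise decisions into the K-dimensional score vector 17."""
    votes = np.zeros(K)
    for (i, j), svm in svms.items():
        votes[int(svm.predict(xi.reshape(1, -1))[0])] += 1
    return votes / votes.sum()
```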
  • The determination unit 8 compares the elements of the K-dimensional score vector 17 generated in step ST22, determines the possible abnormality type from the index of the largest element (step ST18), outputs the determination result (step ST19), and ends the process.
  • As described above, since the trajectory feature extraction unit 5 creates the trajectory vector 15 over the entire length of the intensity time series 14 in the time direction, it is possible to capture generalized features that appear in the intensity time series and do not depend on device specifications or operating environments. As a result, there is no need to learn diagnostic criteria for each individual device, and robust diagnosis can be performed despite differences in device specifications and operating environments. An abnormal sound diagnosis apparatus in which the degradation of diagnostic accuracy caused by differences between devices is suppressed can thus be provided.
  • In the embodiments above, the trajectory vector 15 output from the trajectory feature extraction unit 5 represents the trajectory features over the entire length of the intensity time series 14 in the time direction as an L-dimensional vector obtained by linear interpolation.
  • Alternatively, the trajectory of the intensity time series 14 over its entire length in the time direction may be Fourier-transformed and an L-dimensional vector formed from the low-order Fourier coefficients.
  • A feature compressed by principal component analysis may also be output as an L-dimensional vector.
  • The conversion without loss described above means that the vector indicating the features over the entire length of the intensity time series 14 in the time direction is used as the trajectory vector as it is, without further processing.
  • The conversion that allows loss is a process that reduces the number of dimensions by, for example, multiplying the vector indicating the features over the entire length in the time direction by a matrix obtained by principal component analysis, and uses the compressed feature as the trajectory vector. Part of the information contained in the raw feature vector is considered to be lost by this reduction of the number of dimensions.
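  • As an example of the Fourier-based alternative mentioned above, the low-order coefficients of each band's trajectory could be used instead of interpolated samples; keeping L/2 real and L/2 imaginary parts per band and normalising them are illustrative choices.

```python
import numpy as np

def fourier_trajectory_vector(G, L=32):
    """For each band, Fourier-transform the trajectory over its whole length,
    keep only the low-order coefficients, and concatenate the bands."""
    parts = []
    for b in range(G.shape[1]):
        spec = np.fft.rfft(G[:, b])
        coeffs = np.concatenate([spec.real[:L // 2], spec.imag[:L // 2]])
        parts.append(coeffs / (np.linalg.norm(coeffs) + 1e-12))
    return np.concatenate(parts)
```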
  • In the embodiments above, the trajectory over the entire reciprocating operation of the working parts was converted into a vector to extract the trajectory vector; however, the reciprocating operation may be divided into one-way sections such as an ascending section and a descending section, the trajectory over the entire time length of each divided section may be converted into a vector to extract a trajectory vector, and an identification unit 7 may be prepared for each divided section to perform the identification processing.
  • the sections to be divided are not limited to the ascending section and the descending section.
  • the ascending section may be further divided into smaller sections such as a lower section, a middle section, and a higher section.
  • Since the abnormal sound diagnosis apparatus according to the present invention can diagnose abnormal sound with high accuracy despite differences in equipment specifications and operation, it can be applied to devices for which a reference value for determining abnormal sound cannot be created for each individual, and it is suitable for diagnosing abnormal sound in such devices.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)
  • Testing And Monitoring For Control Systems (AREA)
PCT/JP2015/052991 2015-02-03 2015-02-03 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム WO2016125256A1 (ja)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201580075167.7A CN107209509B (zh) 2015-02-03 2015-02-03 异常声音诊断装置、异常声音诊断系统、异常声音诊断方法以及异常声音诊断程序
DE112015006099.5T DE112015006099T5 (de) 2015-02-03 2015-02-03 Anormales-Geräusch-Diagnosegerät, Anormales-Geräusch-Diagnosesystem, Anormales-Geräusch-Diagnoseverfahren und Anormales-Geräusch-Diagnoseprogramm
PCT/JP2015/052991 WO2016125256A1 (ja) 2015-02-03 2015-02-03 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム
JP2016572982A JP6250198B2 (ja) 2015-02-03 2015-02-03 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム
KR1020177023765A KR101962558B1 (ko) 2015-02-03 2015-02-03 이상음 진단 장치, 이상음 진단 시스템, 이상음 진단 방법 및 이상음 진단 프로그램

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/052991 WO2016125256A1 (ja) 2015-02-03 2015-02-03 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム

Publications (1)

Publication Number Publication Date
WO2016125256A1 true WO2016125256A1 (ja) 2016-08-11

Family

ID=56563621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/052991 WO2016125256A1 (ja) 2015-02-03 2015-02-03 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム

Country Status (5)

Country Link
JP (1) JP6250198B2 (zh)
KR (1) KR101962558B1 (zh)
CN (1) CN107209509B (zh)
DE (1) DE112015006099T5 (zh)
WO (1) WO2016125256A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019160143A (ja) * 2018-03-16 2019-09-19 三菱重工業株式会社 サーボ機構のパラメータ推定装置及びパラメータ推定方法並びにパラメータ推定プログラム
WO2020158398A1 (ja) * 2019-01-30 2020-08-06 日本電信電話株式会社 音生成装置、データ生成装置、異常度算出装置、指標値算出装置、およびプログラム
JP2021032714A (ja) * 2019-08-26 2021-03-01 株式会社日立ビルシステム 機械設備の検査装置
CN112770012A (zh) * 2019-11-01 2021-05-07 中移物联网有限公司 信息提示方法、设备、系统以及存储介质
CN112960506A (zh) * 2021-03-29 2021-06-15 浙江新再灵科技股份有限公司 基于音频特征的电梯告警音检测系统
CN113447274A (zh) * 2020-03-24 2021-09-28 本田技研工业株式会社 异常声音判定装置以及异常声音判定方法
JP2021151902A (ja) * 2020-03-24 2021-09-30 株式会社日立ビルシステム 昇降機の検査装置および検査方法
KR20210122839A (ko) 2019-06-06 2021-10-12 미쓰비시 덴키 빌딩 테크노 서비스 가부시키 가이샤 분석 장치
JP7367226B2 (ja) 2019-10-17 2023-10-23 三菱電機株式会社 音波分離ニューラルネットワークを用いた製造自動化
JP7492443B2 (ja) 2020-11-20 2024-05-29 株式会社日立ビルシステム パターン分類装置、昇降機音診断システム、及びパターン分類方法昇降機音の診断装置、及び昇降機音診断方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6777686B2 (ja) * 2018-05-29 2020-10-28 ファナック株式会社 診断装置、診断方法及び診断プログラム
CN112262354A (zh) * 2018-06-15 2021-01-22 三菱电机株式会社 诊断装置、诊断方法及程序
JP7126256B2 (ja) * 2018-10-30 2022-08-26 国立研究開発法人宇宙航空研究開発機構 異常診断装置、異常診断方法、及びプログラム
KR102240775B1 (ko) * 2019-10-08 2021-04-16 한국콘베어공업주식회사 딥러닝 기반 소음 데이터를 이용한 전동 설비의 고장 판별 장치 및 방법
CN112183647A (zh) * 2020-09-30 2021-01-05 国网山西省电力公司大同供电公司 一种基于深度学习的变电站设备声音故障检测及定位方法
CN114486254A (zh) * 2022-02-09 2022-05-13 青岛迈金智能科技股份有限公司 一种基于时/频双域分析的自行车轴承检测方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012166935A (ja) * 2011-02-16 2012-09-06 Mitsubishi Electric Building Techno Service Co Ltd エレベータの異常音検出装置
JP2013200143A (ja) * 2012-03-23 2013-10-03 Mitsubishi Electric Corp 異常音診断装置および異常音診断システム
JP2014105075A (ja) * 2012-11-28 2014-06-09 Mitsubishi Electric Corp 故障個所推定装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003334679A (ja) * 2002-05-16 2003-11-25 Mitsubishi Electric Corp レーザ溶接の診断装置
CN101753992A (zh) * 2008-12-17 2010-06-23 深圳市先进智能技术研究所 一种多模态智能监控系统和方法
CN102348101A (zh) * 2010-07-30 2012-02-08 深圳市先进智能技术研究所 一种考场智能监控系统和方法
JP5783808B2 (ja) * 2011-06-02 2015-09-24 三菱電機株式会社 異常音診断装置
JP5930789B2 (ja) * 2012-03-23 2016-06-08 三菱電機株式会社 異常音診断装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012166935A (ja) * 2011-02-16 2012-09-06 Mitsubishi Electric Building Techno Service Co Ltd エレベータの異常音検出装置
JP2013200143A (ja) * 2012-03-23 2013-10-03 Mitsubishi Electric Corp 異常音診断装置および異常音診断システム
JP2014105075A (ja) * 2012-11-28 2014-06-09 Mitsubishi Electric Corp 故障個所推定装置

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019160143A (ja) * 2018-03-16 2019-09-19 三菱重工業株式会社 サーボ機構のパラメータ推定装置及びパラメータ推定方法並びにパラメータ推定プログラム
WO2020158398A1 (ja) * 2019-01-30 2020-08-06 日本電信電話株式会社 音生成装置、データ生成装置、異常度算出装置、指標値算出装置、およびプログラム
KR20210122839A (ko) 2019-06-06 2021-10-12 미쓰비시 덴키 빌딩 테크노 서비스 가부시키 가이샤 분석 장치
CN113767267A (zh) * 2019-06-06 2021-12-07 三菱电机大楼技术服务株式会社 分析装置
JP2021032714A (ja) * 2019-08-26 2021-03-01 株式会社日立ビルシステム 機械設備の検査装置
JP7105745B2 (ja) 2019-08-26 2022-07-25 株式会社日立ビルシステム 機械設備の検査装置
JP7367226B2 (ja) 2019-10-17 2023-10-23 三菱電機株式会社 音波分離ニューラルネットワークを用いた製造自動化
CN112770012A (zh) * 2019-11-01 2021-05-07 中移物联网有限公司 信息提示方法、设备、系统以及存储介质
JP2021151902A (ja) * 2020-03-24 2021-09-30 株式会社日立ビルシステム 昇降機の検査装置および検査方法
CN113447274A (zh) * 2020-03-24 2021-09-28 本田技研工业株式会社 异常声音判定装置以及异常声音判定方法
JP7142662B2 (ja) 2020-03-24 2022-09-27 株式会社日立ビルシステム 昇降機の検査装置および検査方法
CN113447274B (zh) * 2020-03-24 2023-08-25 本田技研工业株式会社 异常声音判定装置以及异常声音判定方法
JP7492443B2 (ja) 2020-11-20 2024-05-29 株式会社日立ビルシステム パターン分類装置、昇降機音診断システム、及びパターン分類方法昇降機音の診断装置、及び昇降機音診断方法
CN112960506A (zh) * 2021-03-29 2021-06-15 浙江新再灵科技股份有限公司 基于音频特征的电梯告警音检测系统

Also Published As

Publication number Publication date
JPWO2016125256A1 (ja) 2017-08-03
DE112015006099T5 (de) 2017-11-30
CN107209509B (zh) 2019-05-28
CN107209509A (zh) 2017-09-26
KR101962558B1 (ko) 2019-03-26
KR20170108085A (ko) 2017-09-26
JP6250198B2 (ja) 2017-12-20

Similar Documents

Publication Publication Date Title
JP6250198B2 (ja) 異常音診断装置、異常音診断システム、異常音診断方法および異常音診断プログラム
CN108319962B (zh) 一种基于卷积神经网络的刀具磨损监测方法
Dhar et al. Cross-wavelet assisted convolution neural network (AlexNet) approach for phonocardiogram signals classification
Khan et al. Automatic heart sound classification from segmented/unsegmented phonocardiogram signals using time and frequency features
CN105841961A (zh) 一种基于Morlet小波变换和卷积神经网络的轴承故障诊断方法
CN108291837B (zh) 劣化部位估计装置、劣化部位估计方法以及移动体的诊断系统
CN112036467B (zh) 基于多尺度注意力神经网络的异常心音识别方法及装置
JP6828807B2 (ja) データ解析装置、データ解析方法およびデータ解析プログラム
CN111956208B (zh) 一种基于超轻量级卷积神经网络的ecg信号分类方法
CN113855038B (zh) 基于多模型集成的心电信号危急值的预测方法及装置
Islam et al. Motor bearing fault diagnosis using deep convolutional neural networks with 2d analysis of vibration signal
CN107301409A (zh) 基于Wrapper特征选择Bagging学习处理心电图的系统及方法
CN111476339A (zh) 滚动轴承故障特征提取方法、智能诊断方法及系统
Gupta et al. Segmentation and classification of heart sounds
CN114564990A (zh) 一种基于多通道反馈胶囊网络的脑电信号分类方法
CN112257741A (zh) 一种基于复数神经网络的生成性对抗虚假图片的检测方法
CN115530788A (zh) 基于自注意力机制的心律失常分类方法
CN108647584A (zh) 基于稀疏表示和神经网络的心律不齐识别分类方法
CN113627391B (zh) 一种考虑个体差异的跨模式脑电信号识别方法
KR102404498B1 (ko) 적응적 시간-주파수 표현 기반 합성곱 신경망을 활용한 산업용 기어박스 고장진단 장치 및 방법
CN116864140A (zh) 一种心内科术后护理监测数据处理方法及其系统
CN112336369B (zh) 一种多通道心音信号的冠心病风险指数评估系统
CN116644273A (zh) 基于可释性乘法卷积网络的故障诊断方法及系统
CN114224354B (zh) 心律失常分类方法、装置及可读存储介质
CN113639985B (zh) 一种基于优化故障特征频谱的机械故障诊断与状态监测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15881073

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016572982

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 112015006099

Country of ref document: DE

ENP Entry into the national phase

Ref document number: 20177023765

Country of ref document: KR

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 15881073

Country of ref document: EP

Kind code of ref document: A1