WO2021036634A1 - Fault detection method and related products (故障检测方法及相关产品) - Google Patents

Fault detection method and related products (故障检测方法及相关产品)

Info

Publication number
WO2021036634A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter set
video sequence
vibration parameter
target
information
Prior art date
Application number
PCT/CN2020/104821
Other languages
English (en)
French (fr)
Inventor
高风波
Original Assignee
深圳市豪视智能科技有限公司
Application filed by 深圳市豪视智能科技有限公司
Publication of WO2021036634A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details

Definitions

  • This application relates to the technical field of video surveillance, and specifically relates to a fault detection method and related products.
  • All mechanical and moving systems produce various vibrations; some reflect the normal motion state of the system, while others reflect abnormal states (for example, component failure or unbalanced shaft couplings). Vibration detection is therefore an important part of predictive maintenance of system equipment.
  • Most vibration detection currently relies on accelerometers. Although accurate and reliable, an accelerometer requires a long preparation and installation time, must be in physical contact with the system under test during the measurement (and therefore changes the vibration response of the system under test), and can only measure a limited number of discrete points. The problem of how to improve the efficiency of fault detection thus needs to be solved urgently.
  • the embodiments of the present application provide a fault detection method and related products, which can improve the efficiency of fault detection.
  • The first aspect of the embodiments of the present application provides a fault detection method, including: acquiring a first video sequence for a target object, where the first video sequence is an RGB video image; extracting at least one vibration parameter set corresponding to the target object according to the first video sequence; and determining target failure information of the target object according to the at least one vibration parameter set and displaying the target failure information on a display interface.
  • The second aspect of the embodiments of the present application provides a fault detection device, including:
  • an acquisition unit configured to acquire a first video sequence for a target object, where the first video sequence is an RGB video image;
  • an extraction unit configured to extract at least one vibration parameter set corresponding to the target object according to the first video sequence; and
  • a display unit configured to determine target failure information of the target object according to the at least one vibration parameter set, and display the target failure information on a display interface.
  • In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiments of the present application.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
  • In a fifth aspect, the embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
  • Through the fault detection method and related products of the embodiments of the present application, a first video sequence for a target object is obtained, where the first video sequence is an RGB video image; at least one vibration parameter set corresponding to the target object is extracted according to the first video sequence; and target failure information of the target object is determined according to the at least one vibration parameter set and displayed on a display interface. In this way, vibration parameters can be extracted from the video image and failure information can be analyzed from the vibration parameters, so that failures can be identified quickly and the efficiency of fault detection is improved.
  • FIG. 1A is a schematic flowchart of an embodiment of a fault detection method provided by an embodiment of the present application
  • FIG. 1B is a schematic diagram of an application scenario demonstration provided by an embodiment of the present application.
  • FIG. 1C is a schematic diagram of a motion amplification method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an embodiment of another fault detection method provided by an embodiment of the present application.
  • FIG. 3A is a schematic structural diagram of an embodiment of a fault detection device provided by an embodiment of the present application.
  • FIG. 3B is a schematic structural diagram of an embodiment of another fault detection device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an embodiment of an electronic device provided by an embodiment of the present application.
  • The electronic devices described in the embodiments of this application may include smart phones (such as Android phones, iOS phones, Windows Phone phones, etc.), tablet computers, handheld computers, notebook computers, video matrices, monitoring platforms, mobile Internet devices (MID), wearable devices, and so on. The foregoing is only an example, not an exhaustive list, and includes but is not limited to the foregoing devices; the electronic device may also be a server.
  • FIG. 1A is a schematic flowchart of an embodiment of a fault detection method provided by an embodiment of this application.
  • the fault detection method described in this embodiment includes the following steps:
  • The target object may be one of the following: mechanical equipment (for example, a numerically controlled machine tool), a means of transportation, a communication device, medical equipment, a building, a person, an animal, and so on, which is not limited here.
  • the first video sequence of the target object within a period of time can be acquired, and the first video sequence is an RGB video image.
  • the electronic device may include at least one camera, and the camera may be at least one of the following: an infrared camera or a visible light camera.
  • The visible light camera may be an ordinary camera or a wide-angle camera, etc., which is not limited here.
  • the electronic device may also include an ultrasonic sensor, and the first video sequence of the target object may be acquired through the ultrasonic sensor.
  • the first video sequence may be a panoramic video image for the entire target object, or a video image of a designated part of the target object, or an internal operation image of the target object.
  • the embodiment of this application can be applied to the scene shown in FIG. 1B.
  • The first video sequence can be obtained by a camera shooting the target object; the electronic device then analyzes the video sequence to obtain the vibration parameters and displays the vibration parameters on the display interface.
  • The vibration parameter set may include at least one of the following vibration parameters: vibration frequency, vibration amplitude, vibration count, vibration waveform diagram, vibration period, vibration range, and so on, which is not limited here.
  • Optionally, the above step 102, extracting at least one vibration parameter set corresponding to the target object according to the first video sequence, may include the following steps: 21, performing color space conversion on the first video sequence to obtain the converted first video sequence; 22, separating the converted first video sequence to obtain luminance component information and chrominance component information; 23, performing Fourier transform on the luminance component information to obtain frequency domain luminance component information; 24, performing motion amplification processing on the frequency domain luminance component information to obtain the amplified frequency domain luminance component; 25, performing inverse Fourier transform on the amplified frequency domain luminance component to obtain time domain luminance component information; 26, synthesizing the time domain luminance information and the chrominance component information to obtain a synthesized video sequence; 27, performing the inverse transform corresponding to the color space conversion on the synthesized video sequence to obtain a second video sequence; and 28, determining at least one vibration parameter set corresponding to the target object according to the second video sequence.
  • the aforementioned color space conversion may include any of the following: RGB to YCbCr color space conversion, RGB to YIQ color space conversion, RGB to HIS color space conversion, RGB to YUV color space conversion, which is not limited here.
  • In specific implementation, the first video sequence can be color space converted to obtain the converted first video sequence, which makes it convenient to extract the luminance component; that is, the converted first video sequence can be separated to obtain luminance component information and chrominance component information. For example, a camera shoots a piece of video containing the vibrating object under test, and the sequence frames of the video are converted from the RGB color space to the YIQ color space to separate the luminance component information and the chrominance component information of the video frames. The subsequent steps operate only on the Y channel of the YIQ video, i.e. the luminance information, which reduces the amount of computation and increases the running speed of the algorithm.
  • The conversion relationship between RGB and YIQ is:
    Y = 0.299*R + 0.587*G + 0.114*B;
    I = 0.596*R - 0.275*G - 0.321*B;
    Q = 0.212*R - 0.523*G + 0.311*B.
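  • As a minimal illustrative sketch (not part of the patent text), the RGB to YIQ separation described above could be implemented as follows in Python; the function name and array layout are assumptions:

```python
import numpy as np

# Hypothetical helper: convert an RGB frame (H x W x 3, float values) to YIQ
# using the coefficients given above, returning the Y (luminance) part and
# the IQ (chrominance) parts separately.
def rgb_to_yiq(frame):
    m = np.array([[0.299, 0.587, 0.114],
                  [0.596, -0.275, -0.321],
                  [0.212, -0.523, 0.311]])
    yiq = frame @ m.T
    return yiq[..., 0], yiq[..., 1:]   # Y component, IQ components
```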
  • Further, Fourier transform can be performed on the luminance component information to obtain frequency domain luminance component information, and motion amplification processing can be performed on the frequency domain luminance component information to obtain the amplified frequency domain luminance component. The motion amplification processing may adopt at least one of the following methods: the Lagrangian motion amplification algorithm, the Eulerian motion amplification algorithm, the complex phase-based motion amplification algorithm, or the Riesz pyramid motion amplification algorithm, which is not limited here. Further, inverse Fourier transform is performed on the amplified frequency domain luminance component to obtain time domain luminance component information, the time domain luminance information and the chrominance component information are synthesized to obtain a synthesized video sequence, the inverse transform corresponding to the color space conversion is performed on the synthesized video sequence to obtain a second video sequence, and at least one vibration parameter set corresponding to the target object is determined according to the second video sequence. In this way, since the features corresponding to the vibration are amplified, it is easier to extract the vibration parameters from the video sequence.
  • Further optionally, the above step 28, determining at least one vibration parameter set corresponding to the target object according to the second video sequence, may include the following steps: 281, determining the cross power spectrum between the frame sequences corresponding to the second video sequence; 282, performing interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum; and 283, performing inverse Fourier transform on the processed cross power spectrum and comparing phase by phase to obtain the at least one vibration parameter set corresponding to the target object.
  • In specific implementation, the phase correlation algorithm can be applied to the frame sequence after video motion amplification to calculate the cross power spectrum between frames. Specifically, the phase correlation algorithm calculates the normalized cross power spectrum as
    R = (F_a · F_b*) / |F_a · F_b*|
  • where F_a is the Fourier transform of frame a, F_b* is the conjugate of the Fourier transform of frame b, the denominator is the modulus of the correlation product of the two Fourier transformed signals, and R is the cross power spectrum obtained in this step (which still contains frequency domain noise).
  • Further, an adaptive filter bank can be used to reconstruct the motion signal: the filter bank is adaptively selected according to the positions of the correlation peaks of R, filtering is performed, and the filtered result is inverse Fourier transformed and compared phase by phase. At this time, a sliding-window adaptive matching method can be used to estimate and extract the vibration parameters. This yields the cross power spectrum R′ with the frequency domain noise filtered out; performing the inverse Fourier transform on R′ and comparing phase by phase gives the vibration information of the pixels in the video as
    r = F⁻¹{R′}
  • where F⁻¹ denotes the inverse Fourier transform of the cross power spectrum, R′ is the cross power spectrum after the frequency domain noise has been filtered out, and r is the vibration information corresponding to a pixel in the video. In other words, the filter bank is adaptively selected for filtering according to the positions of the correlation peaks of the cross power spectrum R, yielding the filtered cross power spectrum R′.
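  • The following is a minimal sketch, assuming two consecutive grayscale frames, of how the normalized cross power spectrum and the phase correlation surface described above could be computed; the function and variable names are hypothetical and the adaptive filter bank is omitted:

```python
import numpy as np

# Sketch: normalized cross power spectrum R between frames a and b, and the
# recovered phase-correlation surface r = F^{-1}{R}.
def cross_power_spectrum(frame_a, frame_b, eps=1e-12):
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    prod = Fa * np.conj(Fb)
    R = prod / (np.abs(prod) + eps)      # normalize by the modulus of the product
    r = np.real(np.fft.ifft2(R))         # correlation surface in the time domain
    return R, r
```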
  • Optionally, in the above step 282, performing interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum may include the following steps: 2821, acquiring multiple state change signals corresponding to the cross power spectrum, where a state change signal is a time domain signal; 2822, extracting target state change signal segments of a preset length from the multiple state change signals to obtain multiple target state change signal segments, and acquiring the target frequency of each target state change signal segment to obtain multiple target frequencies; 2823, setting a corresponding sliding window according to the target frequency corresponding to each state change signal to obtain multiple sliding windows, and sending each state change signal to its corresponding sliding window; 2824, taking the state change signals that cannot pass through their corresponding sliding windows as aperiodic signals to obtain at least one aperiodic signal; and 2825, removing the at least one aperiodic signal from the multiple state change signals to obtain the filtered cross power spectrum.
  • The cross power spectrum R is a frequency domain signal that includes one or more correlation peaks. After performing the inverse Fourier transform on R, the state change signal corresponding to each correlation peak can be obtained. Each state change signal reflects the state change at a certain position in the amplified video; therefore, target state change signal segments of a preset length can be extracted from the multiple state change signals to obtain multiple target state change signal segments.
  • The state change information includes vibration data and other noise information; for example, changes in illumination can also cause state changes in the video frames, while the vibration data reflects the operating condition of the vibrating object. The vibration of the equipment under test during operation is periodic, so the state changes caused by that vibration are also periodic. Although much noise also causes state changes at the pixels of the amplified video, the state changes caused by noise are usually not periodic, and only periodic vibration can be used to reflect the operating condition of the equipment, because aperiodic vibrations are usually caused by the external environment rather than by the equipment itself; this part of the aperiodic signal cannot be used to analyze the operating condition of the equipment. By obtaining the aperiodic signals among the state change signals, noise signals that are not caused by the equipment's own vibration are obtained. Since aperiodic signals usually contribute little or nothing to the analysis of the equipment's operating condition, or even interfere with it, this part of the aperiodic signals can be removed, so that the state change signals obtained from the amplified video contain more useful information.
  • After the multiple state change signals are acquired, it can be judged whether each state change signal is a periodic signal. Specifically, a target state change signal segment of a preset length can be extracted first and its target frequency obtained, and then the frequencies of the other parts of the state change signal are compared with the target frequency; if the frequencies of the other parts are not consistent with the target frequency, the state change signal can be regarded as an aperiodic signal. The preset length can be set by the user to a fixed value, or adapted to the length of the signal during processing; for example, the preset length can be set to 1/10 of the length of the state change signal.
  • After the target frequency of the target state change signal segment is obtained, the window size of the sliding window is set according to the target frequency; for example, the window size can be set to match the target frequency, so that only signals whose frequency is consistent with the target frequency can pass through the sliding window, while signals inconsistent with the target frequency cannot. If a state change signal cannot pass through its corresponding sliding window, there is a signal segment in it whose frequency is inconsistent with the target frequency, that is, the state change signal is an aperiodic signal. In this way, judging by means of sliding windows whether the frequencies of the other parts of the state change signal are consistent with the target frequency yields a conclusion conveniently and quickly, with a smaller amount of calculation.
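  • The sliding window periodicity check described above could be sketched roughly as follows; the segment fraction, tolerance, and use of the dominant FFT frequency are assumptions for illustration rather than the patent's exact matching rule:

```python
import numpy as np

# Sketch: estimate a target frequency from a leading segment of a state
# change signal, then slide a window over the rest and flag the signal as
# aperiodic if any window's dominant frequency deviates from the target.
def is_periodic(signal, fs, seg_fraction=0.1, tol=0.05):
    seg_len = max(8, int(len(signal) * seg_fraction))

    def dominant_freq(x):
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

    target = dominant_freq(signal[:seg_len])
    for start in range(seg_len, len(signal) - seg_len + 1, seg_len):
        window = signal[start:start + seg_len]
        if abs(dominant_freq(window) - target) > tol * target:
            return False                             # inconsistent frequency: aperiodic
    return True
```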
  • Further optionally, the above step 24, performing motion amplification processing on the frequency domain luminance component information to obtain the amplified frequency domain luminance component, may include the following steps: 2411, performing feature point extraction on the frequency domain luminance component information to obtain a feature point set; 2412, clustering the feature point set to obtain multiple classes of feature points; 2413, performing an optical flow field interpolation operation on each pixel of the frequency domain luminance component information to obtain a dense optical flow field; 2414, tracking the motion trajectory of each class of feature points to obtain multiple action layers, where each class of feature points corresponds to one action layer; and 2415, performing texture filling on the multiple action layers and amplifying a preset layer among the texture-filled action layers to obtain the amplified frequency domain luminance component.
  • The feature point extraction may adopt at least one of the following methods: Harris corner detection, scale-invariant feature transform (SIFT), the SURF feature extraction algorithm, and so on, which is not limited here.
  • In specific implementation, feature points can be extracted from the frequency domain luminance component information to obtain a feature point set, and the feature point set is clustered to obtain multiple classes of feature points. An optical flow field interpolation operation is performed on each pixel of the frequency domain luminance component information to obtain a dense optical flow field, and the motion trajectory of each class of feature points is tracked to obtain multiple action layers, where each class of feature points corresponds to one action layer. The multiple action layers are texture filled, and a preset layer among the texture-filled action layers is amplified to obtain the amplified frequency domain luminance component; the preset layer can be set by the user or defaulted by the system. In this way, the features corresponding to the vibration are amplified, which facilitates the subsequent analysis of the effect of the vibration.
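  • A rough sketch of the feature point extraction, clustering, and dense optical flow steps described above is given below, using standard OpenCV and scikit-learn calls; the parameter values are illustrative assumptions, and the action layer assembly and texture filling steps are omitted:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

# Sketch: detect corner features, cluster them into classes, and compute a
# dense optical flow field between two grayscale (luminance) frames.
def feature_layers(prev_gray, next_gray, n_layers=3):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    pts = pts.reshape(-1, 2)
    labels = KMeans(n_clusters=n_layers, n_init=10).fit_predict(pts)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return pts, labels, flow   # per-class feature points and dense flow field
```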
  • Optionally, before the above step 241, the frequency domain luminance component information can be registered to obtain a registered image; then, when step 241 is performed, feature point extraction can be performed on the registered image to obtain the feature point set. In this way, apparent motion caused by camera shake can be prevented.
  • In specific implementation, considering that the tracking of feature points may be affected by factors such as image shadows and occlusion, which in turn affects the tracking result, the accuracy of the motion layer segmentation is directly related to the final amplification effect. This process can also be combined with manual intervention, i.e. some motion layers are selected manually, so that the tracking accuracy can be further improved.
  • Further optionally, the above step 24, performing motion amplification processing on the frequency domain luminance component information to obtain the amplified frequency domain luminance component, may also include the following steps: 2421, performing spatial decomposition on the frequency domain luminance component to obtain the decomposed frequency domain luminance component; 2422, performing temporal filtering on the decomposed frequency domain luminance component to obtain the filtered frequency domain luminance component; 2423, amplifying the filtered frequency domain luminance component to obtain the amplified frequency domain luminance component; and 2424, performing the inverse transform corresponding to the spatial decomposition to obtain the inverse-transformed, amplified frequency domain luminance component.
  • The spatial decomposition may include at least one of the following algorithms: fast Fourier transform, contourlet transform, non-subsampled contourlet transform (NSCT), wavelet transform, ridgelet transform, shearlet transform, and so on, which is not limited here. For example, the spatial decomposition may specifically be an FFT first, followed by a complex steerable pyramid spatial decomposition.
  • In specific implementation, the frequency domain luminance component can be spatially decomposed to obtain the decomposed frequency domain luminance component, the decomposed component can be filtered in the time domain, the filtered component can be amplified, and the inverse transform corresponding to the spatial decomposition can then be performed to obtain the inverse-transformed, amplified frequency domain luminance component. This amplifies the deep-level details in the image and helps to accurately extract the key feature points, so that the feature points can still be tracked well even when they are affected by image shadows, occlusion, and the like.
  • In specific implementation, for example as shown in FIG. 1C, the first video sequence is converted to the YIQ color space with the I and Q channels kept unchanged; the Y channel is spatially decomposed (for example, by an FFT operation), the FFT-transformed Y channel image is decomposed with a complex steerable pyramid in the spatial domain, temporal band-pass filtering is applied to the images at the different scales of the Y channel spatial decomposition, the motion information of interest after the temporal band-pass filtering is amplified, and the complex steerable pyramid is reconstructed from the amplified motion information to obtain the amplified Y channel image. Finally, the reconstructed Y channel image is added to the original I and Q channel images and converted back to the RGB color space to obtain the second video sequence. In this way, the vibration parameters corresponding to the target object can be extracted accurately, which helps to improve the accuracy of fault detection; a simplified sketch of this idea is given below.
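  • A much simplified sketch of the temporal band-pass amplification idea follows; it operates directly on the Y channel with a Butterworth band-pass filter instead of a full complex steerable pyramid, and the pass band and amplification factor are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch: Y is a T x H x W array of luminance frames, fs is the frame rate.
# Band-pass each pixel's time series and add the amplified motion back.
def amplify_motion(Y, fs, f_lo, f_hi, alpha=10.0):
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fs)
    band = filtfilt(b, a, Y, axis=0)     # temporal band-pass per pixel
    return Y + alpha * band              # amplified luminance sequence
```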
  • the fault information may include at least one of the following: fault level, faulty component, fault cause, etc., which is not limited herein.
  • In specific implementation, the target failure information of the target object can be determined according to the at least one vibration parameter set and displayed on the display interface; for example, an image of the faulty area, the vibration video, the fault category, or the danger level can be displayed on the display interface, and a maintenance plan can also be output based on user requirements, which is not limited here.
  • Optionally, each vibration parameter set corresponds to one component of the target object. The above step 103, determining the fault information of the target object according to the at least one vibration parameter set, may include the following steps: 31, taking the mean square error of a vibration parameter set i to obtain a target mean square error, where the vibration parameter set i is any vibration parameter set in the at least one vibration parameter set; and 32, determining the target fault information corresponding to the target mean square error according to a preset mapping relationship between mean square errors and fault information.
  • The target object may include multiple components, where a component is at least one of the following: a bearing, a screw, a chip, a circuit board, a spring, a graphics card, an engine, a steering wheel, and so on, which is not limited here. A preset mapping relationship between mean square errors and fault information can also be stored in advance. In specific implementation, the mean square error of the vibration parameter set i can be taken to obtain the target mean square error, and the target fault information corresponding to the target mean square error can then be determined according to the preset mapping relationship, as illustrated by the sketch below.
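  • A minimal sketch of this mean square error lookup follows; the thresholds and fault labels are hypothetical placeholders, not values from the patent:

```python
import numpy as np

# Hypothetical preset mapping: (upper bound of the MSE range, fault information).
FAULT_MAP = [
    (0.1, "normal operation"),
    (0.5, "minor wear, schedule inspection"),
    (np.inf, "severe fault, stop equipment"),
]

def fault_info(vibration_params):
    mse = float(np.mean((vibration_params - vibration_params.mean()) ** 2))
    for upper, info in FAULT_MAP:
        if mse <= upper:
            return info
```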
  • Further optionally, after the above step 103, the method may further include the following steps:
  • A1. Fitting the vibration parameter set i to obtain a target fitting function, where the horizontal axis of the target fitting function is time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i;
  • A2. Estimating the maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
  • In specific implementation, taking the vibration parameter set i as an example, the vibration parameter set i can be fitted, for example by linear fitting or nonlinear fitting, to obtain the target fitting function. The horizontal axis of the target fitting function is time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i; for example, the vertical axis may be phase, power, or a quantity such as power/frequency, which is not limited here. The maintenance information of the component corresponding to the vibration parameter set i is then estimated according to the target fitting function.
  • the maintenance information can be: maintenance date, maintenance method, maintenance personnel, maintenance cost, etc., which are not limited here.
  • In addition, the target fitting function is used to predict the wear, failure, or remaining life of the component; the prediction result can be a probability distribution, which can be understood as a range of values, and this probability distribution can be presented intuitively in the video image, shown in correspondence with the image of each component.
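  • As a hedged illustration of using the target fitting function to estimate maintenance timing, a linear fit could be extrapolated as follows; the threshold and the choice of vibration amplitude as the vertical axis are assumptions:

```python
import numpy as np

# Sketch: fit a linear trend to one vibration parameter over time and
# extrapolate when it would cross a hypothetical maintenance threshold.
def estimate_maintenance_time(t, amplitude, threshold=5.0):
    slope, intercept = np.polyfit(t, amplitude, deg=1)   # target fitting function
    if slope <= 0:
        return None                                      # no degradation trend
    return (threshold - intercept) / slope               # time the threshold is reached
```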
  • Optionally, the above step 103, determining the fault information of the target object according to the at least one vibration parameter set, may include the following step:
  • the at least one vibration parameter set is input into a preset fault detection model to obtain fault information of the target object.
  • the preset fault detection model can be set by the user or the system defaults, and the preset fault detection model can be obtained based on the convolutional neural network model.
  • at least one set of vibration parameters may be input to a preset fault detection model to obtain fault information of the target object.
  • For example, in the embodiments of the present application, the target object may be mechanical equipment. Based on the above fault detection device, a video sequence of the mechanical equipment vibrating under working conditions can be obtained, vibration parameters are extracted from the video sequence, and the fault information of the mechanical equipment is analyzed according to the vibration parameters.
  • For another example, the target object may be a person. Based on the above fault detection device, an ultrasonic sensor can obtain a video sequence of the user's heart vibration, vibration parameters are extracted from the video sequence, and the person's heart function is analyzed according to the vibration parameters to realize a health diagnosis. Of course, pregnant women can also be examined to monitor the growth of the baby.
  • Optionally, after the above step 103, the following steps may be further included:
  • B1. Marking positions on the target object according to the at least one vibration parameter set to obtain at least one position mark;
  • B2. Displaying the at least one position mark in the first video sequence.
  • The at least one vibration parameter set includes a large number of vibration parameters, and each vibration parameter can correspond to a point at a preset position; in turn, each point can include at least one vibration parameter. On this basis, the target object can be position-marked according to the at least one vibration parameter set to obtain at least one position mark. Of course, it is also possible to map the at least one vibration parameter set onto target points, divide the target object into multiple grid squares, and mark the squares in which the number of vibration parameters is greater than a preset number to obtain at least one position mark, where the preset number can be set by the user or defaulted by the system; a short sketch of this grid-based marking idea is given after this paragraph.
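  • The sketch below illustrates the grid-based marking idea under assumed data layouts; the cell size, point format, and preset number are illustrative assumptions:

```python
import numpy as np

# Sketch: points is an N x 2 array of (row, col) pixel positions carrying
# vibration parameters; the frame is divided into cell x cell squares, and a
# square is marked when it holds more than `preset_number` vibrating points.
def mark_squares(points, frame_shape, cell=32, preset_number=10):
    marks = []
    for r in range(0, frame_shape[0], cell):
        for c in range(0, frame_shape[1], cell):
            inside = ((points[:, 0] >= r) & (points[:, 0] < r + cell) &
                      (points[:, 1] >= c) & (points[:, 1] < c + cell))
            if inside.sum() > preset_number:
                marks.append((r, c, cell, cell))   # (top, left, height, width)
    return marks
```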
  • a video sequence of at least one location mark in a specified time period can be displayed, and the specified time period can be set by the user or the system defaults.
  • Optionally, the above step B2, displaying the at least one position mark in the first video sequence, may include the following steps: B21, extracting a target vibration parameter for a position mark i from the at least one vibration parameter set, where the position mark i is any position mark in the at least one position mark; B22, processing the target vibration parameter to obtain a target feature value; B23, determining the target display effect parameter corresponding to the target feature value according to a preset mapping relationship between feature values and display effect parameters; and B24, displaying the position mark i in the first video sequence according to the target display effect parameter.
  • the above-mentioned display effect parameter may be at least one of the following: display brightness, display color, display hue, whether to add a display frame, whether to add an arrow, etc., which are not limited here.
  • The electronic device can also store in advance a mapping relationship between preset feature values and display effect parameters. In specific implementation, since different vibration parameters correspond to different marks, each mark can take a point as its center and use the vibration parameters within a certain range as the vibration parameters of that point. Furthermore, the target vibration parameter for a position mark i can be extracted from the at least one vibration parameter set, where the position mark i is any position mark in the at least one position mark; the target vibration parameter is processed to obtain a target feature value; the target display effect parameter corresponding to the target feature value is determined according to the preset mapping relationship between feature values and display effect parameters; and the position mark i is displayed in the first video sequence according to the target display effect parameter. In this way, different marks can be displayed differently according to differences in vibration, which helps the user to see the vibration effect quickly and accurately.
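  • The mapping from feature values to display effect parameters could be sketched as a simple lookup table, for example as follows; the value ranges and colours are hypothetical:

```python
# Hypothetical preset mapping: (upper bound of the feature value, BGR colour).
DISPLAY_EFFECT_MAP = [
    (1.0, (0, 255, 0)),            # low feature value  -> green mark
    (3.0, (0, 255, 255)),          # medium             -> yellow mark
    (float("inf"), (0, 0, 255)),   # high               -> red mark
]

def display_effect(target_feature_value):
    for upper, colour in DISPLAY_EFFECT_MAP:
        if target_feature_value <= upper:
            return colour
```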
  • Further optionally, the above step B22, processing the target vibration parameter to obtain the target feature value, may include the following steps: B221, obtaining a time domain waveform from the target vibration parameter; B222, preprocessing the time domain waveform to obtain a target time domain waveform; B223, comparing the target time domain waveform with a preset time domain waveform; B224, determining multiple extreme points of the target time domain waveform when the comparison between the target time domain waveform and the preset time domain waveform succeeds; and B225, determining the target feature value according to the multiple extreme points.
  • Since the vibration is continuous (for example, while equipment is running its components keep rotating), the vibration parameters may include a large amount of vibration data. The above preprocessing may include at least one of the following: signal amplification, noise reduction processing, and so on, which is not limited here. The preset time domain waveform can be the waveform corresponding to a fault state, and can be set by the user or defaulted by the system. In specific implementation, the time domain waveform can be obtained from the target vibration parameter and then preprocessed to obtain the target time domain waveform; the target time domain waveform is compared with the preset time domain waveform, and when the comparison succeeds, multiple extreme points of the target time domain waveform are determined, where the extreme points may include maximum points and minimum points, and the target feature value can be determined according to the multiple extreme points. Conversely, if the comparison between the target time domain waveform and the preset time domain waveform fails, it can indicate that the target object is operating normally, and the vibration display may be omitted or the user may be prompted that the device is operating normally. Specifically, for example, the mean square error of the multiple extreme points may be used as the target feature value, or the number of the multiple extreme points may be used as the target feature value.
  • Optionally, the above step B225, determining the target feature value according to the multiple extreme points, may include the following steps:
  • B2251. Selecting, from the multiple extreme points, the extreme points whose absolute values are greater than a preset threshold to obtain multiple target extreme points;
  • B2252. Dividing the multiple target extreme points into two classes, where the first class includes multiple maximum points and the second class includes multiple minimum points;
  • B2253. Determining a first mean square error among the multiple maximum points in the first class;
  • B2254. Determining a second mean square error among the multiple minimum points in the second class;
  • B2255. Taking the absolute value of the ratio between the first mean square error and the second mean square error as the target feature value.
  • The above preset threshold can be set by the user or defaulted by the system. In specific implementation, the extreme points whose absolute values are greater than the preset threshold can be selected from the multiple extreme points to obtain multiple target extreme points; the multiple target extreme points are divided into two classes, the first class including multiple maximum points and the second class including multiple minimum points; the first mean square error among the maximum points in the first class and the second mean square error among the minimum points in the second class are determined; and the absolute value of the ratio between the first mean square error and the second mean square error is used as the target feature value. In this way, the extreme points are first screened to obtain stable extreme points, which better characterize the vibration of the target object. In addition, these extreme points are divided into two classes, maximum points and minimum points, the mean square error is calculated separately for each class, and the absolute value of the final ratio is used as the target feature value. Since the target feature value fully captures the distribution of the amplitude in both the upward and downward motion, such a target feature value can accurately reflect the vibration trend and helps to improve the accuracy of the fault information.
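  • Under the assumption that the target time domain waveform is available as a one-dimensional array, the screening of extreme points and the ratio-based target feature value described above could be sketched as follows:

```python
import numpy as np
from scipy.signal import argrelextrema

# Sketch: screen extreme points by a preset threshold, split them into
# maxima and minima, and use the absolute ratio of their mean square errors
# as the target feature value.
def target_feature_value(waveform, threshold=0.1):
    maxima = waveform[argrelextrema(waveform, np.greater)[0]]
    minima = waveform[argrelextrema(waveform, np.less)[0]]
    maxima = maxima[np.abs(maxima) > threshold]
    minima = minima[np.abs(minima) > threshold]
    mse_max = np.mean((maxima - maxima.mean()) ** 2)   # first mean square error
    mse_min = np.mean((minima - minima.mean()) ** 2)   # second mean square error
    return abs(mse_max / mse_min)
```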
  • It can be seen that, through the fault detection method of the embodiments of the present application, a first video sequence for the target object is obtained, where the first video sequence is an RGB video image; at least one vibration parameter set corresponding to the target object is extracted according to the first video sequence; and the target failure information of the target object is determined according to the at least one vibration parameter set and displayed on the display interface. In this way, vibration parameters can be extracted from the video image and failure information can be analyzed from the vibration parameters, so that failures can be identified quickly and the efficiency of fault detection is improved.
  • FIG. 2 is a schematic flowchart of an embodiment of a fault detection method provided by an embodiment of this application.
  • the fault detection method described in this embodiment includes the following steps:
  • 201. A first video sequence for the target object is obtained, where the first video sequence is an RGB video image.
  • 202. At least one vibration parameter set corresponding to the target object is extracted according to the first video sequence.
  • 203. The mean square error of a vibration parameter set i is taken to obtain a target mean square error, where the vibration parameter set i is any vibration parameter set in the at least one vibration parameter set.
  • 204. The target fault information corresponding to the target mean square error is determined according to the preset mapping relationship between mean square errors and fault information.
  • 205. The vibration parameter set i is linearly fitted to obtain a target fitting function, where the horizontal axis of the target fitting function is time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i.
  • 206. The maintenance information of the component corresponding to the vibration parameter set i is estimated according to the target fitting function.
  • For steps 201 to 206, reference may be made to the corresponding steps of the fault detection method described in FIG. 1A. In this way, vibration parameters can be extracted from the video image and fault information can be analyzed from the vibration parameters, so that fault information can be identified quickly and the efficiency of fault detection is improved; in addition, when and how the target object needs maintenance can be estimated, which improves maintenance efficiency.
  • FIG. 3A is a schematic structural diagram of an embodiment of a fault detection device provided by an embodiment of this application.
  • the fault detection device described in this embodiment includes: an acquisition unit 301, an extraction unit 302, and a display unit 303, and the details are as follows:
  • the obtaining unit 301 is configured to obtain a first video sequence for a target object, where the first video sequence is an RGB video image;
  • the extraction unit 302 is configured to extract at least one vibration parameter set corresponding to the target object according to the first video sequence
  • the display unit 303 is configured to determine target failure information of the target object according to the at least one vibration parameter set, and display the target failure information on a display interface.
  • Through the fault detection device of the embodiments of the present application, a first video sequence for the target object is acquired, where the first video sequence is an RGB video image; at least one vibration parameter set corresponding to the target object is extracted according to the first video sequence; and the target failure information of the target object is determined according to the at least one vibration parameter set and displayed on the display interface. In this way, vibration parameters can be extracted from the video image and failure information can be analyzed from the vibration parameters, so that failures can be identified quickly and the efficiency of fault detection is improved.
  • Optionally, in terms of extracting the at least one vibration parameter set corresponding to the target object according to the first video sequence, the extraction unit 302 is specifically configured to: perform color space conversion on the first video sequence to obtain the converted first video sequence; separate the converted first video sequence to obtain luminance component information and chrominance component information; perform Fourier transform on the luminance component information to obtain frequency domain luminance component information; perform motion amplification processing on the frequency domain luminance component information to obtain the amplified frequency domain luminance component; perform inverse Fourier transform on the amplified frequency domain luminance component to obtain time domain luminance component information; synthesize the time domain luminance information and the chrominance component information to obtain a synthesized video sequence; perform the inverse transform corresponding to the color space conversion on the synthesized video sequence to obtain a second video sequence; and determine at least one vibration parameter set corresponding to the target object according to the second video sequence.
  • Optionally, in terms of determining the at least one vibration parameter set corresponding to the target object according to the second video sequence, the extraction unit 302 is specifically configured to: determine the cross power spectrum between the frame sequences corresponding to the second video sequence; perform interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum; and perform inverse Fourier transform on the processed cross power spectrum and compare phase by phase to obtain the at least one vibration parameter set corresponding to the target object.
  • Optionally, in terms of performing motion amplification processing on the frequency domain luminance component information to obtain the amplified frequency domain luminance component, the extraction unit 302 is specifically configured to: extract feature points from the frequency domain luminance component information to obtain a feature point set; cluster the feature point set to obtain multiple classes of feature points; perform an optical flow field interpolation operation on each pixel of the frequency domain luminance component information to obtain a dense optical flow field; track the motion trajectory of each class of feature points to obtain multiple action layers, where each class of feature points corresponds to one action layer; and perform texture filling on the multiple action layers and amplify a preset layer among the texture-filled action layers to obtain the amplified frequency domain luminance component.
  • Optionally, each vibration parameter set corresponds to one component of the target object. In terms of determining the fault information of the target object according to the at least one vibration parameter set, the display unit 303 is specifically configured to: take the mean square error of a vibration parameter set i to obtain a target mean square error, where the vibration parameter set i is any vibration parameter set in the at least one vibration parameter set; and determine the target fault information corresponding to the target mean square error according to the preset mapping relationship between mean square errors and fault information.
  • FIG. 3B is another modified structure of the fault detection device described in FIG. 3A. Compared with FIG. 3A, it may further include: a fitting unit 304 and a determining unit 305, which are specifically as follows:
  • The fitting unit 304 is configured to perform linear fitting on the vibration parameter set i to obtain a target fitting function, where the horizontal axis of the target fitting function is time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i;
  • the determining unit 305 is configured to estimate the maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
  • FIG. 4 is a schematic structural diagram of an embodiment of an electronic device provided by an embodiment of this application.
  • the electronic device described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000, the aforementioned input device 1000, output device 2000, processor 3000, and The memory 4000 is connected through the bus 5000.
  • the aforementioned input device 1000 may specifically be a touch panel, a physical button, or a mouse.
  • the aforementioned output device 2000 may specifically be a display screen.
  • the aforementioned memory 4000 may be a high-speed RAM memory, or a non-volatile memory (non-volatile memory), such as a magnetic disk memory.
  • the foregoing memory 4000 is used to store a set of program codes, and the foregoing input device 1000, output device 2000, and processor 3000 are used to call the program codes stored in the memory 4000 to perform the following operations:
  • The above processor 3000 is configured to: acquire a first video sequence for a target object, where the first video sequence is an RGB video image; extract at least one vibration parameter set corresponding to the target object according to the first video sequence; and determine the target failure information of the target object according to the at least one vibration parameter set and display the target failure information on a display interface.
  • It can be seen that, through the electronic device described in the embodiments of the present application, a first video sequence for a target object is acquired, where the first video sequence is an RGB video image; at least one vibration parameter set corresponding to the target object is extracted according to the first video sequence; and the target failure information of the target object is determined according to the at least one vibration parameter set and displayed on the display interface. In this way, vibration parameters can be extracted from the video image and failure information can be analyzed from the vibration parameters, so that failures can be identified quickly and the efficiency of fault detection is improved.
  • Optionally, the processor 3000 is further specifically configured to perform the operations described above for the extraction unit 302 and the display unit 303. That is, in terms of extracting the at least one vibration parameter set according to the first video sequence, the processor 3000 performs the color space conversion, the separation into luminance and chrominance components, the Fourier transform, the motion amplification processing, the inverse Fourier transform, the synthesis, and the inverse color space transform to obtain the second video sequence, and determines the at least one vibration parameter set corresponding to the target object according to the second video sequence. In terms of determining the at least one vibration parameter set according to the second video sequence, the processor 3000 determines the cross power spectrum between the frame sequences corresponding to the second video sequence, performs interpolation filtering processing on the cross power spectrum, and performs inverse Fourier transform on the processed cross power spectrum with phase-by-phase comparison. In terms of the motion amplification processing, the processor 3000 extracts and clusters feature points, computes the dense optical flow field, tracks the motion trajectory of each class of feature points to obtain the action layers, and performs texture filling and amplification of the preset layer. Further, where each vibration parameter set corresponds to one component of the target object, in terms of determining the fault information of the target object, the processor 3000 takes the mean square error of a vibration parameter set i (any vibration parameter set in the at least one vibration parameter set) to obtain a target mean square error and determines the target fault information corresponding to the target mean square error according to the preset mapping relationship between mean square errors and fault information.
  • Optionally, the processor 3000 is further specifically configured to: perform linear fitting on the vibration parameter set i to obtain a target fitting function, where the horizontal axis of the target fitting function is time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i; and estimate the maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
  • An embodiment of the present application further provides a computer storage medium, wherein the computer storage medium can store a program, and the program includes part or all of the steps of any fault detection method recorded in the above method embodiment when the program is executed.
  • An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any of the foregoing methods.
  • the computer program product may be a software installation package.
  • this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware.
  • this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • the computer program is stored/distributed in a suitable medium, provided with other hardware or as a part of the hardware, and can also be distributed in other forms, such as through the Internet or other wired or wireless telecommunication systems.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a fault detection method and related products. The method includes: acquiring a first video sequence for a target object, where the first video sequence is an RGB video image; extracting at least one vibration parameter set corresponding to the target object according to the first video sequence; and determining target fault information of the target object according to the at least one vibration parameter set and displaying the target fault information on a display interface. With the embodiments of the present application, vibration parameters can be extracted from a video image and fault information can be analyzed from the vibration parameters, so that fault information can be identified quickly and the efficiency of fault detection is improved.

Description

故障检测方法及相关产品 技术领域
本申请涉及视频监控技术领域,具体涉及一种故障检测方法及相关产品。
背景技术
所有的机械和运动系统都会产生各种各样的振动,其中一些振动反映的是系统的正常运动状态,而另外一些则反映了系统的异常运动状态(例如,设备部故障、轴连接不平衡等)。因此,要预测性维护系统设备,振动检测都是重要的一环。现在振动检测大多采用加速度计设备。尽管比较精确和可靠,但是加速度计需要长的准备和安装时间,在测试时需要和被测系统物理接触(因此会改变被测系统的振动响应),并且仅能测试很有限的离散点,因此,如何提升故障检测效率的问题亟待解决。
发明内容
本申请实施例提供了一种故障检测方法及相关产品,可以提升故障检测效率。
本申请实施例第一方面提供了一种故障检测方法,包括:
获取针对目标对象的第一视频序列,所述第一视频序列为RGB视频图像;
依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集;
依据所述至少一个振动参数集确定所述目标对象的目标故障信息,并在显示界面展示所述目标故障信息。
本申请实施例第二方面提供了一种故障检测装置,包括:
获取单元,用于获取针对目标对象的第一视频序列,所述第一视频序列为RGB视频图像;
提取单元,用于依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集;
显示单元,用于依据所述至少一个振动参数集确定所述目标对象的目标故障信息,并在显示界面展示所述目标故障信息。
第三方面,本申请实施例提供一种电子设备,包括处理器、存储器、通信接口,以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行本申请实施例第一方面中的步骤的指令。
第四方面,本申请实施例提供了一种计算机可读存储介质,其中,上述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,上述计算机程序使得计算机执行如本申请实施例第一方面中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机程序产品,其中,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例第一方面中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
实施本申请实施例,具有如下有益效果:
可以看出,通过本申请实施例故障检测方法及相关产品,获取针对目标对象的第一视频序列,第一视频序列为RGB视频图像,依据第一视频序列提取目标对象对应的至少一个振动参数集,依据至少一个振动参数集确定目标对象的目标故障信息,并在显示界面展示目标故障信息,如此,可以从视频图像中提取到振动参数,并依据振动参数分析出故障信息,能够快速识别出故障信息,提升了故障检测效率。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的一种故障检测方法的实施例流程示意图;
图1B是本申请实施例提供的应用场景演示示意图;
图1C是本申请实施例提供的运动放大方法的演示示意图;
图2是本申请实施例提供的另一种故障检测方法的实施例流程示意图;
图3A是本申请实施例提供的一种故障检测装置的实施例结构示意图;
图3B是本申请实施例提供的另一种故障检测装置的实施例结构示意图;
图4是本申请实施例提供的一种电子设备的实施例结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及所述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置展示该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本申请实施例所描述电子设备可以包括智能手机(如Android手机、iOS手机、Windows Phone手机等)、平板电脑、掌上电脑、笔记本电脑、视频矩阵、监控平台、移动互联网设备(MID,Mobile Internet Devices)或穿戴式设备等,上述仅是举例,而非穷举,包含但不限于上述装置,当然,上述电子设备还可以为服务器。
请参阅图1A,为本申请实施例提供的一种故障检测方法的实施例流程示意图。本实施例中所描述的故障检测方法,包括以下步骤:
101、获取针对目标对象的第一视频序列,所述视频序列为RGB视频图像。
其中,目标对象可以为以下一种:机械设备(数控机床)、交通工具、通讯工具、医疗器械、建筑物、人、动物等等,在此不作限定。可以获取目标对象在一段时间内的第一视频序列,第一视频序列为RGB视频图像。
本申请实施例中,电子设备可以包括至少一个摄像头,摄像头可以为以下至少一种:红外摄像头或者可见光摄像头,可见光摄像头可以为普通摄像头或者光角摄像头等等,在此不作限定。当然,电子设备还可以包括超声波传感器,通过超声波传感器可以获取目标对象的第一视频序列。第一视频序列可以为针对整个目标对象的全景视频图像,或者,目标对象的指定部位的视频图像,或者,目标对象的内部运作图像。本申请实施例可以应用于图1B所示的场景中,第一视频序列可以由摄像头对目标对象进行拍摄得到一视频序列,由电子设备对该视频序列进行分析,得到振动参数,并在显示界面展示振动参数。
102、依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集。
其中,振动参数集可以包括以下至少一种振动参数:振动频率、振动幅度、振动次数、振动波形图、振动周期、振动范围等等,在此不作限定。
可选地,上述步骤102,依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集,可包括如下步骤:
21、将所述第一视频序列进行颜色空间变换,得到变换后的所述第一视频序列;
22、对变换后的所述第一视频序列进行分离,得到亮度分量信息和色度分量信息;
23、对所述亮度分量信息进行傅里叶变换,得到频域亮度分量信息;
24、对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量;
25、对所述放大处理后的所述频域亮度分量进行反傅里叶变换,得到时域亮度分量信息;
26、将所述时域亮度信息和所述色度分量信息进行合成,得到合成视频序列;
27、对所述合成视频序列进行所述颜色空间变换对应的反变换,得到第二视频序列;
28、依据所述第二视频序列确定所述目标对象对应的至少一个振动参数集。
其中,上述颜色空间变换可以包括以下任一中:RGB转YCbCr颜色空间变换、RGB转YIQ颜色空间变换、RGB转HIS颜色空间变换、RGB转YUV颜色空间变换,在此不作限定。具体实现中,可以将第一视频序列进行颜色空间变换,得到变换后的第一视频序列,如此,能够便提取亮度分量,即对变换后的第一视频序列进行分离,得到亮度分量信息和色度分量信息,举例说明下,由摄像机拍摄含有振动的被测物体的一段视频,然后将该视频的序列帧由RGB颜色空间转换到YIQ颜色空间,分离视频帧的亮度分量信息和色度分量信息,后续的步骤都在YIQ视频的Y相也就是亮度信息进行运算,这样可减少算法运算量提高算法的运行速度。RGB和YIQ的转换关系为:
Y=0.299*R+0.587*G+0.114*B;
I=0.596*R–0.275*G–0.321*B;
Q=0.212*R-0.523*G+0.311*B。
进一步地,可以对亮度分量信息进行傅里叶变换,得到频域亮度分量信息,对频域亮度分量信息进行运动放大处理,得到放大处理后的频域亮度分,上述运动放大处理可以采用如下至少一种方式:拉格朗日运动放大算法、欧拉运动放大算法、复数相位运动放大算法、RIESZ金字塔运动放大算法,在此不做限定,进一步地,对放大处理后的频域亮度分量进行反傅里叶变换,得到时域亮度分量信息,将时域亮度信息和色度分量信息进行合成,得到合成视频序列,对合成视频序列进行所述颜色空间变换对应的反变换,得到第二视频序列,依据第二视频序列确定目标对象对应的至少一个振动参数集,如此,由于将振动对应的特征进行放大处理,便于更好地从视频序列中提取出振动参数。
进一步可选地,上述步骤28,依据所述第二视频序列确定所述目标对象对应的至少一个振动参数集,可包括如下步骤:
281、确定所述第二视频序列对应的帧序列间的交叉互功率谱;
282、对所述交叉互功率谱进行插值滤波处理,得到处理后的所述交叉互功率谱;
283、对处理后的所述交叉互功率谱进行反傅里叶变换,并逐相位比较,得到所述目标对象对应的至少一个振动参数集。
具体实现中,可以对视频运动放大处理后的帧序列采用相位相关算法计算帧序列间的交叉互功率谱,具体地,相位相关算法采用如下的公式计算交叉互功率谱。
Figure PCTCN2020104821-appb-000001
其中,F a为a帧图像的傅立叶变换,
Figure PCTCN2020104821-appb-000002
为b帧图像的傅里叶变换的共轭信号,除式的下边为两个傅里叶变换的信号的相关积的模。R为本步骤的计算结果交叉互功率谱(包含频域噪音)。
进一步地,可以采用自适应滤波器组来重建运动信号,根据R的相关峰的位置自适应选择滤波器组进行滤波,滤波之后再进行反傅里叶变换,再进行相位比较(逐相位比较),此时,可会采用滑窗的自适应匹配方法来估计和提取振动参数,得到滤除频域噪音后的交叉互功率谱R′,再对交叉互功率谱进行反傅立叶变换,逐相位比较,便可得到视频中像素的振动信息计算公式如下。
r=F -1{R′}
其中,F -1表示对交叉互功率谱进行反傅里叶变换,R′为滤除频域噪音后的交叉互功率谱,得到r为视频中的像素对应的振动信息。
其中,可根据交叉互功率谱R的相关峰的位置自适应选择滤波器组进行滤波,得到滤波后的交叉互功率谱R'。
可选地,上述步骤282中,对所述交叉互功率谱进行插值滤波处理,得到处理后的所述交叉互功率谱,可包括以下步骤:
2821、获取所述交叉互功率谱对应的多个状态变化信号,状态变化信号为时域信号;
2822、从所述多个状态变化信号中提取预设长度的目标状态变化信号段,得到多个目标状态变化信号段,并获取所述多个目标状态变化信号段中每一目标状态变化信号段的目标频率,得到多个目标频率;
2823、根据所述多个状态变化信号中每一状态变化信号对应的目标频率设定对应的滑窗,得到多个滑窗,将所述多个状态变化信号中每一状态变化信号发送至对应的滑窗;
2824、将所述多个状态变化信号中不能通过对应的滑窗的状态变化信号作为非周期信号,得到至少一个非周期信号;
2825、去除所述多个状态变化信号中的所述至少一个非周期信号,得到;滤波后的交叉互功率谱。
其中,交叉互功率谱R为频域信号,其中包括一个或多个相关峰。将交叉互功率谱R进行反傅里叶变换之后即可得到各相关峰对应的状态变化信号。每个状态变化信号可以反应放大视频中的某个位置的状态变化情况,因此,可从所述多个状态变化信号中提取预设长度的目标状态变化信号段,得到多个目标状态变化信号段。状态变化信息包括振动数据和其他噪声信息,例如光照的变化也会导致视频画面中的状态变化,振动数据可以反应待振动物体的运行状况。待检测设备运行时自身的振动是周期性的,振动引起的状态变化也是周期性的。很多噪声信息虽然会导致放大视频中各像素点的状态变化,但是,噪声引起的状态变化常常不是周期性的,而且根据运行状况,呈周期性的振动才能用于反应设备运行状况,因为非周期性的振动常常是由外界环境导致的,而不是由设备自身引起的,这部分非周期性的信号也不能用于分析设备的运行状况。通过获取状态变化信号中的非周期信 号获取不是由自身振动引起的噪声信号。由于非周期信号常常是对分析设备运行状况作用不大或者没有作用甚至会有干扰的信号,那么可去除这部分非周期信号,以使得从放大视频中获取到的状态变化信号中,有用信息更多。
其中,在获取多个状态变化信号之后,可判断各状态变化信号是否为周期信号,具体地,可先提取预设长度的目标状态变化信号段,获取目标变化信号段的目标频率,然后将状态变化信号中的其他部分的频率与目标频率比对,如果状态变化信号中的其他部分的频率与目标频率不一致,则可认为该状态变化信号为非周期信号。预设长度可以由用户设定一个确定的值,也可以在信号处理过程中根据信号的长度自行适配,例如可将预设长度设定为状态变化信号长度的1/10。获得目标状态变换信号段的目标频率之后,根据目标频率设定滑窗的窗口大小,例如可将滑窗的窗口大小设定为与目标频率一致,这样只有频率与目标频率一致的信号才能通过滑窗,而与目标频率不一致的信号则不能通过滑窗。如果状态变化信号不能通过对应的滑窗,则说明该状态变化信号中存在频率与目标频率不一致的信号段,也即该状态变化信号为非周期信号。如此,通过滑窗的方式判断状态变化信号中的其他部分的频率与目标频率是否一致,可以方便快捷地得到结论,计算量更小。
进一步可选地,上述步骤24,对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量,可包括如下步骤:
2411、对所述频域亮度分量信息进行特征点提取,得到特征点集;
2412、对所述特征点集进行聚类,得到多类特征点;
2413、对所述频域亮度分量信息中的每一像素点进行光流场插值运算,得到稠密光流场;
2414、跟踪所述多类特征点中每一类特征点的运动轨迹,得到多个动作层,每一类特征点对应一个动作层;
2415、对所述多个动作层进行纹理填充,并将纹理填充后的所述多个动作层中的预设层进行放大处理,得到放大处理后的所述频域亮度分量。
其中,特征点提取可以为以下至少一种方式:harris角点检测、尺度不变特征提取(scale invariant feature transform,SIFT)、SURF特征提取算法等等,在此不作限定,具体实现中,可以对频域亮度分量信息进行特征点提取,得到特征点集,对特征点集进行聚类,得到多类特征点,对频域亮度分量信息中的每一像素点进行光流场插值运算,得到稠密光流场,跟踪多类特征点中每一类特征点的运动轨迹,得到多个动作层,每一类特征点对应一个动作层,对多个动作层进行纹理填充,并将纹理填充后的多个动作层中的预设层进行放大处理,得到放大处理后的频域亮度分量,预设层可以由用户自行设置或者系统默认。如此,可以实现将振动对应的特征进行放大,后续便于分析振动带来的影响。
可选地,在上述步骤241之前,还可以对频域亮度分量信息进行配准,得到配准图像,则在执行上述步骤241时,则可以对该配准图像进行特征点提取,得到特征点集,如此,可以防止由于相机晃动而产生的动作。
具体实现中,考虑到特征点的跟踪可能受到图像阴暗、遮挡等其他因素的影响,从而影响跟踪结果,运动层次分割的准确程度直接关系到最终的放大效果,而这个过程还可以结合人工干预,即人为选择一些运动层,如此,可以进一步提升跟踪精度。
进一步可选地,上述步骤24,对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量,可包括如下步骤:
2421、对所述频域亮度分量进行空域分解,得到分解后的所述频域亮度分量;
2422、对分解后的所述频域亮度分量进行时域滤波,滤波后的所述频域亮度分量;
2423、对所述频域亮度分量进行放大处理,得到放大后的所述频域亮度分量;
2424、对所述频域亮度分量进行所述空域分解对应的反变换,得到反变换后的所述放 大处理后的所述频域亮度分量。
其中,空域分解可以包括以下至少一种算法:快速傅里叶变换、轮廓波变换、非下采样轮廓波变换(Non-sampled contourlet transform,NSCT)、小波变换、脊波变换、剪切波变换等等,在此不作限定,举例说明下,空域分解具体可以为先FFT,再进行复数可操作金字塔空域分解。具体实现中,可以对频域亮度分量进行空域分解,得到分解后的频域亮度分量,对分解后的频域亮度分量进行时域滤波,滤波后的频域亮度分量,对频域亮度分量进行放大处理,得到放大后的频域亮度分量,对频域亮度分量进行空域分解对应的反变换,得到反变换后的放大处理后的所述频域亮度分量,能够将图像中的深层次细节加以放大,有助于精准提取图像中的关键特征点,在特征点受到图像阴暗、遮挡等影响的情况下,依旧可以实现较好跟踪特征点。
具体实现中,举例说明下,如图1C所示,将第一视频序列转化的YIQ色彩空间,保持I、Q通道不变,对Y通道进行空域分解(如FFT操作),将FFT变换后的Y通道图像进行复数可操纵金子塔空域分解,将Y通道空域分解后的不同尺度的图像进行时域带通滤波,放大时域带通滤波后感兴趣的运动信息,对感兴趣的运动信息进行复数可操纵金字塔重建,得到放大后的Y通道图像,最后将重建的Y通道图像与原来的I、Q通道图像相加,再转化为RGB色彩空间,得到第二视频序列,如此,能够精准提取目标对象对应的振动参数,有助于提升故障检测精度。
103、依据所述至少一个振动参数集确定所述目标对象的目标故障信息,并在显示界面展示所述目标故障信息。
其中,本申请实施例中,故障信息可以包括以下至少一种:故障等级、故障部件、故障原因等等,在此不作限定。具体实现中,可以依据至少一个振动参数集确定目标对象的目标故障信息,并在显示界面展示该目标故障信息,例如,可以在显示界面展示故障区域的图像、振动视频、故障类别或者危险等级,以及,还可以基于用户需求输出维护方案等等,在此不作限定。
可选地,每一振动参数集对应所述目标对象的一个部件;上述步骤103,依据所述至少一个振动参数集确定所述目标对象的故障信息,可包括如下步骤:
31、将振动参数集i进行取均方差,得到目标均方差,所述振动参数集i为所述至少一个振动参数集中的任一振动参数集;
32、按照预设的均方差与故障信息之间的映射关系,确定所述目标均方差对应的所述目标故障信息。
其中,目标对象可以包括多个部件,部件为以下至少一种:轴承、螺丝、芯片、电路板、弹簧、显卡、发动车、方向盘等等,在此不作限定。还可以预先存储预设的均方差与故障信息之间的映射关系。具体实现中,可以将振动参数集i进行取均方差,得到目标均方差,振动参数集i为至少一个振动参数集中的任一振动参数集,按照预设的均方差与故障信息之间的映射关系,可以确定目标均方差对应的目标故障信息。
进一步可选地,上述步骤103之后,还可以包括如下步骤:
A1、将振动参数集i进行拟合,得到目标拟合函数,所述目标拟合函数包括横轴为时间,纵轴为与所述振动参数集i中的至少一个振动参数相关;
A2、依据所述目标拟合函数预估所述振动参数集i对应的部件的维护信息。
具体实现中,以振动参数集i为例,可以将振动参数集i进行拟合,如线性拟合或者非线性拟合,得到目标拟合函数,目标拟合函数的横轴为时间、纵轴为振动参数集i中的至少一个振动参数相关,如:纵轴可以为相位,或者,功率,又形如功率/频率等等,在此不作限定。具体实现中,可以将振动参数集i进行拟合,得到目标拟合函数,目标拟合函数包括横轴为时间,纵轴为与振动参数集i中的至少一个振动参数相关,依据目标拟合函数 预估振动参数集i对应的部件的维护信息,维修信息可以为:维修日期、维修方式、维修人员、维修费用等等,在此不作限定。
另外,通过目标拟合函数预估部件的耗损或者故障或者寿命情况,其预测结果可以是概率分布,可以理解为一个范围值,通过该概率分布可以直观的呈现在视频图像中,结合图像中各个部件的影像对应呈现。
上述步骤103,依据所述至少一个振动参数集确定所述目标对象的故障信息,可包括如下步骤:
将所述至少一个振动参数集输入到预设故障检测模型,得到所述目标对象的故障信息。
其中,预设故障检测模型可以由用户自行设置或者系统默认,该预设故障检测模型可以基于卷积神经网络模型得到。具体实现中,可以将至少一个振动参数集输入到预设故障检测模型,得到该目标对象的故障信息。
举例说明下,本申请实施例中,目标对象可以为机械设备,基于上述故障检测装置可以获取机械设备在工作状态下产生振动的视频序列,依据该视频序列提取振动参数,依据振动参数分析机械设备的故障信息。
举例说明下,本申请实施例中,目标对象可以为人,基于上述故障检测装置可以由超声波传感器获取用户关于心脏振动的视频序列,依据该视频序列提取振动参数,依据振动参数分析人的心脏功能,以及实现健康诊断,当然,还可以对孕妇进行检测,检测婴儿的成长情况。
可选地,上述步骤103之后,还可以包括如下步骤:
B1、依据所述至少一个振动参数集对所述目标对象进行位置标记,得到至少一个位置标记。
B2、在所述第一视频序列中展示所述至少一个位置标记。
其中,至少一个振动参数集包括大量的振动参数,每一振动参数均可以对应预设部位中的一个点,进而,每个点至少可以包括一个振动参数,基于此,可以依据至少一个振动参数集对目标对象进行位置标记,得到至少一个位置标记。当然,也可以将至少一个振动参数集映射在目标点,将目标对象划分为多个方格,将方格中振动参数数量大于预设数量的方格进行标记,得到至少一个位置标记,其中,上述预设数量可以由用户自行设置或者系统默认。
其中,具体实现中,可以展示至少一个位置标记中的任一位置标记在指定时间段的视频序列,指定时间段可以由用户自行设置或者系统默认。
可选地,上述步骤B2,在所述第一视频序列中展示所述至少一个位置标记,可以包括如下步骤:
B21、从所述至少一个振动参数集中提取针对位置标记i的目标振动参数,所述位置标记i为所述至少一个位置标记中的任一位置标记;
B22、对所述目标振动参数进行处理,得到目标特征值;
B23、按照预设的特征值与展示效果参数之间的映射关系,确定所述目标特征值对应的目标展示效果参数;
B24、依据所述目标展示效果参数在所述第一视频序列中展示所述位置标记i。
其中,上述展示效果参数可以为以下至少一种:显示亮度、显示颜色、显示色调、是否需要加显示框、是否需要加箭头等等,在此不作限定。电子设备中还可以预先存储预设的特征值与展示效果参数之间的映射关系,具体实现中,由于不同的振动参数对应不同的标记,每一标记可以取以一个点作为中心,一定范围内的振动参数作为该点的振动参数,进而,可以从至少一个振动参数集中提取针对位置标记i的目标振动参数,位置标记i为至少一个位置标记中的任一位置标记,对目标振动参数进行处理,得到目标特征值,按照预 设的特征值与展示效果参数之间的映射关系,确定目标特征值对应的目标展示效果参数,依据目标展示效果参数在第一视频序列中展示位置标记i,如此,可以针对振动差异,对不同的标记进行差异化显示,有助于用户快速且精准看到振动效果。
进一步可选地,上述步骤B22,对所述目标振动参数进行处理,得到目标特征值,可包括如下步骤:
B221、从所述目标振动参数中获取时域波形;
B222、对所述时域波形进行预处理,得到目标时域波形;
B223、将所述目标时域波形与预设时域波形进行比对;
B224、在所述目标时域波形与所述预设时域波形进行比对成功时,确定所述目标时域波形的多个极值点;
B225、依据所述多个极值点确定所述目标特征值。
其中,由于振动是一直振动,例如,设备在运动,则其部件也是在一直转动的,振动参数中可以包括大量振动数据,上述预处理可以包括以下至少一种:信号放大、降噪处理等等,在此不作限定。预设时域波形可以为故障状态对应的波形,预设时域波形可以由用户自行设置或者系统默认。具体实现中,可以从目标振动参数中获取时域波形,进而,对该目标时域波形进行预处理,得到目标时域波形,将目标时域波形与预设时域波形进行比对,在目标时域波形与所述预设时域波形进行比对成功时,确定目标时域波形的多个极值点,极值点可以包括极大值点或者极小值点,可以依据多个极值点确定目标特征值,反之,在目标时域波形与所述预设时域波形进行比对失败时,则可以说明目标对象运作正常,可以不进行振动展示,或者,提示用户设备运转正常。具体地,例如,将该多个极值点的均方差作为目标特征值,或者,可以将该多个极值点的数量作为目标特征值。
可选地,上述步骤B225,依据所述多个极值点确定所述目标特征值,可以包括如下步骤:
B2251、从所述多个极值点中选取绝对值大于预设阈值的极值点,得到多个目标极值点;
B2252、将所述多个目标极值点分为两类,第一类包括多个极大值点,第二类包括多个极小值点;
B2253、确定所述第一类中多个极大值点之间的第一均方差;
B2254、确定所述第二类中的多个极小值之间的第二均方差;
B2255、将所述第一均方差与所述第二均方差之间的比值的绝对值作为所述目标特征值。
其中,上述预设阈值可以由用户自行设置或者系统默认。具体实现中,可以从多个极值点中选取绝对值大于预设阈值的极值点,得到多个目标极值点,将多个目标极值点分为两类,第一类包括多个极大值点,第二类包括多个极小值点,确定第一类中多个极大值点之间的第一均方差,确定第二类中的多个极小值之间的第二均方差,将第一均方差与第二均方差之间的比值的绝对值作为目标特征值,如此,首先对极值点进行筛选,得到稳定的极值点,这些极值点更能分析目标对象的振动情况,另外,将这些极值点分为两类,极大值点和极小值点,通过该两类分别计算均方差,再将最终的比值的绝对值作为目标特征值,目标特征值由于充分把握了振幅在向上以及向下运动的分布情况,因此,这样的目标特征值能够精准反映振动趋势,有助于提升故障信息精度。
可以看出,通过本申请实施例故障检测方法,获取针对目标对象的第一视频序列,第一视频序列为RGB视频图像,依据第一视频序列提取目标对象对应的至少一个振动参数集,依据至少一个振动参数集确定目标对象的目标故障信息,并在显示界面展示目标故障信息,如此,可以从视频图像中提取到振动参数,并依据振动参数分析出故障信息,能够快速识 别出故障信息,提升了故障检测效率。
与上述一致地,请参阅图2,为本申请实施例提供的一种故障检测方法的实施例流程示意图。本实施例中所描述的故障检测方法,包括以下步骤:
201、获取针对目标对象的第一视频序列,所述第一视频序列为RGB视频图像;
202、依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集;
203、将振动参数集i进行取均方差,得到目标均方差,所述振动参数集i为所述至少一个振动参数集中的任一振动参数集;
204、按照预设的均方差与故障信息之间的映射关系,确定所述目标均方差对应的目标故障信息。
205、将振动参数集i进行线性拟合,得到目标拟合函数,所述目标拟合函数包括横轴为时间,纵轴为与所述振动参数集i中的至少一个振动参数相关;
206、依据所述目标拟合函数预估所述振动参数集i对应的部件的维护信息。
其中,上述步骤201-步骤206所描述的故障检测方法可参考图1A所描述的故障检测方法的对应步骤。
可以看出,通过本申请实施例故障检测方法及相关产品,获取针对目标对象的第一视频序列,第一视频序列为RGB视频图像,依据第一视频序列提取目标对象对应的至少一个振动参数集,依据至少一个振动参数集确定目标对象的目标故障信息,并在显示界面展示目标故障信息,将振动参数集i进行取均方差,得到目标均方差,振动参数集i为至少一个振动参数集中的任一振动参数集,按照预设的均方差与故障信息之间的映射关系,确定目标均方差对应的目标故障信息,将振动参数集i进行线性拟合,得到目标拟合函数,目标拟合函数包括横轴为时间,纵轴为与振动参数集i中的至少一个振动参数相关,依据目标拟合函数预估振动参数集i对应的部件的维护信息如此,可以从视频图像中提取到振动参数,并依据振动参数分析出故障信息,能够快速识别出故障信息,提升了故障检测效率,还能够预估目标对象何时需要维护以及如何维护,提升了维护效率。
与上述一致地,以下为实施上述故障检测方法的装置,具体如下:
请参阅图3A,为本申请实施例提供的一种故障检测装置的实施例结构示意图。本实施例中所描述的故障检测装置,包括:获取单元301、提取单元302和显示单元303,具体如下:
获取单元301,用于获取针对目标对象的第一视频序列,所述第一视频序列为RGB视频图像;
提取单元302,用于依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集;
显示单元303,用于依据所述至少一个振动参数集确定所述目标对象的目标故障信息,并在显示界面展示所述目标故障信息。
可以看出,通过本申请实施例故障检测装置,获取针对目标对象的第一视频序列,第一视频序列为RGB视频图像,依据第一视频序列提取目标对象对应的至少一个振动参数集,依据至少一个振动参数集确定目标对象的目标故障信息,并在显示界面展示目标故障信息,如此,可以从视频图像中提取到振动参数,并依据振动参数分析出故障信息,能够快速识别出故障信息,提升了故障检测效率。
可选地,在所述依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集方面,所述提取单元302具体用于:
将所述第一视频序列进行颜色空间变换,得到变换后的所述第一视频序列;
对变换后的所述第一视频序列进行分离,得到亮度分量信息和色度分量信息;
对所述亮度分量信息进行傅里叶变换,得到频域亮度分量信息;
对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量;
对所述放大处理后的所述频域亮度分量进行反傅里叶变换,得到时域亮度分量信息;
将所述时域亮度信息和所述色度分量信息进行合成,得到合成视频序列;
对所述合成视频序列进行所述颜色空间变换对应的反变换,得到第二视频序列;
依据所述第二视频序列确定所述目标对象对应的至少一个振动参数集。
可选地,在所述依据所述第二视频序列确定所述目标对象对应的至少一个振动参数集方面,所述提取单元302具体用于:
确定所述第二视频序列对应的帧序列间的交叉互功率谱;
对所述交叉互功率谱进行插值滤波处理,得到处理后的所述交叉互功率谱;
对处理后的所述交叉互功率谱进行反傅里叶变换,并逐相位比较,得到所述目标对象对应的至少一个振动参数集。
可选地,在所述对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量方面,所述提取单元302具体用于:
对所述频域亮度分量信息进行特征点提取,得到特征点集;
对所述特征点集进行聚类,得到多类特征点;
对所述频域亮度分量信息中的每一像素点进行光流场插值运算,得到稠密光流场;
跟踪所述多类特征点中每一类特征点的运动轨迹,得到多个动作层,每一类特征点对应一个动作层;
对所述多个动作层进行纹理填充,并将纹理填充后的所述多个动作层中的预设层进行放大处理,得到放大处理后的所述频域亮度分量。
可选地,每一振动参数集对应所述目标对象的一个部件;
在所述依据所述至少一个振动参数集确定所述目标对象的故障信息方面,所述显示单元303具体用于:
将振动参数集i进行取均方差,得到目标均方差,所述振动参数集i为所述至少一个振动参数集中的任一振动参数集;
按照预设的均方差与故障信息之间的映射关系,确定所述目标均方差对应的所述目标故障信息。
可选地,如图3B,图3B为图3A所描述的故障检测装置的又一变型结构,其与图3A相比较,还可以包括:拟合单元304和确定单元305,具体如下:
拟合单元304,用于将振动参数集i进行线性拟合,得到目标拟合函数,所述目标拟合函数包括横轴为时间,纵轴为与所述振动参数集i中的至少一个振动参数相关;
确定单元305,用于依据所述目标拟合函数预估所述振动参数集i对应的部件的维护信息。
可以理解的是,本实施例的故障检测装置的各程序模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
与上述一致地,请参阅图4,为本申请实施例提供的一种电子设备的实施例结构示意图。本实施例中所描述的电子设备,包括:至少一个输入设备1000;至少一个输出设备2000;至少一个处理器3000,例如CPU;和存储器4000,上述输入设备1000、输出设备2000、处理器3000和存储器4000通过总线5000连接。
其中,上述输入设备1000具体可为触控面板、物理按键或者鼠标。
上述输出设备2000具体可为显示屏。
上述存储器4000可以是高速RAM存储器,也可为非易失存储器(non-volatile memory),例如磁盘存储器。上述存储器4000用于存储一组程序代码,上述输入设备1000、输出设备2000和处理器3000用于调用存储器4000中存储的程序代码,执行如下操作:
上述处理器3000,用于:
获取针对目标对象的第一视频序列,所述第一视频序列为RGB视频图像;
依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集;
依据所述至少一个振动参数集确定所述目标对象的目标故障信息,并在显示界面展示所述目标故障信息。
可以看出,通过本申请实施例的所描述的电子设备,获取针对目标对象的第一视频序列,第一视频序列为RGB视频图像,依据第一视频序列提取目标对象对应的至少一个振动参数集,依据至少一个振动参数集确定目标对象的目标故障信息,并在显示界面展示目标故障信息,如此,可以从视频图像中提取到振动参数,并依据振动参数分析出故障信息,能够快速识别出故障信息,提升了故障检测效率。
可选地,在所述依据所述第一视频序列提取所述目标对象对应的至少一个振动参数集方面,上述处理器3000具体用于:
将所述第一视频序列进行颜色空间变换,得到变换后的所述第一视频序列;
对变换后的所述第一视频序列进行分离,得到亮度分量信息和色度分量信息;
对所述亮度分量信息进行傅里叶变换,得到频域亮度分量信息;
对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量;
对所述放大处理后的所述频域亮度分量进行反傅里叶变换,得到时域亮度分量信息;
将所述时域亮度信息和所述色度分量信息进行合成,得到合成视频序列;
对所述合成视频序列进行所述颜色空间变换对应的反变换,得到第二视频序列;
依据所述第二视频序列确定所述目标对象对应的至少一个振动参数集。
可选地,在所述依据所述第二视频序列确定所述目标对象对应的至少一个振动参数集方面,上述处理器3000具体用于:
确定所述第二视频序列对应的帧序列间的交叉互功率谱;
对所述交叉互功率谱进行插值滤波处理,得到处理后的所述交叉互功率谱;
对处理后的所述交叉互功率谱进行反傅里叶变换,并逐相位比较,得到所述目标对象对应的至少一个振动参数集。
可选地,在所述对所述频域亮度分量信息进行运动放大处理,得到放大处理后的所述频域亮度分量方面,上述处理器3000具体用于:
对所述频域亮度分量信息进行特征点提取,得到特征点集;
对所述特征点集进行聚类,得到多类特征点;
对所述频域亮度分量信息中的每一像素点进行光流场插值运算,得到稠密光流场;
跟踪所述多类特征点中每一类特征点的运动轨迹,得到多个动作层,每一类特征点对应一个动作层;
对所述多个动作层进行纹理填充,并将纹理填充后的所述多个动作层中的预设层进行放大处理,得到放大处理后的所述频域亮度分量。
可选地,每一振动参数集对应所述目标对象的一个部件;
在所述依据所述至少一个振动参数集确定所述目标对象的故障信息方面,上述处理器3000具体用于:
将振动参数集i进行取均方差,得到目标均方差,所述振动参数集i为所述至少一个振动参数集中的任一振动参数集;
按照预设的均方差与故障信息之间的映射关系,确定所述目标均方差对应的所述目标 故障信息。
可选地,上述处理器3000还具体用于:
将振动参数集i进行线性拟合,得到目标拟合函数,所述目标拟合函数包括横轴为时间,纵轴为与所述振动参数集i中的至少一个振动参数相关;
依据所述目标拟合函数预估所述振动参数集i对应的部件的维护信息。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任何一种故障检测方法的部分或全部步骤。
本申请实施例还提供一种计算机程序产品,其中,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例上述任一方法中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
Although the present application is described herein with reference to the various embodiments, in the course of practicing the claimed application, those skilled in the art can, by studying the drawings, the disclosure and the appended claims, understand and implement other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill several functions recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that these measures cannot be combined to good effect.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus (device), or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code. The computer program is stored in or distributed on a suitable medium, provided together with other hardware or as part of the hardware, and may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowcharts and/or block diagrams of the method, apparatus (device) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the present application is described in conjunction with specific features and embodiments thereof, it is obvious that various modifications and combinations can be made thereto without departing from the spirit and scope of the present application. Accordingly, the specification and drawings are merely exemplary descriptions of the present application as defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations or equivalents within the scope of the present application. Obviously, those skilled in the art can make various changes and variations to the present application without departing from its spirit and scope. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their equivalent technologies, the present application is also intended to include these changes and variations.

Claims (15)

  1. A fault detection method, characterized by comprising:
    acquiring a first video sequence for a target object, where the first video sequence is an RGB video image;
    extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
    determining target fault information of the target object according to the at least one vibration parameter set, and displaying the target fault information on a display interface.
  2. The method according to claim 1, characterized in that the extracting at least one vibration parameter set corresponding to the target object according to the first video sequence comprises:
    performing a color space transform on the first video sequence to obtain a transformed first video sequence;
    separating the transformed first video sequence to obtain luminance component information and chrominance component information;
    performing a Fourier transform on the luminance component information to obtain frequency-domain luminance component information;
    performing motion magnification on the frequency-domain luminance component information to obtain a magnified frequency-domain luminance component;
    performing an inverse Fourier transform on the magnified frequency-domain luminance component to obtain time-domain luminance component information;
    synthesizing the time-domain luminance component information and the chrominance component information to obtain a synthesized video sequence;
    performing, on the synthesized video sequence, an inverse transform corresponding to the color space transform to obtain a second video sequence;
    determining the at least one vibration parameter set corresponding to the target object according to the second video sequence.
  3. The method according to claim 2, characterized in that the determining the at least one vibration parameter set corresponding to the target object according to the second video sequence comprises:
    determining a cross power spectrum between frame sequences corresponding to the second video sequence;
    performing interpolation filtering on the cross power spectrum to obtain a processed cross power spectrum;
    performing an inverse Fourier transform on the processed cross power spectrum and comparing it phase by phase, to obtain the at least one vibration parameter set corresponding to the target object.
  4. The method according to claim 2 or 3, characterized in that the performing motion magnification on the frequency-domain luminance component information to obtain the magnified frequency-domain luminance component comprises:
    performing feature point extraction on the frequency-domain luminance component information to obtain a feature point set;
    clustering the feature point set to obtain multiple classes of feature points;
    performing an optical flow field interpolation operation on each pixel of the frequency-domain luminance component information to obtain a dense optical flow field;
    tracking the motion trajectory of each class of feature points among the multiple classes of feature points to obtain multiple motion layers, where each class of feature points corresponds to one motion layer;
    performing texture filling on the multiple motion layers, and magnifying a preset layer among the texture-filled multiple motion layers to obtain the magnified frequency-domain luminance component.
  5. The method according to any one of claims 1 to 4, characterized in that each vibration parameter set corresponds to one component of the target object;
    the determining the fault information of the target object according to the at least one vibration parameter set comprises:
    computing the mean square deviation of a vibration parameter set i to obtain a target mean square deviation, where the vibration parameter set i is any one of the at least one vibration parameter set;
    determining, according to a preset mapping relationship between mean square deviations and fault information, the target fault information corresponding to the target mean square deviation.
  6. The method according to claim 5, characterized in that the method further comprises:
    performing linear fitting on the vibration parameter set i to obtain a target fitting function, where the horizontal axis of the target fitting function represents time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i;
    estimating, according to the target fitting function, maintenance information of the component corresponding to the vibration parameter set i.
  7. A fault detection device, characterized by comprising:
    an acquiring unit, configured to acquire a first video sequence for a target object, where the first video sequence is an RGB video image;
    an extraction unit, configured to extract at least one vibration parameter set corresponding to the target object according to the first video sequence;
    a display unit, configured to determine target fault information of the target object according to the at least one vibration parameter set, and display the target fault information on a display interface.
  8. The device according to claim 7, characterized in that, in terms of extracting the at least one vibration parameter set corresponding to the target object according to the first video sequence, the extraction unit is specifically configured to:
    perform a color space transform on the first video sequence to obtain a transformed first video sequence;
    separate the transformed first video sequence to obtain luminance component information and chrominance component information;
    perform a Fourier transform on the luminance component information to obtain frequency-domain luminance component information;
    perform motion magnification on the frequency-domain luminance component information to obtain a magnified frequency-domain luminance component;
    perform an inverse Fourier transform on the magnified frequency-domain luminance component to obtain time-domain luminance component information;
    synthesize the time-domain luminance component information and the chrominance component information to obtain a synthesized video sequence;
    perform, on the synthesized video sequence, an inverse transform corresponding to the color space transform to obtain a second video sequence;
    determine the at least one vibration parameter set corresponding to the target object according to the second video sequence.
  9. The device according to claim 8, characterized in that, in terms of determining the at least one vibration parameter set corresponding to the target object according to the second video sequence, the extraction unit is specifically configured to:
    determine a cross power spectrum between frame sequences corresponding to the second video sequence;
    perform interpolation filtering on the cross power spectrum to obtain a processed cross power spectrum;
    perform an inverse Fourier transform on the processed cross power spectrum and compare it phase by phase, to obtain the at least one vibration parameter set corresponding to the target object.
  10. The device according to claim 8 or 9, characterized in that, in terms of performing motion magnification on the frequency-domain luminance component information to obtain the magnified frequency-domain luminance component, the extraction unit is specifically configured to:
    perform feature point extraction on the frequency-domain luminance component information to obtain a feature point set;
    cluster the feature point set to obtain multiple classes of feature points;
    perform an optical flow field interpolation operation on each pixel of the frequency-domain luminance component information to obtain a dense optical flow field;
    track the motion trajectory of each class of feature points among the multiple classes of feature points to obtain multiple motion layers, where each class of feature points corresponds to one motion layer;
    perform texture filling on the multiple motion layers, and magnify a preset layer among the texture-filled multiple motion layers to obtain the magnified frequency-domain luminance component.
  11. The device according to any one of claims 7 to 10, characterized in that each vibration parameter set corresponds to one component of the target object;
    in terms of determining the fault information of the target object according to the at least one vibration parameter set, the display unit is specifically configured to:
    compute the mean square deviation of a vibration parameter set i to obtain a target mean square deviation, where the vibration parameter set i is any one of the at least one vibration parameter set;
    determine, according to a preset mapping relationship between mean square deviations and fault information, the target fault information corresponding to the target mean square deviation.
  12. The device according to claim 11, characterized in that the device further comprises a fitting unit and a determining unit, where
    the fitting unit is configured to perform linear fitting on the vibration parameter set i to obtain a target fitting function, where the horizontal axis of the target fitting function represents time and the vertical axis is related to at least one vibration parameter in the vibration parameter set i;
    the determining unit is configured to estimate, according to the target fitting function, maintenance information of the component corresponding to the vibration parameter set i.
  13. An electronic device, characterized by comprising a processor and a memory, where the memory is configured to store one or more programs configured to be executed by the processor, and the programs include instructions for performing the steps in the method according to any one of claims 1 to 6.
  14. A computer-readable storage medium storing a computer program, where the computer program is executed by a processor to implement the method according to any one of claims 1 to 6.
  15. A computer program product, characterized in that the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the method according to any one of claims 1 to 6.
PCT/CN2020/104821 2019-04-26 2020-07-27 Fault detection method and related product WO2021036634A1 (zh)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201910345977 2019-04-26
CN201910810243.4A CN110602485B (zh) 2019-04-26 2019-08-29 Fault detection method and related product
CN201910810243.4 2019-08-29

Publications (1)

Publication Number Publication Date
WO2021036634A1 true WO2021036634A1 (zh) 2021-03-04

Family

ID=68856367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104821 WO2021036634A1 (zh) 2019-04-26 2020-07-27 Fault detection method and related product

Country Status (2)

Country Link
CN (1) CN110602485B (zh)
WO (1) WO2021036634A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110595602B * 2019-04-26 2021-10-15 深圳市豪视智能科技有限公司 Vibration detection method and related product
CN110602485B * 2019-04-26 2020-09-29 深圳市豪视智能科技有限公司 Fault detection method and related product
CN112200019B * 2020-09-22 2024-02-09 上海罗曼照明科技股份有限公司 Rapid fault detection method for architectural night-scene lighting
CN113128474B * 2021-05-17 2022-06-03 重庆大学 Structural modal identification method based on computer vision and variational mode decomposition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104048744A * 2014-07-08 2014-09-17 安徽常春藤光电智能科技有限公司 Non-contact image-based real-time online vibration measurement method
CN106355563A * 2016-08-31 2017-01-25 河南工业大学 Image defogging method and device
CN108414240A * 2018-03-15 2018-08-17 广东工业大学 Method and device for detecting abnormal machine vibration
JP2018143006A * 2018-06-07 2018-09-13 株式会社ニコン Lens unit
CN109520690A * 2018-10-30 2019-03-26 西安交通大学 Video-based device and method for global measurement of rotor mode shapes of rotating machinery
CN110602485A * 2019-04-26 2019-12-20 深圳市豪视智能科技有限公司 Fault detection method and related product

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101551901B * 2009-05-25 2010-12-29 中国人民解放军国防科学技术大学 Real-time compensation and enhancement method for dynamically occluded images
US9990555B2 * 2015-04-30 2018-06-05 Beijing Kuangshi Technology Co., Ltd. Video detection method, video detection system and computer program product
US10178381B2 * 2015-07-14 2019-01-08 Microsoft Technology Licensing, Llc Depth-spatial frequency-response assessment
JP2018085627A * 2016-11-24 2018-05-31 川原 功 Image generation device for video resolution evaluation
CN106897694A * 2017-02-24 2017-06-27 西安天和防务技术股份有限公司 Illegal-construction scene recognition method for land and resources monitoring
CN109350030B * 2018-08-17 2020-04-21 西安电子科技大学 System and method for extracting heart-rate signals from face video based on phase magnification

Also Published As

Publication number Publication date
CN110602485A (zh) 2019-12-20
CN110602485B (zh) 2020-09-29

Similar Documents

Publication Publication Date Title
WO2021036633A1 (zh) 振动检测方法及相关产品
WO2021036634A1 (zh) 故障检测方法及相关产品
CN109670474B (zh) 一种基于视频的人体姿态估计方法、装置及设备
EP3674852B1 (en) Method and apparatus with gaze estimation
CN110998659B (zh) 图像处理系统、图像处理方法、及程序
CN113408508B (zh) 基于Transformer的非接触式心率测量方法
WO2019072243A1 (zh) 动作识别、姿势估计的方法及装置
CN110349082B (zh) 图像区域的裁剪方法和装置、存储介质及电子装置
CN108875533B (zh) 人脸识别的方法、装置、系统及计算机存储介质
CN110363763B (zh) 图像质量评价方法、装置、电子设备及可读存储介质
CN106137175B (zh) 生理参数估计
Zhang et al. An algorithm for no-reference image quality assessment based on log-derivative statistics of natural scenes
WO2021114896A1 (zh) 一种基于计算机视觉的异常检测方法、装置及电子设备
WO2021036631A1 (zh) 基于视频进行振动分析方法及相关产品
US20200279102A1 (en) Movement monitoring system
CN112766045B (zh) 场景变化检测方法、系统、电子装置及存储介质
WO2021052020A1 (zh) 振动检测系统
CN112307984A (zh) 基于神经网络的安全帽检测方法和装置
JP6786098B2 (ja) 歩容解析装置
CN109492585B (zh) 一种活体检测方法和电子设备
JPWO2021181627A5 (ja) 画像処理装置、画像認識システム、画像処理方法および画像処理プログラム
Xiao A Multi-scale Structure SIMilarity metric for image fusion qulity assessment
Choi et al. Appearance-based gaze estimation using kinect
CN116091529A (zh) 一种适用于便携式设备的弱视实时图像处理方法及设备
CN108537105A (zh) 一种家庭环境下的危险行为识别方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20858093
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20858093
    Country of ref document: EP
    Kind code of ref document: A1