CN110602485B - Fault detection method and related product

Info

Publication number: CN110602485B
Application number: CN201910810243.4A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: video sequence, target, parameter set, vibration parameter, frequency domain
Legal status: Active (granted)
Inventor: 高风波
Assignee (original and current): Shenzhen Haoxi Intelligent Technology Co., Ltd.
Other versions: CN110602485A (application publication)
Related application: PCT/CN2020/104821, published as WO2021036634A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details

Abstract

The embodiment of the application provides a fault detection method and a related product. The method includes: acquiring a first video sequence for a target object, the first video sequence being an RGB video image; extracting at least one vibration parameter set corresponding to the target object from the first video sequence; and determining target fault information of the target object according to the at least one vibration parameter set and displaying the target fault information on a display interface. With the method and the device, vibration parameters can be extracted from a video image and fault information analyzed from those parameters, so that faults can be identified rapidly and fault detection efficiency is improved.

Description

Fault detection method and related product
Technical Field
The application relates to the technical field of video monitoring, in particular to a fault detection method and a related product.
Background
All mechanical and kinematic systems produce a wide variety of vibrations, some of which reflect normal operating conditions of the system and others of which reflect abnormal conditions (e.g., component failure, shaft connection imbalance, etc.). Vibration detection is therefore an important link in the predictive maintenance of system equipment. Currently, vibration detection mostly relies on accelerometers. Although accurate and reliable, an accelerometer requires long preparation and installation time, must be in physical contact with the system under test (thereby changing its vibration response), and can only measure a very limited number of discrete points, so the problem of how to improve fault detection efficiency needs to be solved.
Disclosure of Invention
The embodiment of the application provides a fault detection method and a related product, and the fault detection efficiency can be improved.
A first aspect of an embodiment of the present application provides a fault detection method, including:
acquiring a first video sequence aiming at a target object, wherein the first video sequence is an RGB video image;
extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
and determining target fault information of the target object according to the at least one vibration parameter set, and displaying the target fault information on a display interface.
A second aspect of the embodiments of the present application provides a fault detection apparatus, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first video sequence aiming at a target object, and the first video sequence is an RGB video image;
the extracting unit is used for extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
and the display unit is used for determining the target fault information of the target object according to the at least one vibration parameter set and displaying the target fault information on a display interface.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that, with the fault detection method and related products of the embodiments of the application, a first video sequence for a target object is obtained, the first video sequence being an RGB video image; at least one vibration parameter set corresponding to the target object is extracted from the first video sequence; target fault information of the target object is determined according to the at least one vibration parameter set; and the target fault information is displayed on the display interface. In this way, vibration parameters can be extracted from a video image and fault information analyzed from the vibration parameters, so that faults can be rapidly identified and fault detection efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1A is a schematic flowchart of an embodiment of a fault detection method provided in an embodiment of the present application;
FIG. 1B is a schematic diagram illustrating an application scenario provided by an embodiment of the present application;
fig. 1C is a schematic diagram illustrating a motion amplification method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an embodiment of another fault detection method provided in an embodiment of the present application;
fig. 3A is a schematic structural diagram of an embodiment of a fault detection apparatus provided in an embodiment of the present application;
fig. 3B is a schematic structural diagram of another fault detection apparatus provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device described in the embodiments of the present application may include a smart phone (e.g., an Android phone, an iOS phone, a Windows phone, etc.), a tablet computer, a palm computer, a notebook computer, a video matrix, a monitoring platform, a Mobile Internet Device (MID), or a wearable device. These are merely examples, not an exhaustive list; the electronic device may also be a server.
Fig. 1A is a schematic flow chart of an embodiment of a fault detection method according to an embodiment of the present application. The fault detection method described in this embodiment includes the following steps:
101. A first video sequence for a target object is obtained, where the first video sequence is an RGB video image.
The target object may be one of: mechanical equipment (e.g., a numerically controlled machine tool), a vehicle, a communication tool, a medical instrument, a building, a person, an animal, and the like, without limitation. A first video sequence of the target object over a period of time may be acquired, the first video sequence being an RGB video image.
In this embodiment, the electronic device may include at least one camera, which may be at least one of the following: an infrared camera or a visible-light camera, where the visible-light camera may be an ordinary camera or a wide-angle camera, and the like, which is not limited herein. Of course, the electronic device may further include an ultrasonic sensor, by which the first video sequence of the target object may be acquired. The first video sequence may be a panoramic video image of the entire target object, a video image of a specified portion of the target object, or an image of the target object's internal operation. The embodiment of the application can be applied to the scenario shown in Fig. 1B: the camera shoots the target object to obtain the first video sequence, the electronic device analyzes the video sequence to obtain vibration parameters, and the vibration parameters are displayed on the display interface.
102. And extracting at least one vibration parameter set corresponding to the target object according to the first video sequence.
The vibration parameter set may include at least one of the following vibration parameters: vibration frequency, vibration amplitude, vibration waveform, vibration period, vibration range, etc., without limitation. A minimal sketch of such a container follows.
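The patent does not fix a concrete representation for a vibration parameter set; the sketch below is one possible container, with field names that are illustrative assumptions rather than terms from the patent.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical container for one vibration parameter set; the patent only
# enumerates the kinds of parameters a set may hold, so the field names and
# units here are illustrative assumptions.
@dataclass
class VibrationParameterSet:
    frequencies: List[float] = field(default_factory=list)  # Hz
    amplitudes: List[float] = field(default_factory=list)   # e.g., pixels
    periods: List[float] = field(default_factory=list)      # seconds
    waveform: List[float] = field(default_factory=list)     # sampled time-domain curve
```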
Optionally, in the step 102, extracting at least one vibration parameter set corresponding to the target object according to the first video sequence may include the following steps:
21. performing color space transformation on the first video sequence to obtain a transformed first video sequence;
22. separating the transformed first video sequence to obtain luminance component information and chrominance component information;
23. performing Fourier transform on the brightness component information to obtain frequency domain brightness component information;
24. performing motion amplification processing on the frequency domain brightness component information to obtain the frequency domain brightness component after amplification processing;
25. performing inverse Fourier transform on the amplified frequency domain brightness component to obtain time domain brightness component information;
26. synthesizing the time domain brightness information and the chrominance component information to obtain a synthesized video sequence;
27. performing inverse transformation corresponding to the color space transformation on the synthesized video sequence to obtain a second video sequence;
28. and determining at least one vibration parameter set corresponding to the target object according to the second video sequence.
The color space transformation may be any one of: RGB to YCbCr, RGB to YIQ, RGB to HSI, or RGB to YUV, without limitation. In a specific implementation, the first video sequence may be color-space transformed, after which the luminance component can be extracted; that is, the transformed first video sequence is separated into luminance component information and chrominance component information. For example, a video camera captures a section of video containing the vibrating object to be measured; the frame sequence of the video is then converted from the RGB color space to the YIQ color space, and the luminance and chrominance components of each frame are separated. All subsequent steps operate only on the Y channel of the YIQ video, i.e., the luminance information, which reduces the amount of arithmetic and increases the running speed of the algorithm. The conversion from RGB to YIQ is as follows:
Y = 0.299*R + 0.587*G + 0.114*B;
I = 0.596*R - 0.275*G - 0.321*B;
Q = 0.212*R - 0.523*G + 0.311*B.
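The conversion can be written as a single matrix product. The sketch below applies the coefficients above with NumPy and inverts the same matrix to return to RGB; the channel order (R, G, B) and the float value range are assumptions of this sketch.

```python
import numpy as np

# RGB -> YIQ using the coefficients given above; `frame` is an (H, W, 3)
# float array with channels in R, G, B order (an assumption of this sketch).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.275, -0.321],
                    [0.212, -0.523,  0.311]])

def rgb_to_yiq(frame: np.ndarray) -> np.ndarray:
    return frame @ RGB2YIQ.T

def yiq_to_rgb(frame: np.ndarray) -> np.ndarray:
    # Inverse transform recovers RGB after the Y channel has been processed.
    return frame @ np.linalg.inv(RGB2YIQ).T

yiq = rgb_to_yiq(np.random.rand(4, 4, 3))
y, i, q = yiq[..., 0], yiq[..., 1], yiq[..., 2]  # luminance / chrominance split
```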
Further, Fourier transform may be performed on the luminance component information to obtain frequency domain luminance component information, and motion amplification processing may be performed on it to obtain the amplified frequency domain luminance component. The motion amplification may adopt at least one of the following: a Lagrangian motion amplification algorithm, a Eulerian motion amplification algorithm, a complex-phase motion amplification algorithm, or a Riesz pyramid motion amplification algorithm, without limitation. Inverse Fourier transform is then performed on the amplified frequency domain luminance component to obtain time domain luminance component information, which is synthesized with the chrominance component information to obtain a synthesized video sequence. The inverse of the color space transformation is applied to the synthesized video sequence to obtain a second video sequence, and at least one vibration parameter set corresponding to the target object is determined from the second video sequence.
Further optionally, in step 28, the determining at least one vibration parameter set corresponding to the target object according to the second video sequence may include the following steps:
281. determining a cross-power spectrum between frame sequences corresponding to the second video sequence;
282. carrying out interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum;
283. and performing inverse Fourier transform on the processed cross power spectrum, and comparing the cross power spectrum phase by phase to obtain at least one vibration parameter set corresponding to the target object.
In a specific implementation, a phase correlation algorithm may be used to calculate the cross-power spectrum between the frame sequences after video motion amplification. Specifically, the phase correlation algorithm computes the cross-power spectrum with the following formula:
R = (F_a · F_b*) / |F_a · F_b*|

where F_a is the Fourier transform of the frame-a image, F_b* is the complex conjugate of the Fourier transform of the frame-b image, and the denominator is the modulus of the correlation product of the two Fourier-transformed signals. R is the cross-power spectrum computed in this step (still including frequency-domain noise).
Further, an adaptive filter bank may be used to reconstruct the motion signal: according to the position of the correlation peak of R, a filter bank is adaptively selected for filtering, yielding the cross-power spectrum R' with the frequency-domain noise filtered out. At this point a sliding-window adaptive matching method may be used to estimate and extract the vibration parameters. Inverse Fourier transform is then performed on R', followed by a phase-by-phase comparison, so that the vibration information of the pixels in the video is calculated as:

r = F^-1{R'}

where F^-1 denotes the inverse Fourier transform applied to the cross-power spectrum, R' is the cross-power spectrum after the frequency-domain noise has been filtered out, and r is the vibration information corresponding to the pixels in the video.
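As a concrete illustration, the sketch below computes the normalised cross-power spectrum and the phase-correlation surface for two frames with NumPy. It is a minimal rendering of the two formulas above; the epsilon guard and the use of the unfiltered R (rather than the noise-filtered R') are simplifications of this sketch, not part of the method.

```python
import numpy as np

def cross_power_spectrum(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    # Fourier transforms of two grayscale frames.
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    prod = Fa * np.conj(Fb)
    # Normalised cross-power spectrum R = Fa.Fb* / |Fa.Fb*| per the formula
    # above; a small epsilon guards against division by zero.
    return prod / (np.abs(prod) + 1e-12)

def phase_correlation(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    # r = F^-1{R'}; here the unfiltered R is used, whereas the method above
    # first removes frequency-domain noise to obtain R'.
    return np.real(np.fft.ifft2(cross_power_spectrum(frame_a, frame_b)))
```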
Optionally, in the step 282, performing interpolation filtering processing on the cross-power spectrum to obtain the processed cross-power spectrum, which may include the following steps:
2821. acquiring a plurality of state change signals corresponding to the cross power spectrum, wherein the state change signals are time domain signals;
2822. extracting target state change signal segments with preset lengths from the plurality of state change signals to obtain a plurality of target state change signal segments, and obtaining a target frequency of each target state change signal segment in the plurality of target state change signal segments to obtain a plurality of target frequencies;
2823. setting a corresponding sliding window according to a target frequency corresponding to each state change signal in the state change signals to obtain a plurality of sliding windows, and sending each state change signal in the state change signals to the corresponding sliding window;
2824. taking the state change signal which cannot pass through the corresponding sliding window in the plurality of state change signals as a non-periodic signal to obtain at least one non-periodic signal;
2825. removing the at least one non-periodic signal from the plurality of state change signals to obtain the filtered cross-power spectrum.
The cross-power spectrum R is a frequency domain signal containing one or more correlation peaks. Performing inverse Fourier transform on R yields a state change signal corresponding to each correlation peak. Each state change signal reflects the state change at a certain position in the amplified video, so a target state change signal segment of preset length can be extracted from each of the plurality of state change signals to obtain a plurality of target state change signal segments. The state change information includes both vibration data and other noise: for example, a change in illumination can also cause a state change in the video picture, whereas the vibration data reflects the operating condition of the vibrating object. The vibration of the device under test during operation is periodic, so the state changes caused by that vibration are also periodic. Although much noise can cause state changes at individual pixels in the amplified video, noise-induced state changes are usually not periodic. Periodic vibration can be used to infer the operating condition of the equipment; non-periodic vibration is usually caused by the external environment rather than the equipment itself, so non-periodic signals cannot be used to analyze the equipment's operating condition. By identifying the non-periodic signals among the state change signals, the noise not caused by the equipment's own vibration is identified. Since a non-periodic signal contributes little or nothing to analyzing operating conditions, and may even interfere, this portion can be removed so that the state change signals obtained from the amplified video carry more useful information.
Specifically, a target state change signal segment of preset length may be extracted first and its target frequency obtained; the frequency of the remaining part of the state change signal is then compared with the target frequency, and if they are inconsistent the state change signal is considered aperiodic. The preset length may be set by the user to a fixed value, or adapted to the length of the signal during processing; for example, it may be set to 1/10 of the length of the state change signal. After the target frequency of the target segment is obtained, the sliding window is set according to the target frequency, for example so that only signals whose frequency matches the target frequency can pass through the sliding window, while signals whose frequency does not match cannot. If a state change signal cannot pass through its corresponding sliding window, a signal segment whose frequency is inconsistent with the target frequency exists within it, i.e., the state change signal is aperiodic. Judging in this sliding-window manner whether the rest of the signal matches the target frequency reaches a conclusion conveniently and quickly, with a small amount of computation.
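A hedged sketch of this screening follows: the target frequency is estimated from a prefix segment (here 1/10 of the signal, per the example above), and a window slid over the remainder flags any segment whose dominant frequency deviates from the target. The FFT-based frequency estimate and the tolerance parameter are assumptions of this sketch; the method only requires that mismatching frequencies fail the window.

```python
import numpy as np

def dominant_frequency(segment: np.ndarray, fs: float) -> float:
    # Strongest non-DC component of the segment's spectrum.
    spectrum = np.abs(np.fft.rfft(segment - segment.mean()))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

def is_periodic(signal: np.ndarray, fs: float, tol: float = 0.1) -> bool:
    # Target frequency from a prefix of 1/10 of the signal, as suggested above.
    seg_len = max(len(signal) // 10, 8)
    target = dominant_frequency(signal[:seg_len], fs)
    # Slide a window over the rest; any window whose dominant frequency
    # deviates from the target marks the signal as aperiodic.
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        f = dominant_frequency(signal[start:start + seg_len], fs)
        if abs(f - target) > tol * max(target, 1e-9):
            return False
    return True

def filter_aperiodic(signals, fs):
    # Keep only the periodic state change signals (steps 2824-2825).
    return [s for s in signals if is_periodic(s, fs)]
```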
Further optionally, in step 24, performing motion amplification processing on the frequency domain luminance component information to obtain the frequency domain luminance component after amplification processing, which may include the following steps:
2411. extracting characteristic points of the frequency domain brightness component information to obtain a characteristic point set;
2412. clustering the feature point set to obtain multi-type feature points;
2413. performing optical flow field interpolation operation on each pixel point in the frequency domain brightness component information to obtain a dense optical flow field;
2414. tracking the motion track of each type of feature point in the multiple types of feature points to obtain multiple action layers, wherein each type of feature point corresponds to one action layer;
2415. and performing texture filling on the plurality of action layers, and performing amplification processing on a preset layer in the plurality of action layers after the texture filling to obtain the amplified frequency domain brightness component.
The feature point extraction may use at least one of the following methods: Harris corner detection, Scale-Invariant Feature Transform (SIFT), the SURF feature extraction algorithm, and the like, without limitation. In a specific implementation, feature point extraction is performed on the frequency domain luminance component information to obtain a feature point set; the feature point set is clustered into multiple classes of feature points; an optical-flow-field interpolation operation is performed for each pixel of the frequency domain luminance component information to obtain a dense optical flow field; the motion trajectory of each class of feature points is tracked to obtain multiple action layers, one layer per class; texture filling is performed on the action layers; and a preset layer among the texture-filled action layers is amplified to obtain the amplified frequency domain luminance component. The preset layer may be set by the user or by system default. In this way the features corresponding to the vibration are amplified, which facilitates the subsequent analysis of the vibration's effects.
Optionally, before step 2411, the frequency domain luminance component information may be registered to obtain a registered image; feature point extraction in step 2411 is then performed on the registered image, which prevents spurious motion caused by camera shake.
In a specific implementation, feature point tracking may be affected by factors such as image shading and occlusion, which degrades the tracking result; and the accuracy of the motion-layer segmentation directly affects the final amplification effect. This process can therefore be combined with manual intervention, i.e., some motion layers are selected manually, which further improves tracking precision.
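Sub-steps 2411-2413 can be sketched with OpenCV as below. OpenCV, the Farneback flow, and all parameter values are tooling assumptions, not mandated by the patent (which also allows SIFT or SURF for step 2411); the inputs are assumed to be 8-bit grayscale luminance frames.

```python
import cv2
import numpy as np

# Steps 2411-2413 sketched with OpenCV: Harris-style corner detection,
# k-means clustering of the points, and a dense (Farneback) optical flow
# field between two 8-bit grayscale luminance frames.
def motion_layers(prev_y: np.ndarray, next_y: np.ndarray, n_layers: int = 3):
    # Step 2411: feature point extraction (Harris corners here).
    pts = cv2.goodFeaturesToTrack(prev_y, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True)
    pts = pts.reshape(-1, 2).astype(np.float32)

    # Step 2412: cluster the feature points into n_layers motion groups.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pts, n_layers, None, criteria,
                              attempts=3, flags=cv2.KMEANS_PP_CENTERS)

    # Step 2413: dense optical flow for every pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_y, next_y, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return pts, labels.ravel(), flow
```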
Further optionally, in step 24, performing motion amplification processing on the frequency domain luminance component information to obtain the frequency domain luminance component after amplification processing, which may include the following steps:
2421. performing spatial domain decomposition on the frequency domain brightness component to obtain the decomposed frequency domain brightness component;
2422. performing time domain filtering on the decomposed frequency domain brightness component to obtain the filtered frequency domain brightness component;
2423. amplifying the filtered frequency domain brightness component to obtain the amplified frequency domain brightness component;
2424. performing the inverse transformation corresponding to the spatial domain decomposition on the amplified frequency domain brightness component to obtain the inverse-transformed, amplified frequency domain brightness component.
The spatial decomposition may include at least one of the following algorithms: fast Fourier transform (FFT), contourlet transform, non-subsampled contourlet transform (NSCT), wavelet transform, ridgelet transform, shearlet transform, and so on; for example, the spatial domain decomposition may be an FFT followed by a complex steerable pyramid decomposition, without limitation. In a specific implementation, the frequency domain luminance component is spatially decomposed, the decomposed component is temporally filtered, the filtered component is amplified, and the inverse of the spatial decomposition is applied to obtain the inverse-transformed, amplified frequency domain luminance component. Deep-level details in the image can thus be amplified, which helps accurately extract key feature points and keeps the feature points trackable even when the image is dark or partially occluded.
In a specific implementation, as illustrated in Fig. 1C, the I and Q channels of the YIQ color space converted from the first video sequence are kept unchanged, while the Y channel is spatially decomposed (e.g., an FFT operation followed by complex steerable pyramid spatial decomposition of the FFT-transformed Y-channel image). Temporal band-pass filtering is applied to the different-scale images of the decomposed Y channel, the motion information of interest left by the band-pass filter is amplified, and a complex steerable pyramid reconstruction yields the amplified Y-channel image. Finally, the reconstructed Y-channel image is recombined with the original I and Q channel images and converted back to the RGB color space to obtain the second video sequence.
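A minimal sketch of this pipeline follows, assuming a (T, H, W) stack of Y-channel frames. For brevity it replaces the complex steerable pyramid of Fig. 1C with a plain per-pixel temporal FFT band-pass, so it illustrates the amplify-the-band-of-interest idea rather than the full reconstruction; the function name and parameters are illustrative.

```python
import numpy as np

# Eulerian-style sketch of Fig. 1C: band-pass the temporal spectrum of each
# pixel of the Y channel and amplify the motion band of interest.
def magnify_y(y: np.ndarray, fps: float, f_lo: float, f_hi: float,
              alpha: float = 10.0) -> np.ndarray:
    spec = np.fft.fft(y, axis=0)                      # temporal spectrum per pixel
    freqs = np.abs(np.fft.fftfreq(y.shape[0], d=1.0 / fps))
    band = (freqs >= f_lo) & (freqs <= f_hi)          # motion band of interest
    spec[band] *= (1.0 + alpha)                       # amplify the band
    return np.real(np.fft.ifft(spec, axis=0))         # back to the time domain
```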
103. And determining target fault information of the target object according to the at least one vibration parameter set, and displaying the target fault information on a display interface.
In this embodiment, the fault information may include at least one of the following: fault category, faulty component, fault cause, etc., without limitation. In a specific implementation, the target fault information of the target object is determined according to the at least one vibration parameter set and displayed on the display interface; for example, an image of the fault area, a vibration video, the fault category, or a risk level may be shown, and a maintenance scheme may be output according to user requirements, without limitation.
Optionally, each set of vibration parameters corresponds to a component of the target object; the step 103 of determining the fault information of the target object according to the at least one vibration parameter set may include the following steps:
31. computing the mean square error of a vibration parameter set i to obtain a target mean square error, where the vibration parameter set i is any one of the at least one vibration parameter set;
32. and determining the target fault information corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fault information.
The target object may comprise a plurality of components, a component being at least one of: a bearing, a screw, a chip, a circuit board, a spring, a graphics card, a motor, a steering wheel, and the like, without limitation. The mapping relationship between preset mean square errors and fault information may be stored in advance. In a specific implementation, the mean square error of vibration parameter set i is computed to obtain the target mean square error, where set i is any one of the at least one vibration parameter set, and the target fault information corresponding to the target mean square error is determined according to the preset mapping between mean square error and fault information.
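As a minimal sketch of steps 31-32: the "mean square error" of a set is computed here as its mean squared deviation, and the preset mapping is a hypothetical threshold table; both the thresholds and the fault labels are illustrative assumptions.

```python
import numpy as np

# Hypothetical preset mapping from mean-square-error bands to fault
# information; thresholds and labels are illustrative assumptions.
FAULT_MAP = [
    (0.00, 0.05, "normal operation"),
    (0.05, 0.20, "bearing wear suspected"),
    (0.20, np.inf, "shaft connection imbalance"),
]

def fault_from_parameter_set(params: np.ndarray) -> str:
    mse = np.mean((params - params.mean()) ** 2)  # step 31: target mean square error
    for lo, hi, info in FAULT_MAP:                # step 32: preset mapping lookup
        if lo <= mse < hi:
            return info
    return "unknown"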
Further optionally, after the step 103, the following steps may be further included:
a1, fitting a vibration parameter set i to obtain a target fitting function, wherein the target fitting function comprises a horizontal axis which is time and a vertical axis which is related to at least one vibration parameter in the vibration parameter set i;
and A2, estimating the maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
In a specific implementation, taking vibration parameter set i as an example, fitting such as linear or nonlinear fitting may be performed on the set to obtain a target fitting function whose horizontal axis is time and whose vertical axis relates to at least one vibration parameter in set i; for example, the vertical axis may be phase, power, or power/frequency, etc., without limitation. Maintenance information of the component corresponding to vibration parameter set i is then estimated from the target fitting function; the maintenance information may be: maintenance date, maintenance manner, maintenance personnel, maintenance cost, etc., without limitation.
In addition, when the wear, failure, or remaining service life of a component is estimated through the target fitting function, the prediction result may be a probability distribution, which can be understood as a range of values; this distribution can be presented visually in the video image, aligned with the image of each component.
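A sketch of steps A1-A2 follows, assuming a linear fit of vibration amplitude against time and a hypothetical wear threshold at which maintenance becomes due.

```python
import numpy as np

# Steps A1-A2: fit amplitude-vs-time linearly and extrapolate when the
# amplitude will cross a wear threshold; the threshold value is an assumption.
def estimate_maintenance_time(t: np.ndarray, amplitude: np.ndarray,
                              threshold: float = 1.5) -> float:
    slope, intercept = np.polyfit(t, amplitude, deg=1)  # target fitting function
    if slope <= 0:
        return float("inf")                             # no degrading trend observed
    return (threshold - intercept) / slope              # time at which maintenance is due
```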
The step 103 of determining the fault information of the target object according to the at least one vibration parameter set may include the following steps:
and inputting the at least one vibration parameter set into a preset fault detection model to obtain fault information of the target object.
The preset fault detection model can be set by a user or defaulted by a system, and can be obtained based on a convolutional neural network model. In a specific implementation, at least one vibration parameter set may be input to a preset fault detection model to obtain fault information of the target object.
For example, in this embodiment of the application, the target object may be a mechanical device, the fault detection device may obtain a video sequence in which the mechanical device vibrates in a working state, extract a vibration parameter according to the video sequence, and analyze fault information of the mechanical device according to the vibration parameter.
For another example, in this embodiment of the application, the target object may be a person. Based on the above fault detection apparatus, a video sequence of the user's cardiac vibration may be acquired by the ultrasonic sensor, vibration parameters extracted from the video sequence, and the person's cardiac function analyzed from the vibration parameters, thereby implementing a health diagnosis. Of course, a pregnant woman may also be examined in this way to monitor the growth of the baby.
Optionally, after the step 103, the following steps may be further included:
and B1, carrying out position marking on the target object according to the at least one vibration parameter set to obtain at least one position mark.
B2, displaying the at least one position marker in the first video sequence.
The at least one vibration parameter set contains a large number of vibration parameters; each vibration parameter can correspond to one point at a preset position, and each point includes at least one vibration parameter. On this basis, the target object can be position-marked according to the at least one vibration parameter set to obtain at least one position mark. Alternatively, the at least one vibration parameter set may be mapped onto target points, the target object divided into a plurality of squares, and the squares in which the number of vibration parameters exceeds a preset number marked, yielding at least one position mark; the preset number may be set by the user or by system default. A sketch of this grid variant follows.
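The grid variant of step B1 might look like the sketch below, where each vibration parameter has already been mapped to an (x, y) point on the target; the grid size and preset count are illustrative assumptions.

```python
import numpy as np

# Step B1 (grid variant): split the target region into squares and mark
# those containing more vibration points than a preset count.
def mark_positions(points: np.ndarray, shape, grid: int = 8, min_count: int = 5):
    h, w = shape
    counts = np.zeros((grid, grid), dtype=int)
    for x, y in points:  # each point carries at least one vibration parameter
        counts[min(int(y / h * grid), grid - 1),
               min(int(x / w * grid), grid - 1)] += 1
    return np.argwhere(counts > min_count)  # (row, col) of marked squares
```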
In a specific implementation, a video sequence of any one of the at least one position mark over a specified time period may be displayed; the specified time period may be set by the user or by system default.
Optionally, the step B2, displaying the at least one position marker in the first video sequence, may include the following steps:
b21, extracting target vibration parameters aiming at a position mark i from the at least one vibration parameter set, wherein the position mark i is any one of the at least one position mark;
b22, processing the target vibration parameters to obtain a target characteristic value;
b23, determining a target display effect parameter corresponding to the target characteristic value according to a preset mapping relation between the characteristic value and the display effect parameter;
b24, displaying the position mark i in the first video sequence according to the target display effect parameter.
The display effect parameter may be at least one of the following: display brightness, display color, display hue, whether a display frame needs to be added, whether an arrow needs to be added, and the like, without limitation. The electronic device may pre-store a mapping relationship between preset characteristic values and display effect parameters. In a specific implementation, different vibration parameters correspond to different marks; each mark may take a point as its center, with the vibration parameters within a certain range around it taken as the vibration parameters of that point. Further, a target vibration parameter for position mark i may be extracted from the at least one vibration parameter set, where mark i is any one of the at least one position mark; the target vibration parameter is processed to obtain a target characteristic value; the target display effect parameter corresponding to that characteristic value is determined according to the preset mapping between characteristic values and display effect parameters; and position mark i is displayed in the first video sequence with the target display effect parameter. Different marks are thus displayed differently according to differences in vibration, helping the user see the vibration quickly and accurately.
Further optionally, in the step B22, the processing the target vibration parameter to obtain the target characteristic value may include the following steps:
b221, acquiring a time domain waveform from the target vibration parameter;
b222, preprocessing the time domain waveform to obtain a target time domain waveform;
b223, comparing the target time domain waveform with a preset time domain waveform;
b224, when the target time domain waveform is successfully compared with the preset time domain waveform, determining a plurality of extreme points of the target time domain waveform;
and B225, determining the target characteristic value according to the extreme points.
Since the vibration is continuous (for example, when the device is in motion its components are also in constant rotation), the vibration parameter may include a large amount of vibration data. The preprocessing may include at least one of: signal amplification, noise reduction, etc., without limitation. The preset time domain waveform may be a waveform corresponding to a fault state and may be set by the user or by system default. In a specific implementation, a time domain waveform is obtained from the target vibration parameter and preprocessed to obtain the target time domain waveform, which is then compared with the preset time domain waveform. When the comparison succeeds, a plurality of extreme points of the target time domain waveform are determined; the extreme points may include maximum points or minimum points, and the target characteristic value is determined from them. Otherwise, when the comparison fails, the target object can be regarded as operating normally, and either no vibration display is performed or the user is notified that the equipment operates normally. Specifically, for example, the mean square error of the plurality of extreme points may be taken as the target characteristic value, or the number of extreme points may be taken as the target characteristic value.
Optionally, in the step B225, determining the target feature value according to the extreme points may include the following steps:
b2251, selecting extreme points with absolute values larger than a preset threshold value from the extreme points to obtain a plurality of target extreme points;
b2252, dividing the target extreme points into two classes, wherein the first class includes a plurality of maximum points and the second class includes a plurality of minimum points;
b2253, determining a first mean square error among the maximum value points in the first class;
b2254, determining a second mean square error between the plurality of minima in the second class;
and B2255, taking the absolute value of the ratio between the first mean square error and the second mean square error as the target characteristic value.
The preset threshold may be set by the user or by system default. In a specific implementation, the extreme points whose absolute value is greater than the preset threshold are selected from the plurality of extreme points to obtain a plurality of target extreme points, which are divided into two classes: the first class comprising a plurality of maximum points and the second class a plurality of minimum points. A first mean square error among the maximum points of the first class and a second mean square error among the minimum points of the second class are determined, and the absolute value of the ratio of the first to the second mean square error is taken as the target characteristic value. The extreme points are thus first screened to obtain stable extreme points from which the vibration of the target object can be analyzed; dividing them into maximum and minimum points, computing the mean square error of each class, and taking the absolute value of the final ratio fully captures how the amplitude is distributed in the upward and downward motion, so the target characteristic value accurately reflects the vibration trend and the accuracy of the fault information is improved.
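Steps B2251-B2255 can be sketched as follows; SciPy's find_peaks is a tooling choice, and degenerate cases (no extrema surviving the threshold) are not handled in this sketch.

```python
import numpy as np
from scipy.signal import find_peaks

# Steps B2251-B2255: screen extreme points by magnitude, split them into
# maxima and minima, and take |MSE(maxima) / MSE(minima)| as the target
# characteristic value.
def characteristic_value(waveform: np.ndarray, threshold: float) -> float:
    maxima = waveform[find_peaks(waveform)[0]]
    minima = waveform[find_peaks(-waveform)[0]]
    maxima = maxima[np.abs(maxima) > threshold]   # step B2251 screening
    minima = minima[np.abs(minima) > threshold]
    mse = lambda v: np.mean((v - v.mean()) ** 2)  # mean square error within a class
    return abs(mse(maxima) / mse(minima))         # step B2255 ratio
```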
It can thus be seen that, with the above fault detection method, a first video sequence for the target object is obtained, the first video sequence being an RGB video image; at least one vibration parameter set corresponding to the target object is extracted from the first video sequence; target fault information of the target object is determined according to the at least one vibration parameter set; and the target fault information is displayed on the display interface. Vibration parameters can therefore be extracted from a video image and fault information analyzed from them, so that faults are rapidly identified and fault detection efficiency is improved.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating an embodiment of a fault detection method according to an embodiment of the present application. The fault detection method described in this embodiment includes the following steps:
201. acquiring a first video sequence aiming at a target object, wherein the first video sequence is an RGB video image;
202. extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
203. computing the mean square error of a vibration parameter set i to obtain a target mean square error, where the vibration parameter set i is any one of the at least one vibration parameter set;
204. and determining target fault information corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fault information.
205. Performing linear fitting on a vibration parameter set i to obtain a target fitting function, wherein the target fitting function comprises a horizontal axis which is time and a vertical axis which is related to at least one vibration parameter in the vibration parameter set i;
206. and estimating maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
The fault detection method described in the above steps 201 to 206 may refer to corresponding steps of the fault detection method described in fig. 1A.
It can be seen that, with the fault detection method and related product of the embodiments of the present application, a first video sequence for a target object is obtained, the first video sequence being an RGB video image; at least one vibration parameter set corresponding to the target object is extracted from the first video sequence; target fault information of the target object is determined according to the at least one vibration parameter set and displayed on a display interface. The mean square error of a vibration parameter set i is computed to obtain a target mean square error, where set i is any one of the at least one vibration parameter set, and the target fault information corresponding to the target mean square error is determined according to a preset mapping between mean square error and fault information. Vibration parameter set i is linearly fitted to obtain a target fitting function whose horizontal axis is time and whose vertical axis relates to at least one vibration parameter in set i, and the maintenance information of the component corresponding to set i is estimated from the target fitting function. In this way, vibration parameters can be extracted from a video image and fault information analyzed from them, so that faults are rapidly identified and fault detection efficiency is improved; moreover, when and how the target object needs maintenance can be estimated, improving maintenance efficiency.
In accordance with the above, an apparatus for implementing the above fault detection method is described below, as follows:
please refer to fig. 3A, which is a schematic structural diagram of an embodiment of a fault detection apparatus according to an embodiment of the present disclosure. The failure detection device described in this embodiment includes: the acquiring unit 301, the extracting unit 302 and the displaying unit 303 are specifically as follows:
an obtaining unit 301, configured to obtain a first video sequence for a target object, where the first video sequence is an RGB video image;
an extracting unit 302, configured to extract at least one vibration parameter set corresponding to the target object according to the first video sequence;
the display unit 303 is configured to determine target failure information of the target object according to the at least one vibration parameter set, and display the target failure information on a display interface.
Therefore, with the above fault detection apparatus, a first video sequence for the target object is obtained, the first video sequence being an RGB video image; at least one vibration parameter set corresponding to the target object is extracted from the first video sequence; target fault information of the target object is determined according to the at least one vibration parameter set; and the target fault information is displayed on the display interface. Vibration parameters can therefore be extracted from a video image and fault information analyzed from them, so that faults are rapidly identified and fault detection efficiency is improved.
Optionally, in terms of the extracting at least one vibration parameter set corresponding to the target object according to the first video sequence, the extracting unit 302 is specifically configured to:
performing color space transformation on the first video sequence to obtain a transformed first video sequence;
separating the transformed first video sequence to obtain luminance component information and chrominance component information;
performing Fourier transform on the brightness component information to obtain frequency domain brightness component information;
performing motion amplification processing on the frequency domain brightness component information to obtain the frequency domain brightness component after amplification processing;
performing inverse Fourier transform on the amplified frequency domain brightness component to obtain time domain brightness component information;
synthesizing the time domain brightness information and the chrominance component information to obtain a synthesized video sequence;
performing inverse transformation corresponding to the color space transformation on the synthesized video sequence to obtain a second video sequence;
and determining at least one vibration parameter set corresponding to the target object according to the second video sequence.
Optionally, in terms of the determining at least one vibration parameter set corresponding to the target object according to the second video sequence, the extracting unit 302 is specifically configured to:
determining a cross-power spectrum between frame sequences corresponding to the second video sequence;
carrying out interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum;
and performing inverse Fourier transform on the processed cross power spectrum, and comparing the cross power spectrum phase by phase to obtain at least one vibration parameter set corresponding to the target object.
Optionally, in respect to the motion amplification processing on the frequency domain luminance component information to obtain the frequency domain luminance component after the amplification processing, the extracting unit 302 is specifically configured to:
extracting characteristic points of the frequency domain brightness component information to obtain a characteristic point set;
clustering the feature point set to obtain multi-type feature points;
performing optical flow field interpolation operation on each pixel point in the frequency domain brightness component information to obtain a dense optical flow field;
tracking the motion track of each type of feature point in the multiple types of feature points to obtain multiple action layers, wherein each type of feature point corresponds to one action layer;
and performing texture filling on the plurality of action layers, and performing amplification processing on a preset layer in the plurality of action layers after the texture filling to obtain the amplified frequency domain brightness component.
Optionally, each set of vibration parameters corresponds to a component of the target object;
in the aspect of determining the fault information of the target object according to the at least one vibration parameter set, the display unit 303 is specifically configured to:
computing the mean square error of a vibration parameter set i to obtain a target mean square error, where the vibration parameter set i is any one of the at least one vibration parameter set;
and determining the target fault information corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fault information.
Alternatively, as shown in fig. 3B, fig. 3B is a further modified structure of the fault detection apparatus depicted in fig. 3A, which may further include, compared with fig. 3A: the fitting unit 304 and the determining unit 305 are specifically as follows:
a fitting unit 304, configured to perform linear fitting on the vibration parameter set i to obtain a target fitting function, where the target fitting function includes a horizontal axis as time and a vertical axis as correlation with at least one vibration parameter in the vibration parameter set i;
a determining unit 305, configured to estimate maintenance information of a component corresponding to the vibration parameter set i according to the target fitting function.
It can be understood that the functions of each program module of the fault detection apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
In accordance with the above, please refer to fig. 4, which is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present disclosure. The electronic device described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, e.g., a CPU; and a memory 4000, the input device 1000, the output device 2000, the processor 3000, and the memory 4000 being connected by a bus 5000.
The input device 1000 may be a touch panel, a physical button, or a mouse.
The output device 2000 may be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 4000 is used for storing a set of program codes, and the input device 1000, the output device 2000 and the processor 3000 are used for calling the program codes stored in the memory 4000 to execute the following operations:
the processor 3000 is configured to:
acquiring a first video sequence aiming at a target object, wherein the first video sequence is an RGB video image;
extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
and determining target fault information of the target object according to the at least one vibration parameter set, and displaying the target fault information on a display interface.
It can be seen that, with the electronic device described in this embodiment of the application, a first video sequence for a target object is obtained, the first video sequence being an RGB video image; at least one vibration parameter set corresponding to the target object is extracted from the first video sequence; target fault information of the target object is determined according to the at least one vibration parameter set and displayed on a display interface. Vibration parameters can therefore be extracted from a video image and fault information analyzed from them, so that faults are rapidly identified and fault detection efficiency is improved.
Optionally, in the aspect of extracting at least one vibration parameter set corresponding to the target object according to the first video sequence, the processor 3000 is specifically configured to:
performing color space transformation on the first video sequence to obtain a transformed first video sequence;
separating the transformed first video sequence to obtain luminance component information and chrominance component information;
performing Fourier transform on the brightness component information to obtain frequency domain brightness component information;
performing motion amplification processing on the frequency domain brightness component information to obtain the frequency domain brightness component after amplification processing;
performing inverse Fourier transform on the amplified frequency domain brightness component to obtain time domain brightness component information;
synthesizing the time domain brightness component information and the chrominance component information to obtain a synthesized video sequence;
performing inverse transformation corresponding to the color space transformation on the synthesized video sequence to obtain a second video sequence;
and determining at least one vibration parameter set corresponding to the target object according to the second video sequence.
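As an informal sketch of the above extraction pipeline (assuming YUV as the color space, which the disclosure does not fix, and taking the amplification step as a function parameter), the per-frame processing might be written as:

```python
import cv2
import numpy as np

def to_second_video_sequence(frames_rgb, amplify_motion):
    """Sketch of the luminance-domain pipeline; YUV is an assumed choice of
    color space and amplify_motion is the amplification step sketched later."""
    out = []
    for frame in frames_rgb:
        # Color space transformation and luminance/chrominance separation.
        yuv = cv2.cvtColor(frame, cv2.COLOR_RGB2YUV)
        y, u, v = cv2.split(yuv)

        # Fourier transform of the luminance component into the frequency domain.
        y_freq = np.fft.fft2(y.astype(np.float64))

        # Motion amplification processing on the frequency domain component.
        y_freq_amp = amplify_motion(y_freq)

        # Inverse Fourier transform back to the time (spatial) domain.
        y_amp = np.clip(np.real(np.fft.ifft2(y_freq_amp)), 0, 255).astype(np.uint8)

        # Synthesize with the original chrominance and invert the color
        # space transformation to obtain a frame of the second video sequence.
        out.append(cv2.cvtColor(cv2.merge([y_amp, u, v]), cv2.COLOR_YUV2RGB))
    return out
```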
Optionally, in the aspect of determining at least one vibration parameter set corresponding to the target object according to the second video sequence, the processor 3000 is specifically configured to:
determining a cross-power spectrum between frame sequences corresponding to the second video sequence;
carrying out interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum;
and performing inverse Fourier transform on the processed cross power spectrum and performing phase-by-phase comparison to obtain at least one vibration parameter set corresponding to the target object.
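This cross-power-spectrum procedure resembles classical phase correlation. A minimal sketch under that assumption, with the interpolation filtering reduced to a simple peak search, could be:

```python
import numpy as np

def frame_shift(frame_a, frame_b):
    """Estimate the displacement between two grayscale frames from the
    normalized cross power spectrum (classical phase correlation); a
    simplified stand-in for the interpolation filtering in the disclosure."""
    fa = np.fft.fft2(frame_a.astype(np.float64))
    fb = np.fft.fft2(frame_b.astype(np.float64))

    # Cross power spectrum, normalized so that only phase information remains.
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12  # guard against division by zero

    # Inverse Fourier transform; the location of the peak gives the shift.
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    # Wrap indices so that shifts can be negative.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Applied to consecutive frames of the second video sequence, frame_shift yields a displacement-per-frame series from which vibration amplitude and frequency can be derived.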
Optionally, in the aspect of performing motion amplification processing on the frequency domain luminance component information to obtain the frequency domain luminance component after amplification processing, the processor 3000 is specifically configured to:
extracting characteristic points of the frequency domain brightness component information to obtain a characteristic point set;
clustering the feature point set to obtain multi-type feature points;
performing optical flow field interpolation operation on each pixel point in the frequency domain brightness component information to obtain a dense optical flow field;
tracking the motion track of each type of feature point in the multiple types of feature points to obtain multiple action layers, wherein each type of feature point corresponds to one action layer;
and performing texture filling on the plurality of action layers, and performing amplification processing on a preset layer in the plurality of action layers after the texture filling to obtain the amplified frequency domain brightness component.
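The layer-based amplification above can be approximated with standard building blocks: corner detection for the feature points, k-means for the clustering, and Farneback dense optical flow for the interpolated flow field. The sketch below makes those substitutions explicit; the layer index, mask radius, and gain are invented parameters, and the remap-based warp is only a rough approximation of texture filling.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def layered_amplification(prev_y, next_y, n_layers=3, gain=10.0, target_layer=0):
    """Sketch of layer-based motion magnification on two consecutive luminance
    frames; n_layers, gain, target_layer and the circular layer mask are
    illustrative assumptions, not taken from the disclosure."""
    # Feature point extraction on the luminance component.
    pts = cv2.goodFeaturesToTrack(prev_y, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    pts = pts.reshape(-1, 2)

    # Cluster the feature point set into several classes of feature points.
    labels = KMeans(n_clusters=n_layers, n_init=10).fit_predict(pts)

    # Dense optical flow field interpolated over every pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_y, next_y, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Build a crude mask for the preset layer from its feature points.
    mask = np.zeros(prev_y.shape, dtype=np.uint8)
    for (x, y), lab in zip(pts.astype(int), labels):
        if lab == target_layer:
            cv2.circle(mask, (int(x), int(y)), 25, 1, -1)

    # Amplify the flow inside the preset layer and warp the next frame with it;
    # the warp stands in for texture filling of the action layers.
    amplified = np.where(mask[..., None] > 0, gain * flow, flow)
    h, w = prev_y.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + amplified[..., 0]).astype(np.float32)
    map_y = (gy + amplified[..., 1]).astype(np.float32)
    return cv2.remap(next_y, map_x, map_y, cv2.INTER_LINEAR)
```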
Optionally, each set of vibration parameters corresponds to a component of the target object;
in the aspect of determining the fault information of the target object according to the at least one vibration parameter set, the processor 3000 is specifically configured to:
calculating a mean square error of a vibration parameter set i to obtain a target mean square error, where the vibration parameter set i is any one of the at least one vibration parameter set;
and determining the target fault information corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fault information.
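As an illustration of this lookup, with the mean square error read as the variance of the parameter set about its own mean and with invented thresholds and fault labels standing in for the preset mapping relation:

```python
import numpy as np

# Hypothetical preset mapping between mean square error ranges and fault
# information; the thresholds and labels are invented for illustration.
MSE_FAULT_MAP = [
    (0.10, "normal operation"),
    (0.50, "minor imbalance suspected"),
    (2.00, "loose coupling suspected"),
    (float("inf"), "severe fault: inspect component"),
]

def fault_info_from_params(param_set_i):
    """Map the mean square error of one vibration parameter set to fault info;
    the error is taken here as the variance of the set about its mean."""
    params = np.asarray(param_set_i, dtype=np.float64)
    target_mse = float(np.mean((params - params.mean()) ** 2))
    for upper_bound, info in MSE_FAULT_MAP:
        if target_mse <= upper_bound:
            return target_mse, info
```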
Optionally, the processor 3000 is further specifically configured to:
performing linear fitting on a vibration parameter set i to obtain a target fitting function, where the horizontal axis of the target fitting function is time and the vertical axis is a quantity related to at least one vibration parameter in the vibration parameter set i;
and estimating maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
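A simple reading of this estimation step is a first-degree polynomial fit followed by extrapolation to a wear threshold; the sketch below assumes exactly that, and the threshold semantics are an assumption rather than part of the disclosure.

```python
import numpy as np

def estimate_maintenance(times, values, threshold):
    """Fit a line to vibration parameter samples over time (the target fitting
    function) and extrapolate to a maintenance threshold; the threshold
    semantics are an illustrative assumption."""
    slope, intercept = np.polyfit(times, values, deg=1)
    if slope <= 0:
        return None  # parameter is not growing; no maintenance time predicted
    return (threshold - intercept) / slope  # estimated time maintenance is due
```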
The present application further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, performs some or all of the steps of any one of the fault detection methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any of the methods described above in embodiments of the present application. The computer program product may be a software installation package.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein. A computer program stored or distributed on a suitable medium, supplied together with or as part of other hardware, may also take other distribution forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (7)

1. A method of fault detection, comprising:
acquiring a first video sequence aiming at a target object, wherein the first video sequence is an RGB video image;
extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
determining target fault information of the target object according to the at least one vibration parameter set, and displaying the target fault information on a display interface;
wherein the extracting at least one vibration parameter set corresponding to the target object according to the first video sequence includes:
performing color space transformation on the first video sequence to obtain a transformed first video sequence;
separating the transformed first video sequence to obtain luminance component information and chrominance component information;
performing Fourier transform on the brightness component information to obtain frequency domain brightness component information;
performing motion amplification processing on the frequency domain brightness component information to obtain the frequency domain brightness component after amplification processing;
performing inverse Fourier transform on the amplified frequency domain brightness component to obtain time domain brightness component information;
synthesizing the time domain brightness component information and the chrominance component information to obtain a synthesized video sequence;
performing inverse transformation corresponding to the color space transformation on the synthesized video sequence to obtain a second video sequence;
determining at least one vibration parameter set corresponding to the target object according to the second video sequence;
wherein the performing motion amplification processing on the frequency domain luminance component information to obtain the frequency domain luminance component after amplification processing includes:
extracting characteristic points of the frequency domain brightness component information to obtain a characteristic point set;
clustering the feature point set to obtain multi-type feature points;
performing optical flow field interpolation operation on each pixel point in the frequency domain brightness component information to obtain a dense optical flow field;
tracking the motion track of each type of feature point in the multiple types of feature points to obtain multiple action layers, wherein each type of feature point corresponds to one action layer;
and performing texture filling on the plurality of action layers, and performing amplification processing on a preset layer in the plurality of action layers after the texture filling to obtain the amplified frequency domain brightness component.
2. The method of claim 1, wherein the determining at least one set of vibration parameters corresponding to the target object from the second video sequence comprises:
determining a cross-power spectrum between frame sequences corresponding to the second video sequence;
carrying out interpolation filtering processing on the cross power spectrum to obtain the processed cross power spectrum;
and performing inverse Fourier transform on the processed cross power spectrum and performing phase-by-phase comparison to obtain at least one vibration parameter set corresponding to the target object.
3. The method of claim 1 or 2, wherein each set of vibration parameters corresponds to a component of the target object;
the determining the fault information of the target object according to the at least one vibration parameter set comprises:
calculating a mean square error of a vibration parameter set i to obtain a target mean square error, wherein the vibration parameter set i is any one of the at least one vibration parameter set;
and determining the target fault information corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fault information.
4. The method of claim 3, further comprising:
performing linear fitting on a vibration parameter set i to obtain a target fitting function, wherein the horizontal axis of the target fitting function is time and the vertical axis is a quantity related to at least one vibration parameter in the vibration parameter set i;
and estimating maintenance information of the component corresponding to the vibration parameter set i according to the target fitting function.
5. A fault detection device, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first video sequence aiming at a target object, and the first video sequence is an RGB video image;
the extracting unit is used for extracting at least one vibration parameter set corresponding to the target object according to the first video sequence;
the display unit is used for determining target fault information of the target object according to the at least one vibration parameter set and displaying the target fault information on a display interface;
wherein, in the extracting at least one vibration parameter set corresponding to the target object according to the first video sequence, the extracting unit is specifically configured to:
performing color space transformation on the first video sequence to obtain a transformed first video sequence;
separating the transformed first video sequence to obtain luminance component information and chrominance component information;
performing Fourier transform on the brightness component information to obtain frequency domain brightness component information;
performing motion amplification processing on the frequency domain brightness component information to obtain the frequency domain brightness component after amplification processing;
performing inverse Fourier transform on the amplified frequency domain brightness component to obtain time domain brightness component information;
synthesizing the time domain brightness component information and the chrominance component information to obtain a synthesized video sequence;
performing inverse transformation corresponding to the color space transformation on the synthesized video sequence to obtain a second video sequence;
determining at least one vibration parameter set corresponding to the target object according to the second video sequence;
wherein, in the aspect of performing motion amplification processing on the frequency domain luminance component information to obtain the amplified frequency domain luminance component, the extraction unit is specifically configured to:
extracting characteristic points of the frequency domain brightness component information to obtain a characteristic point set;
clustering the feature point set to obtain multi-type feature points;
performing optical flow field interpolation operation on each pixel point in the frequency domain brightness component information to obtain a dense optical flow field;
tracking the motion track of each type of feature point in the multiple types of feature points to obtain multiple action layers, wherein each type of feature point corresponds to one action layer;
and performing texture filling on the plurality of action layers, and performing amplification processing on a preset layer in the plurality of action layers after the texture filling to obtain the amplified frequency domain brightness component.
6. An electronic device comprising a processor, a memory for storing one or more programs and configured for execution by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-4.
7. A computer-readable storage medium storing a computer program for execution by a processor to implement the method of any one of claims 1-4.
CN201910810243.4A 2019-04-26 2019-08-29 Fault detection method and related product Active CN110602485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/104821 WO2021036634A1 (en) 2019-04-26 2020-07-27 Fault detection method and related product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910345977 2019-04-26
CN201910345977X 2019-04-26

Publications (2)

Publication Number Publication Date
CN110602485A CN110602485A (en) 2019-12-20
CN110602485B true CN110602485B (en) 2020-09-29

Family

ID=68856367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910810243.4A Active CN110602485B (en) 2019-04-26 2019-08-29 Fault detection method and related product

Country Status (2)

Country Link
CN (1) CN110602485B (en)
WO (1) WO2021036634A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602485B (en) * 2019-04-26 2020-09-29 深圳市豪视智能科技有限公司 Fault detection method and related product
CN110595602B (en) * 2019-04-26 2021-10-15 深圳市豪视智能科技有限公司 Vibration detection method and related product
CN112200019B (en) * 2020-09-22 2024-02-09 上海罗曼照明科技股份有限公司 Rapid building night scene lighting lamp fault detection method
CN113128474B (en) * 2021-05-17 2022-06-03 重庆大学 Structural modal identification method based on computer vision and variational modal decomposition

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101551901B (en) * 2009-05-25 2010-12-29 中国人民解放军国防科学技术大学 Method for compensating and enhancing dynamic shielded image in real time
US9990555B2 (en) * 2015-04-30 2018-06-05 Beijing Kuangshi Technology Co., Ltd. Video detection method, video detection system and computer program product
US10178381B2 (en) * 2015-07-14 2019-01-08 Microsoft Technology Licensing, Llc Depth-spatial frequency-response assessment
CN108414240B (en) * 2018-03-15 2020-08-11 广东工业大学 Method and device for detecting abnormal vibration of machine
JP2018143006A (en) * 2018-06-07 2018-09-13 株式会社ニコン Lens unit
CN109350030B (en) * 2018-08-17 2020-04-21 西安电子科技大学 System and method for processing human face video heart rate signal based on phase amplification
CN109520690A (en) * 2018-10-30 2019-03-26 西安交通大学 A kind of rotary machine rotor Mode Shape global measuring device and method based on video
CN110602485B (en) * 2019-04-26 2020-09-29 深圳市豪视智能科技有限公司 Fault detection method and related product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104048744A (en) * 2014-07-08 2014-09-17 安徽常春藤光电智能科技有限公司 Non-contact real-time online vibration measurement method based on images
CN106355563A (en) * 2016-08-31 2017-01-25 河南工业大学 Image defogging method and device
JP2018085627A (en) * 2016-11-24 2018-05-31 川原 功 Moving image resolution evaluation image generation device
CN106897694A (en) * 2017-02-24 2017-06-27 西安天和防务技术股份有限公司 For the squatter building scene recognition method of land resources monitoring

Also Published As

Publication number Publication date
CN110602485A (en) 2019-12-20
WO2021036634A1 (en) 2021-03-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant