CN110553714A - Intelligent vibration augmented reality testing method and related product - Google Patents


Info

Publication number
CN110553714A
CN110553714A (application CN201910819912.4A; granted publication CN110553714B)
Authority
CN
China
Prior art keywords
vibration
target
video
processor
feature point
Prior art date
Legal status (assumed, not a legal conclusion)
Granted
Application number
CN201910819912.4A
Other languages
Chinese (zh)
Other versions
CN110553714B (en)
Inventor
高风波 (Gao Fengbo)
Current Assignee (listed assignees may be inaccurate)
Shenzhen Haoxi Intelligent Technology Co Ltd
Shenzhen Guangning Co Ltd
Original Assignee
Shenzhen Haoxi Intelligent Technology Co Ltd
Shenzhen Guangning Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Haoxi Intelligent Technology Co Ltd, Shenzhen Guangning Co Ltd filed Critical Shenzhen Haoxi Intelligent Technology Co Ltd
Priority to CN201910819912.4A priority Critical patent/CN110553714B/en
Publication of CN110553714A publication Critical patent/CN110553714A/en
Priority to PCT/CN2020/105796 priority patent/WO2021036672A1/en
Application granted granted Critical
Publication of CN110553714B publication Critical patent/CN110553714B/en
Status: Active (granted)

Links

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01H: MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H 9/00: Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves by using radiation-sensitive means, e.g. optical means

Abstract

The embodiments of the present application provide an intelligent vibration augmented reality testing method and a related product. The method comprises the following steps: a camera acquires an initial vibration video of a detected object, where the detected object comprises an automobile sunroof component; a processor determines vibration information of a target feature point of the sunroof component from the initial vibration video, where the detected object comprises at least one vibration feature point and the target feature point is any one of the at least one vibration feature point; the processor performs vibration amplification processing on the initial vibration video according to the vibration information by using a preset vibration amplification method, to obtain a target vibration video of the sunroof component; and a display shows the target vibration video, thereby reducing the cost of test and analysis.

Description

Intelligent vibration augmented reality testing method and related product
Technical Field
The present application relates to the field of Internet technology, and in particular to an intelligent vibration augmented reality testing method and a related product.
Background
The Internet is a vast network formed by interconnecting many networks through a common set of protocols, so that they logically form a single worldwide network. This method of interconnecting computer networks is referred to as "internetworking", and on this basis the worldwide Internet has developed into a structure of interconnected networks covering the globe. "Internet Plus" is a new stage of Internet development under Innovation 2.0: the evolution of the Internet, driven by knowledge-society innovation, that in turn hastens economic and social development. Internet Plus is a practical outcome of Internet thinking; it promotes the continuous evolution of economic forms, invigorates social and economic entities, and provides a broad network platform for reform, innovation and development. In plain terms, Internet Plus means "the Internet plus every traditional industry", but not as a simple addition: information and communication technology and Internet platforms are used to deeply fuse the Internet with traditional industries and create a new development ecology. It represents a new social form in which the role of the Internet in allocating social resources is fully exploited, the innovations of the Internet are deeply integrated into every field of the economy and society, and the innovative power and productivity of the whole society are raised, forming a broader new economic form with the Internet as both infrastructure and tool.
Traditional fault monitoring mechanisms generally rely on local detection equipment; for example, laser Doppler vibrometers (LDVs) are installed in a dedicated room and used for local vibration detection, fault prediction and the like. However, LDVs are expensive, are restricted in their operating environment (temperature, illumination and other environmental influences seriously degrade the measurement results), cover only a small test area, and make remote monitoring difficult, so they can hardly meet the increasingly intelligent vibration detection requirements of various scenarios.
Vibration is a very common research problem in engineering applications; related data show that more than 60% of equipment undergoes state detection and fault diagnosis by means of vibration measurement. Many detection systems for vibration signals exist today, and their acquisition methods are basically the same: a vibration acceleration sensor or similar device is mounted on a component of the mechanical equipment that is excited into vibration, and because the vibration at the test point is representative, parameters such as the exciting force, vibration amplitude or frequency of an engine can be reflected accurately and stably. After such a sensor or instrument is manufactured and assembled, a series of comprehensive tests against its design indexes must usually be carried out to determine its actual performance; this whole process is a necessary calibration step.
Due to the complexity of vibration and of the measurement site, the existing approach of acquiring vibration with acceleration sensors requires installing multiple groups of sensors when a device is inspected. For the many devices that are not equipped with sensors in advance, dedicated on-site wiring is needed to acquire the data before the collected vibration information can finally be tested and analyzed, so the cost of test and analysis is high.
Disclosure of Invention
The embodiments of the present application provide an intelligent vibration augmented reality testing method and a related product, which can reduce the cost of test and analysis.
Specifically, the data transmission flow of the vibration detection method disclosed in the embodiments of the present application may be based on Internet Plus technology, forming a distributed "local + cloud/server" intelligent vibration detection system. On the one hand, the local side can perform accurate original image acquisition and preprocessing through an acquisition device; on the other hand, the cloud or server can predict faults of the detected target from the acquired distributed data, combined with specialized fault detection models obtained through statistical analysis with big data techniques. This realizes a deep fusion of the Internet with the traditional fault monitoring industry, improves the intelligence and accuracy of fault monitoring, and meets the growing demand for intelligent vibration detection in various scenarios.
A first aspect of the embodiments of the present application provides a vibration augmented reality testing method, applied to a testing apparatus that includes a camera, a display and a processor, the camera, the display and the processor being coupled, the method comprising:
the camera acquires an initial vibration video of a detected object, wherein the detected object comprises an automobile skylight component;
the processor determines vibration information of target feature points of the automobile skylight component from the initial vibration video, the detected object comprises at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point;
the processor performs vibration amplification processing on the initial vibration video by adopting a preset vibration amplification method according to the vibration information to obtain a target vibration video of the automobile skylight component;
The display shows the target vibration video.
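Read as a processing pipeline, the four steps above admit a minimal sketch: a simplified, Eulerian-style amplification over a stack of grayscale frames. This stand-in (the function name and the mean-based amplification are illustrative assumptions, not the patent's claimed "preset vibration amplification method") amplifies each pixel's deviation from its temporal mean:

```python
import numpy as np

def magnify_vibration(frames, alpha=10.0):
    """Amplify small temporal intensity variations in a stack of
    grayscale frames (T, H, W) by factor alpha. A simplified,
    Eulerian-style stand-in for the preset vibration amplification
    method, not the claimed implementation itself."""
    frames = frames.astype(np.float64)
    mean = frames.mean(axis=0, keepdims=True)   # static background estimate
    return mean + alpha * (frames - mean)       # boost deviations from the mean

# Synthetic test clip: one pixel oscillates faintly around 100.
t = np.arange(32)
frames = np.full((32, 4, 4), 100.0)
frames[:, 2, 2] += 0.5 * np.sin(2 * np.pi * t / 8)   # barely visible vibration

out = magnify_vibration(frames, alpha=10.0)
print(round(float(np.ptp(out[:, 2, 2])), 1))   # → 10.0
```

With the clip above, the vibrating pixel's peak-to-peak swing grows from 1.0 to about 10.0, while perfectly static pixels are left unchanged.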
A second aspect of an embodiment of the present application provides a vibration augmented reality testing apparatus, which is applied to a testing apparatus, the testing apparatus includes a camera, a display and a processor, the camera, the display and the processor are coupled, the vibration augmented reality testing apparatus includes an obtaining unit, a determining unit, an amplifying unit and a displaying unit, wherein,
The acquisition unit is used for acquiring an initial vibration video of a detected object, and the detected object comprises an automobile skylight component;
the determining unit is configured to determine vibration information of a target feature point of the sunroof component from the initial vibration video, where the detected object includes at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point;
the amplifying unit is used for carrying out vibration amplification processing on the initial vibration video by adopting a preset vibration amplification method according to the vibration information to obtain a target vibration video of the automobile skylight component;
The display unit is used for displaying the target vibration video.
A third aspect of the embodiments of the present application provides a terminal, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the step instructions in the first aspect of the embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps as described in the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has at least the following beneficial effects:
In the embodiments of the present application, an initial vibration video of a detected object is acquired, vibration information of a target feature point of the detected object is determined from the initial vibration video (the detected object includes at least one vibration feature point, and the target feature point is any one of them), and the initial vibration video undergoes vibration amplification with a preset vibration amplification method according to that vibration information, to obtain and display a target vibration video. Compared with the prior art, in which vibration is acquired with acceleration sensors and multiple groups of sensors must be installed whenever a device is inspected, here only the initial vibration video needs to be captured: the vibration information of the detected object is extracted from the video, the video is amplified according to that information, and the vibration of the target object is magnified and displayed in the target video. Since the vibration in the video can be amplified and shown according to the vibration information, the cost of vibration test and analysis can be reduced to a certain extent.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a vibration augmented reality testing system according to an embodiment of the present application;
Fig. 2A is a schematic flow chart of a vibration augmented reality testing method according to an embodiment of the present application;
Fig. 2B is a schematic diagram of an image frame partitioning strategy according to an embodiment of the present application;
Fig. 2C is a schematic diagram of the time-domain variation waveform of the gray values of a pixel point according to an embodiment of the present disclosure;
Fig. 3 is a schematic flow chart of another vibration augmented reality testing method according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of yet another vibration augmented reality testing method according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a vibration augmented reality testing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description, the claims and the drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements that are not listed or that are inherent to the process, method, article or apparatus.
Reference in this specification to "an embodiment" means that a particular feature, structure or characteristic described in connection with that embodiment can be included in at least one embodiment of the specification. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal), and so on. For convenience of description, the above-mentioned apparatuses are collectively referred to as electronic devices.
To better understand the vibration augmented reality testing method provided by the embodiments of the present application, the vibration augmented reality testing system that applies it is first briefly described. Referring to fig. 1, fig. 1 is a schematic structural diagram of a vibration augmented reality testing system according to an embodiment of the present disclosure. As shown in fig. 1, the system includes a camera 101 and a display subsystem 102, where the display subsystem 102 includes a display and a processor. The camera 101 captures an initial vibration video of a detected object and sends it to the display subsystem 102. The display subsystem 102 determines vibration information of a target feature point of the detected object from the initial vibration video (the detected object includes at least one vibration feature point, and the target feature point is any one of them), performs vibration amplification on the initial vibration video according to the vibration information with a preset vibration amplification method to obtain a target vibration video, and displays that video. Compared with the existing scheme of acquiring vibration with acceleration sensors, which requires installing multiple groups of sensors whenever a device is inspected, only the initial vibration video needs to be captured: the vibration information of the detected object is extracted from the video, the video is amplified according to that information, and the magnified vibration of the target object is displayed in the target video. The vibration in the video can thus be amplified and shown according to the vibration information, and the cost of test and analysis can be reduced to a certain extent.
Referring to fig. 2A, fig. 2A is a schematic flow chart of a vibration augmented reality testing method according to an embodiment of the present application. As shown in fig. 2A, the method is applied to a testing apparatus that includes a camera, a display and a processor, the camera, the display and the processor being coupled, and comprises steps 201 to 204, as follows:
201. The camera acquires an initial vibration video of a detected object, where the detected object includes an automobile sunroof component.
When the initial vibration video of the detected object is acquired, it can be captured by the camera. The detected object may be a device that vibrates periodically, for example an industrial motor or an automobile engine. The camera may be located at a certain distance from the detected object, the distance being set from empirical values or historical data. When the camera shoots the initial vibration video, the camera and the detected object satisfy the following conditions:
1) The detected equipment is in an ideal working environment before screening, so as to ensure a relatively stable natural vibration frequency and detection environment;
2) The initial vibration video is shot in the environment that light rays do not directly irradiate the surface of the detected object;
3) The device under test and the camera are not affected by other minor vibrations.
Optionally, the ideal working environment may be understood as a working environment set from empirical values. Other minor vibrations may be, for example, vibrations caused by a person, such as someone touching a table on which the camera is placed.
Optionally, the initial vibration video is a video acquired directly by the camera without amplification processing, in which some minor vibrations of the detected object are hard to observe. The initial vibration video includes a plurality of video frames.
202. The processor determines vibration information of a target feature point of the automobile sunroof component from the initial vibration video, where the detected object includes at least one vibration feature point and the target feature point is any one of the at least one vibration feature point.
Optionally, the target feature point may be one or more points that remain relatively stationary in the environment image of the detected object during video capture. The environment image may be, for example, an image of the environment in which the detected object is located, such as a background image.
203. The processor performs vibration amplification processing on the initial vibration video according to the vibration information by using a preset vibration amplification method, to obtain a target vibration video of the automobile sunroof component.
The vibration information may be a vibration spectrogram of the target feature point.
Optionally, the vibration amplification processing of the initial vibration video may be: amplifying the amplitude of the vibration waveform of the target feature point in the initial vibration video, so as to magnify the vibration of that point. The amplification is local, but can also be global; for example, if every point in the video is a target feature point, the presented effect is a global amplification.
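The local versus global distinction can be illustrated with a boolean mask of target pixels. This is a hedged sketch using a simple mean-based temporal amplification, not the patent's preset method; with an all-True mask it degenerates to the global case:

```python
import numpy as np

def magnify_local(frames, mask, alpha=8.0):
    """Amplify the temporal vibration waveform only at masked target
    feature pixels of a (T, H, W) clip; unmasked pixels pass through
    unchanged (the 'local amplification' case)."""
    frames = frames.astype(np.float64)
    mean = frames.mean(axis=0, keepdims=True)    # per-pixel temporal mean
    amplified = mean + alpha * (frames - mean)   # amplified waveform
    return np.where(mask[None, :, :], amplified, frames)

# Two pixels vibrate identically, but only one is a target feature point.
t = np.arange(16)
frames = np.full((16, 2, 2), 50.0)
frames[:, 0, 0] += np.sin(2 * np.pi * t / 8)   # target pixel
frames[:, 1, 1] += np.sin(2 * np.pi * t / 8)   # non-target pixel
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True

out = magnify_local(frames, mask, alpha=8.0)
```

Only the masked pixel's waveform is amplified (peak-to-peak 2.0 grows to about 16.0); the unmasked pixel keeps its original swing.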
204. The display shows a target vibration video.
Optionally, the target vibration video may be displayed in real time on the display screen of the electronic device, or the video may be stored and displayed when a display instruction is received. The display instruction may be issued by a user, for example by clicking a display-instruction button on the electronic device; it may of course also be issued in other ways, for example by another electronic device that forwards the instruction to the electronic device storing the target vibration video.
In this example, a camera is used for video acquisition, vibration information is obtained from the video, and the vibration is finally displayed through a video. Compared with the prior-art method of acquiring vibration with acceleration sensors, which requires installing multiple groups of sensors when inspecting a device, only the initial vibration video needs to be captured: the vibration information of the detected object is extracted from the video, the video is amplified according to that information to obtain the target vibration video, and the magnified vibration of the target object is displayed. The cost of acquiring and displaying vibration information can thus be reduced to a certain extent.
In a possible embodiment, the initial vibration video includes a plurality of video frames, the target feature point in a target video frame includes at least one target pixel point, and the target video frame is any one of the plurality of video frames. One possible way of determining the vibration information of the target feature point of the detected object from the initial vibration video includes steps A1 to A8, as follows:
A1. The processor determines a reference vibration region of the automobile sunroof component;
A2. The processor randomly determines at least one reference video frame from the plurality of video frames;
A3. The processor determines a reference feature point according to the at least one reference video frame, the reference feature point being a point of the environment image of the detected object that remains relatively stationary during video capture;
A4. The processor determines a target feature point from the reference vibration region according to the reference feature point, the initial vibration video and a preset feature point acquisition method, the target feature point including at least one pixel point;
A5. The processor acquires brightness information of the at least one target pixel point from the plurality of video frames;
A6. The processor determines, according to the brightness information, a plurality of brightness change factors for each of the at least one target pixel point;
A7. The processor determines a target brightness change factor for each target pixel point from its plurality of brightness change factors, using a preset brightness-factor determination method;
A8. The processor determines the vibration information of the target feature point according to the target brightness change factor of each target pixel point.
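One plausible concrete reading of steps A5 to A8, assuming the "brightness change factors" are Fourier magnitudes of a target pixel's brightness time series and the "preset brightness factor determination method" picks the dominant one (the patent does not disclose the actual method):

```python
import numpy as np

def dominant_vibration_freq(brightness, fps):
    """From a target pixel's brightness time series (step A5), take the
    FFT magnitudes as candidate 'brightness change factors' (step A6)
    and return the dominant non-DC frequency in Hz (steps A7-A8).
    An illustrative assumption, not the patent's disclosed method."""
    spectrum = np.abs(np.fft.rfft(brightness - np.mean(brightness)))  # drop DC
    freqs = np.fft.rfftfreq(len(brightness), d=1.0 / fps)
    return float(freqs[np.argmax(spectrum)])

# A pixel whose brightness flickers at 8 Hz, sampled at 64 frames/s.
fps = 64
t = np.arange(128) / fps
signal = 100 + 0.3 * np.sin(2 * np.pi * 8.0 * t)
print(dominant_vibration_freq(signal, fps))   # → 8.0
```

The returned dominant frequency is one natural candidate for the "vibration spectrogram" mentioned above, here reduced to its strongest component.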
Optionally, one possible method for determining the reference vibration region is as follows. The processor acquires the vibration audio of the sunroof component and determines the reference vibration region of the component from that audio. The vibration audio may be collected by an audio collector that is moved around the sunroof component while recording, or by an audio collector fixed at a position inside the vehicle. The reference vibration region may then be determined by locating the part that generates the sound from the audio information and taking a circular area centred on that part, with a preset radius, as the reference vibration region. The preset radius is set from empirical values or historical data. The sound-generating part can be located from the audio volume: the volume is larger close to the source and smaller far from it.
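The volume heuristic above (louder near the source) can be sketched by sampling volumes at candidate positions and centring the reference region on the loudest one. Positions, volumes and the preset radius below are illustrative assumptions:

```python
import numpy as np

def reference_region(positions, volumes, radius):
    """Pick the loudest sampled position as the sound-generating part
    and return a circular reference vibration region (centre, radius).
    The radius stands in for the 'preset radius' set from empirical
    values; all names here are hypothetical."""
    centre = positions[int(np.argmax(volumes))]
    return centre, radius

# Three sampling positions around the sunroof; the middle one is loudest.
pos = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])
vol = np.array([0.2, 0.9, 0.4])
centre, r = reference_region(pos, vol, radius=0.5)
print(centre.tolist(), r)   # → [1.0, 2.0] 0.5
```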
Optionally, another possible method for determining the reference vibration region is as follows. The processor queries a preset sunroof abnormal-noise database to obtain at least one reference position where the sunroof is prone to abnormality, and determines the reference vibration region of the sunroof component from the at least one reference position. The database contains at least one reference position, i.e. a position where sunroof components readily produce abnormal noise, set from empirical values or historical data. The reference vibration region may be determined from the vibration audio of the sunroof component according to the direction from which the audio is received, the audio collector being at a fixed position in this case: the reference position lying in that direction is taken as the reference vibration region.
A possible method for extracting multiple stable motion feature points in the image of the vibrating object, according to a reference feature point, the target video and a preset motion-feature-point extraction strategy, may be:
Rectangularly partition each frame image included in at least one frame of the target video, and combine the partitions into a plurality of basic partitions of N x N cells; detect whether the edge cells of each basic partition contain a relative motion feature point, i.e. a feature point that moves relative to the reference feature point; if not, determine that the basic partition contains no target feature point; if so, determine whether the movement distance of the relative motion feature point is within a preset range, and if it is, take that point as a target feature point.
Specifically, besides the vibration picture of the object to be detected, the target video also contains some relatively static background images; for example, the surrounding houses are captured when filming a swaying electric wire, or the floor of the building is captured when filming a vibrating engine indoors. Points on objects in a relatively static state are taken as reference feature points, and target feature points in the image of the vibrating object are extracted according to the preset motion-feature-point extraction strategy.
For at least one frame of the target video, not every area contains moving points, and checking every area of the image frame one by one would take a long time to obtain stable motion feature points. A suitable partitioning strategy can therefore be used to extract motion feature points more efficiently. Referring to fig. 2B, fig. 2B is a diagram of an image frame partitioning strategy according to an embodiment of the present disclosure. As shown in fig. 2B, each image frame can first be rectangularly partitioned per pixel, one partition corresponding to one pixel (such as partition 21); the partitions are then combined into a plurality of basic partitions of N x N cells, where N is an integer greater than 2. For example, with N = 3 the basic partition 22 is obtained. It is then detected whether the edge cells of each basic partition contain a relative motion feature point; in basic partition 22, for instance, the edge cells marked by black dots are checked, and if no relative motion feature point is found among them, it is determined that basic partition 22 contains no stable motion feature point.
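The partition-and-check strategy of fig. 2B can be sketched as follows, assuming a boolean per-pixel motion mask whose dimensions are multiples of N (here N = 3). This is an illustrative reading, not the patent's exact implementation:

```python
import numpy as np

def edge_partitions_have_motion(motion_mask, n=3):
    """Group per-pixel partitions of an H x W boolean motion mask into
    n x n basic partitions and report, per basic partition, whether any
    *edge* cell contains a relative motion point (interior cells are
    ignored, as in the fig. 2B strategy). Assumes H and W are
    multiples of n."""
    h, w = motion_mask.shape
    blocks = motion_mask.reshape(h // n, n, w // n, n).copy()
    blocks[:, 1:-1, :, 1:-1] = False     # drop interior cells, keep the edge ring
    return blocks.any(axis=(1, 3))       # True where an edge cell moved

mask = np.zeros((6, 6), dtype=bool)
mask[1, 1] = True    # interior of the top-left basic partition: ignored
mask[0, 4] = True    # edge cell of the top-right basic partition: detected
print(edge_partitions_have_motion(mask, n=3).tolist())
# → [[False, True], [False, False]]
```

Only basic partitions flagged True need the follow-up movement-distance check, which is the time saving the strategy aims at.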
if the edge partition contains the relative motion characteristic point, whether the motion distance of the relative motion characteristic point is in a preset range or not is determined, the target characteristic point represents that the vibrating object does reciprocating motion, and the corresponding motion distance is in a certain range. When the movement distance of the movement characteristic point exceeds the preset range, the movement characteristic point is considered to be a point which moves violently, and the target characteristic point cannot be judged. The method for obtaining the preset range can be determined according to an empirical value, and can also be used for clustering the movement distances of all relative movement characteristic points to obtain a cluster containing the most data, then determining the preset range of the movement distances according to a convergence value, wherein if the convergence value is 0.9, the numerical range which is satisfied by 90% of the movement distances is the preset range.
In this example, the reference feature points are selected in advance; the reference feature points are relatively static points, and the target feature points are then determined according to the reference feature points. The target feature points can thus be obtained automatically, and determining them by means of reference feature points can improve the accuracy of their acquisition.
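The block-wise screening described above can be sketched as follows. This is an illustrative assumption of one possible implementation: `motion_mask` marks pixels detected as relative-motion points, `distances` holds each pixel's movement distance, and the distance range and block size are example values, not values fixed by the text.

```python
import numpy as np

def screen_target_points(motion_mask, distances, n=3, dist_range=(0.5, 5.0)):
    """Screen target feature points block by block: pixels are grouped into
    N x N basic partitions, only the edge cells of each partition are
    inspected, and a relative-motion point qualifies as a target point when
    its movement distance falls in a preset range (values assumed here)."""
    h, w = motion_mask.shape
    targets = []
    for top in range(0, h - n + 1, n):
        for left in range(0, w - n + 1, n):
            block = motion_mask[top:top + n, left:left + n]
            # Mask selecting only the edge cells of the N x N block.
            edge = np.ones((n, n), dtype=bool)
            edge[1:-1, 1:-1] = False
            if not (block & edge).any():
                continue  # no relative-motion point on the edge: skip block
            rows, cols = np.nonzero(block & edge)
            for r, c in zip(rows, cols):
                d = distances[top + r, left + c]
                if dist_range[0] <= d <= dist_range[1]:
                    targets.append((top + int(r), left + int(c)))
    return targets
```

Blocks whose edges contain no moving point are skipped without examining their interior, which is where the time saving comes from.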
Another method for determining the target feature point from the reference vibration region according to the reference feature point, the initial vibration video and the preset feature point acquisition method includes steps A41-A43, as follows:
A41, dividing the plurality of video frames into a plurality of video frame sets according to the sliding stroke of the automobile skylight component, where each video frame set includes at least one video frame, each video frame set corresponds to a section of the sliding track of the automobile skylight, and the directions of the sliding tracks corresponding to different video frame sets are different;
A42, performing frame-skipping detection on each video frame set, and determining the video frame sets that require vibration feature point screening;
A43, determining the target feature point from the reference vibration region according to the video frames in the video frame sets, the reference feature point and the preset feature point acquisition method.
The sliding stroke of the automobile skylight component can be understood as the stroke of opening and closing the automobile skylight; the directions of the sliding tracks during opening and closing are different. When video frames are collected, video frames of multiple consecutive opening and closing processes of the automobile skylight component can be collected within a preset time period. The preset time period is set by an empirical value or historical data.
Optionally, the method of dividing the plurality of video frames into a plurality of video frame sets may be: classifying the video frames with the same sliding track direction and the same sliding stroke during one opening or closing process into one video frame set.
Optionally, the method for performing frame-skipping detection on each video frame set to determine the video frame sets requiring vibration feature point screening may be: judging whether the first video frame of the video frame set vibrates when the same stroke is opened or closed, and if so, determining that set to be a video frame set requiring vibration feature point screening. The first video frame can be understood as the first video frame captured for the same stroke. Judging whether the first video frame vibrates can be understood as: judging whether the first video frames are the same; if they are the same, determining that no vibration occurs, and if they differ, determining that vibration occurs.
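The "are the first frames the same" check above can be sketched as a simple pixel comparison. The tolerance parameter `tol` is an assumption: the text only says frames that are not identical indicate vibration, so `tol=0` reproduces that literal rule.

```python
import numpy as np

def needs_screening(first_frame_a, first_frame_b, tol=0):
    """Frame-skipping check: two 'first' frames of the same stroke are
    compared, and any pixel difference beyond `tol` is taken as evidence of
    vibration, marking the set for feature-point screening."""
    diff = np.abs(first_frame_a.astype(int) - first_frame_b.astype(int))
    return bool((diff > tol).any())
```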
Optionally, the method for determining the target feature point from the reference vibration region according to the video frames in the video frame set, the reference feature point and the preset feature point acquisition method may refer to the method for determining the target feature point in the foregoing embodiment.
Optionally, the target feature point is a selected feature point composed of one or more target pixel points at its position. Because the position of the target feature point may change in each video frame, the target pixel points forming the target feature point may also change. When the brightness information of at least one target pixel point is obtained, it may therefore be the brightness information of different pixel points; however, the pixel points of the target feature point in different video frames have corresponding relations, and these relations can be determined directly from the target feature point.
Optionally, the brightness information may be represented by a gray-scale value, and different gray-scale values represent different brightness information.
Optionally, the method for determining multiple brightness change factors of at least one target pixel point according to the brightness information may be: determining a plurality of brightness change factors from the brightness information by an optical flow method, where each brightness change factor can be represented by a waveform and may be a change factor produced by vibration, light change, position change and the like. Optical flow can be used to track multiple points within a region of interest. Since two components of motion (vertical and horizontal) are detected during tracking, selecting the component for analysis and detection becomes important; it is noted that the horizontal motion is mainly due to dynamic balance sway, so the horizontal component is omitted herein. The vertical component signal may be passed through a Butterworth band-pass digital filter whose frequency band may be selected as 0.75 to 2 Hz; of course, other frequency bands are also possible, and this is merely an example, not a specific limitation. The predicted mechanical-model frequency range of the device under test is provided in this application and is obtained by spectral analysis using the fast Fourier transform. The preset mechanical-model frequency range of the tested device is mainly used for predicting the power corresponding to the vibration of the mechanical equipment.
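The band-pass step above can be sketched with a standard Butterworth design. The 0.75-2 Hz band comes from the text; the filter order and the use of zero-phase filtering (`filtfilt`) are assumptions, not details stated in the application.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_vertical(signal, fs, low=0.75, high=2.0, order=4):
    """Butterworth band-pass filtering of the vertical optical-flow
    component. `fs` is the video frame rate in Hz; the pass band defaults
    to the 0.75-2 Hz range mentioned in the text."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt applies the filter forward and backward for zero phase shift.
    return filtfilt(b, a, signal)
```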
Optionally, a possible method for determining the target brightness change factor of each target feature pixel from the plurality of brightness change factors of each target pixel point includes steps A71-A72, as follows:
A71, the processor performs principal component analysis on the brightness change factors of at least one target pixel point by adopting a principal component decomposition method, to obtain a reference brightness change factor meeting a preset feature;
A72, the processor filters the reference brightness change factor to obtain the target brightness change factor.
Principal Component Analysis (PCA) is a dimensionality-reduction method that can select, from a plurality of brightness change factors, the specific brightness change factor caused by vibration. The preset feature is the change in brightness caused by vibration.
Optionally, when the reference brightness change factor is filtered to obtain the target brightness change factor, the adopted filtering method may be an interpolation filtering method or the like.
In this example, the principal component decomposition method is used to perform dimensionality reduction to obtain the reference brightness change factor, and the reference brightness change factor is then filtered to obtain the target brightness change factor.
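Step A71 can be sketched with an SVD-based PCA. Treating the first principal component as the vibration-induced reference brightness change factor is an assumption; the text only says PCA selects the factor that meets the preset (vibration) feature.

```python
import numpy as np

def reference_change_factor(factors):
    """PCA step sketched from A71: `factors` is an (n_factors, n_samples)
    array of brightness-change waveforms. The rows are centered and the
    first principal direction in time is returned as the reference
    brightness change factor."""
    x = factors - factors.mean(axis=1, keepdims=True)
    # SVD of the centered data; rows of vt are principal directions in time.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]
```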
In a possible embodiment, the vibration information includes a vibration spectrogram, and a possible method for performing vibration amplification processing on the initial vibration video by using a preset vibration amplification method according to the vibration information to obtain the target vibration video includes steps B1-B4, as follows:
B1, the processor acquires a plurality of pieces of reference vibration mode information of the target feature point from the vibration spectrogram;
B2, the processor determines at least one piece of target vibration mode information from the plurality of pieces of reference vibration mode information;
B3, the processor obtains the target magnification of the at least one piece of target vibration mode information;
B4, the processor amplifies the amplitude of the at least one piece of vibration mode information by the target magnification to obtain the target vibration video.
The method for acquiring the multiple pieces of reference vibration mode information from the vibration spectrogram may be: determining a plurality of harmonic components of the vibration waveform of the target feature point according to the vibration spectrogram, and taking the harmonic components as the plurality of pieces of reference vibration mode information. The waveform of each harmonic component is directly reflected in the vibration spectrogram, so the harmonic components can be determined directly from the spectrogram. Of course, a vibration waveform expression of the target feature point can also be obtained from the vibration spectrum and a Taylor expansion performed on it; after the Taylor expansion, the terms of different powers can be extracted directly from the expansion, and these terms of different powers are the waveform expressions of the harmonic components.
Optionally, when the at least one piece of target vibration mode information is determined from the multiple pieces of reference vibration mode information, the determination may be performed according to the specific requirement of the vibration that needs to be amplified; for example, if the 2nd harmonic component needs to be amplified, the 2nd harmonic component in the reference vibration mode information may be used as the target vibration mode information. The specific requirement may be determined through an empirical value or historical data.
Optionally, when the target magnification factor of the at least one target vibration mode information is obtained, the target magnification factor corresponding to the detected object may be extracted from the database, and different detected objects may have different magnification factors, which may be specifically determined by a mapping relationship between the detected object and the magnification factor, where the mapping relationship is established by an empirical value or historical data and is pre-stored in the electronic device.
Optionally, when the amplification processing is performed, the amplitude of the vibration mode information may be amplified, so as to achieve an effect of amplifying the vibration information. The vibration of the amplified target characteristic points can be observed by human eyes, so that the practicability of the video display method can be improved.
In this example, by extracting a plurality of pieces of vibration mode information of the target feature point, determining the target vibration mode information from the vibration mode information, and performing amplitude amplification processing on the target vibration mode information, the small vibration can be amplified to become vibration that can be seen by human eyes, and thus the practicability of the video display method can be improved.
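Steps B1-B4 can be sketched in the frequency domain: pick out the spectral bins belonging to one vibration mode, scale them by the target magnification, and reconstruct the signal. The FFT-based implementation and the band half-width are assumptions; the text only specifies that the amplitude of the selected mode is amplified.

```python
import numpy as np

def amplify_mode(signal, fs, target_freq, gain, bw=0.2):
    """Amplify the amplitude of one vibration mode: the spectrum is computed
    with an FFT, the bins within `bw` Hz of `target_freq` are scaled by
    `gain`, and the signal is reconstructed by the inverse FFT."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = np.abs(freqs - target_freq) <= bw
    spec[band] *= gain  # amplify only the selected mode
    return np.fft.irfft(spec, n=len(signal))
```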
In a possible embodiment, in the scheme of the present application, filtering processing may be further performed on the reference vibration mode information, and a possible filtering method includes steps C1-C4, as follows:
C1, the processor determines the initial filtering bandwidth corresponding to the multiple pieces of reference vibration mode information according to the frequency bands of the multiple pieces of reference vibration mode information;
C2, the processor determines the bandwidth correction factors corresponding to the multiple pieces of reference vibration mode information according to the positions of the multiple pieces of reference vibration mode information in the initial vibration video;
C3, the processor determines the target filtering bandwidth corresponding to the multiple pieces of reference vibration mode information according to the initial filtering bandwidth and the bandwidth correction factor;
C4, the processor filters the multiple pieces of reference vibration mode information using the target filtering bandwidth to obtain the multiple pieces of filtered reference vibration mode information.
In the picture corresponding to the initial vibration video, the state changes at different positions differ, and the frequency distributions of the corresponding reference vibration mode information also differ. If the same filtering bandwidth were used throughout, the bandwidth would have to be set to a larger value to avoid filtering out useful signal, but noise would then not be filtered out. In this filtering method, the initial filtering bandwidth corresponding to each piece of reference vibration mode information is determined according to the frequency band corresponding to that piece of information; the frequency band is positively correlated with the initial filtering bandwidth, so the larger the frequency band corresponding to the reference vibration mode information, the wider the corresponding initial filtering bandwidth. The filtering bandwidth thereby fits each piece of reference vibration mode information better: noise signals are effectively filtered out, useful signals are kept, and the filtering effect is better.
For example, B × (1 − a) may be used as the initial filtering bandwidth, where B is the frequency band corresponding to each piece of reference vibration mode information and a may be determined according to actual requirements: a may be set to a larger value when a better denoising effect is needed, and to a smaller value when more useful signal needs to be retained. Of course, in other embodiments, the method of setting the initial filtering bandwidth is not limited to the above example.
Optionally, the effect of the vibration at different positions in the initial vibration video on the operation condition of the camera is different. Then the bandwidth correction factor corresponding to the reference vibration mode information can be determined according to the corresponding position of each reference vibration mode information in the detected video. And adjusting the initial filtering bandwidth by using the bandwidth correction factor to ensure that the filtering bandwidth is more reasonable.
For example, a picture corresponding to the initial vibration video may be divided into a plurality of regions according to the magnitude of the effect of the vibration of the corresponding position of the reference vibration mode information in the detection video on the operation condition of the camera, and a preset bandwidth correction factor is set for each region, so that when the bandwidth correction factor is determined, the preset bandwidth correction factor corresponding to the region where the corresponding position of each reference vibration mode information in the detection video is located may be directly obtained, and the preset bandwidth correction factor is used as the bandwidth correction factor corresponding to the reference vibration mode information. Therefore, the bandwidth correction factor can be obtained quickly and accurately.
Optionally, the user may set a bandwidth weighting coefficient for each region in the picture corresponding to the initial vibration video according to the structure of the device to be detected, and then use the product of the preset bandwidth correction factor and the bandwidth weighting coefficient as the bandwidth correction factor of the corresponding region, so that the filtering bandwidth obtained for each piece of reference vibration mode information is more reasonable. For example, the vibration of some key parts of the camera can reflect its operating condition better than other parts, and the bandwidth weighting coefficient of the region corresponding to such a key part in the picture of the initial vibration video can be set to a larger value, so that the filtering bandwidth is larger and more useful information can be extracted.
Optionally, one possible method for determining the target filtering bandwidth corresponding to the multiple pieces of reference vibration mode information according to the initial filtering bandwidth and the bandwidth correction factor is: multiplying the bandwidth correction factor by the initial filtering bandwidth to obtain the target filtering bandwidth.
In this example, the target filtering bandwidth obtained by this method comprehensively considers both the position of the reference vibration mode information in the initial vibration video and its corresponding frequency band, so that the filtering bandwidth is better adapted to each piece of reference vibration mode information and a useful state-change signal can be extracted from each piece.
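Steps C1-C3 can be sketched as follows. The value of `a`, the per-region factor table, and the optional user weighting are assumptions standing in for the empirical values the text says the user would set.

```python
def target_bandwidths(bands, region_factors, region_of, a=0.3, weights=None):
    """Per-mode target filtering bandwidth: initial bandwidth B*(1-a) (C1),
    corrected by the preset factor of the image region the mode belongs to
    (C2), optionally scaled by a user bandwidth weighting coefficient, and
    multiplied together (C3).
    bands: {mode_id: frequency band B}; region_of: {mode_id: region_id};
    region_factors: {region_id: preset correction factor}."""
    weights = weights or {}
    result = {}
    for mode, b in bands.items():
        initial = b * (1.0 - a)                  # C1: initial bandwidth
        region = region_of[mode]
        factor = region_factors[region]          # C2: correction factor
        factor *= weights.get(region, 1.0)       # optional user weighting
        result[mode] = initial * factor          # C3: target bandwidth
    return result
```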
In a possible embodiment, the vibration information can be analyzed to obtain an abnormal vibration probability value; if the abnormal vibration probability value is higher than a preset abnormal vibration probability threshold, alarm information is sent. The method for analyzing the vibration information to obtain the abnormal vibration probability value may be: comparing the vibration spectrogram with a preset vibration spectrogram to obtain the similarity between the two, and determining the abnormal vibration probability value according to the similarity. The method for comparing the vibration spectrogram with the preset vibration spectrogram may be: performing feature extraction on the vibration spectrogram to obtain feature data, and comparing the feature data with preset feature data of the vibration spectrum to obtain the similarity. A feature extraction algorithm may be used for the extraction, for example a histogram of oriented gradients, a local binary pattern, and the like. The method for determining the abnormal vibration probability value according to the similarity is to take 1 minus the similarity as the abnormal vibration probability value. The preset abnormal vibration probability threshold may be set by an empirical value or historical data. The alarm information may be, for example, a voice alarm message, a vibration alarm message, and the like; these are merely illustrative, not specific limitations.
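The probability computation above can be sketched as follows. Cosine similarity stands in for the unspecified feature comparison (an assumption); the "1 minus similarity" rule and the threshold-based alarm come from the text, though the threshold value itself is an assumed example.

```python
import numpy as np

def abnormal_probability(spectrum, reference_spectrum):
    """Abnormal vibration probability: 1 minus the similarity between the
    measured spectrum and a preset reference spectrum. Cosine similarity is
    used here as an illustrative similarity measure."""
    a = np.asarray(spectrum, dtype=float)
    b = np.asarray(reference_spectrum, dtype=float)
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - sim

def should_alarm(spectrum, reference_spectrum, threshold=0.3):
    """Send alarm information when the abnormal vibration probability
    exceeds a preset threshold (threshold value is an assumed example)."""
    return abnormal_probability(spectrum, reference_spectrum) > threshold
```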
In a possible embodiment, the user may click a feature point by clicking a display screen of the electronic device, so as to display the vibration information at the feature point, which specifically includes steps D1-D3, as follows:
D1, the processor receives a touch instruction input by a user;
D2, the processor determines, according to the touch instruction, the feature points to be displayed whose vibration information needs to be displayed from the at least one vibration feature point;
D3, the processor displays the vibration information of the characteristic points to be displayed.
The receiving of the touch instruction input by the user may be: and clicking a specific position of the target video by the user, and inputting a touch instruction in a clicking mode. The specific position may be a position where the vibration feature point is located, or may be another position in the video, and is not limited specifically herein.
Optionally, the feature points to be displayed may be determined according to the position touched by the touch instruction. For example, if the touch position coincides with the position of a vibration feature point, that vibration feature point is used as a feature point to be displayed.
Optionally, the manner of displaying the vibration information of the feature points to be displayed may be: and displaying vibration information at the position of the characteristic point to be displayed in a pop-up window mode, wherein the vibration information can be a vibration oscillogram, a vibration frequency spectrogram, a vibration parameter and the like.
In one possible embodiment, the user may wear augmented reality glasses to perform a vibration data test on the detected object, and the method includes steps E1-E6, as follows:
E1, the user wears the augmented reality glasses and starts a vibration data test mode; the augmented reality glasses perform video acquisition on the object to be detected and send the acquired video to a cloud server wirelessly;
E2, after receiving the video, the cloud server determines vibration information of a target feature point of the detected object from the video, where the detected object includes at least one vibration feature point and the target feature point is any one of the at least one vibration feature point;
E3, the cloud server sends the vibration information of each feature point to the augmented reality glasses;
E4, the augmented reality glasses receive a touch instruction of the user;
E5, the vibration feature points requiring vibration information display are determined according to the touch instruction;
E6, the augmented reality glasses display the vibration information of the vibration feature points.
The vibration data test mode can be understood as a state of performing vibration video acquisition on the detected object; after the vibration data test mode is started, vibration video acquisition can be performed on the detected object.
Optionally, the step E2 may refer to a specific implementation procedure of the step 202, and the steps E4 and E5 may refer to a specific implementation manner of the steps D1 and D2.
Optionally, the method for displaying the vibration information of the vibration feature points may be: when displaying on the augmented reality glasses, if the position of the glasses changes, the display positions of the collected feature points in the glasses change correspondingly, and the position of the displayed information can move as the feature points move.
Optionally, after receiving the vibration information of each feature point, the vibration information of each feature point may be directly displayed on the augmented reality glasses.
Optionally, the security of the communication between the augmented reality glasses and the cloud server may be improved by the following method:
Before data transmission, a secure communication channel is established, and data transmission is performed through this channel. One possible method for establishing the secure communication channel involves the cloud server, the augmented reality glasses and a proxy device, where the proxy device is a trusted third-party device; the method is specifically as follows:
S1, initialization: the initialization stage mainly completes the registration of the cloud server and the augmented reality glasses with the proxy device, the subscription of topics, and the generation of system parameters. The cloud server and the augmented reality glasses register with the proxy device; only registered cloud servers and augmented reality glasses may participate in publishing and subscribing to topics, and the augmented reality glasses subscribe to the relevant topics with the proxy device. The proxy device generates a system public parameter (PK) and a master key (MSK), and sends PK to the registered cloud server and augmented reality glasses.
S2, encryption and publication: in the encryption and publication stage, the cloud server mainly encrypts the payload corresponding to the topic to be published and sends it to the proxy device. First, the cloud server encrypts the payload with a symmetric encryption algorithm to generate a ciphertext (CT); it then formulates an access structure A and encrypts the symmetric key according to PK and the access structure A it generated; finally it sends the encrypted key and the encrypted payload to the proxy device. After receiving the encrypted key and the encrypted CT sent by the cloud server, the proxy device filters them and forwards the key and the CT to the augmented reality glasses.
Optionally, the access structure A is an access tree structure. Each non-leaf node of the access tree is a threshold gate, denoted by K_x, where 0 < K_x <= num(x) and num(x) denotes the number of child nodes of x. When K_x = num(x), the non-leaf node represents an AND gate; when K_x = 1, the non-leaf node represents an OR gate. Each leaf node of the access tree represents an attribute. The attribute sets satisfying an access tree structure can be defined as follows: let T be the access tree with root node r, and let T_x be the subtree of T rooted at node x. T_x(S) = 1 indicates that the attribute set S satisfies the access structure T_x. If node x is a leaf node, T_x(S) = 1 if and only if the attribute att(x) associated with leaf node x is an element of the attribute set S. If node x is a non-leaf node, T_x(S) = 1 when at least K_x child nodes z satisfy T_z(S) = 1.
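The threshold access-tree evaluation T_x(S) described above can be sketched as a small recursive check. This is an illustrative evaluator only, not a CP-ABE implementation; the attribute strings are assumed examples.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Access-tree node: leaves carry an attribute att(x); non-leaves carry
    a threshold k, where k = len(children) makes an AND gate and k = 1 an
    OR gate."""
    attribute: str = None
    k: int = 0
    children: list = field(default_factory=list)

def satisfies(node, attrs):
    """Return True when the attribute set `attrs` satisfies the subtree,
    i.e. T_x(S) = 1 in the notation of the text."""
    if node.attribute is not None:          # leaf: att(x) must be in S
        return node.attribute in attrs
    hits = sum(satisfies(c, attrs) for c in node.children)
    return hits >= node.k                   # at least K_x children satisfied
```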
S3, private key generation: in the private key generation stage, the proxy device mainly generates a corresponding key for the augmented reality glasses to decrypt subsequently received CTs. The augmented reality glasses provide an attribute set A_i to the proxy device (the attributes can be information such as the features and roles of the subscriber); the proxy device generates a private key SK according to PK, the attribute set A_i and the master key MSK, and then sends the generated private key SK to the augmented reality glasses.
Optionally, the attribute set A_i is a subset of the global set U = {A_1, A_2, …, A_n}. The attribute set A_i indicates the attribute information of augmented reality glasses i (the i-th pair of augmented reality glasses), which may be features, roles and the like of the glasses, and the global set U indicates the set of attribute information of all augmented reality glasses, i.e. the default attributes of the augmented reality glasses.
S4, decryption: the decryption stage is mainly the process in which the augmented reality glasses decrypt the encrypted payload and extract the plaintext. After receiving the encrypted key and the CT sent by the proxy device, the augmented reality glasses decrypt the encrypted key according to PK and SK to obtain the symmetric key. If their attribute set A_i satisfies the access structure of the ciphertext, the ciphertext can be successfully decrypted, thereby guaranteeing the security of the communication process.
By constructing the secure communication channel, the security of communication between the augmented reality glasses and the cloud server can be improved to a certain extent, reducing the possibility that an illegal user steals data transmitted between legitimate augmented reality glasses and the cloud server, and also reducing the occurrence of illegal users stealing important data in the system by intruding into or tampering with the system.
In a possible embodiment, after the vibration information is displayed, the vibration data may be further analyzed to determine the type of vibration, such as normal vibration or abnormal vibration. The vibration data includes a vibration waveform diagram, which may be obtained by transforming the vibration spectrogram, for example by an inverse Fourier transform. The abnormality analysis method may include steps F1-F7, as follows:
F1, the processor determines at least one vibration maximum and at least one vibration minimum according to the vibration waveform diagram;
F2, the processor performs a mean operation on the at least one vibration maximum to obtain a target vibration maximum mean, and performs a mean operation on the at least one vibration minimum to obtain a target vibration minimum mean;
F3, if the target vibration maximum mean lies outside a preset first vibration extremum interval and the target vibration minimum mean lies outside a preset second vibration extremum interval, the vibration data is determined to be suspected abnormal vibration data, where the minimum of the first vibration extremum interval is larger than the maximum of the second vibration extremum interval;
F4, the processor determines a target vibration maximum from the at least one vibration maximum, the target vibration maximum being the largest of the at least one vibration maximum, and determines a target vibration minimum from the at least one vibration minimum, the target vibration minimum being the smallest of the at least one vibration minimum;
F5, the processor subtracts a preset vibration maximum from the target vibration maximum to obtain a first difference, and subtracts a preset vibration minimum from the target vibration minimum to obtain a second difference;
F6, the processor subtracts the absolute value of the first difference from the absolute value of the second difference to obtain a third difference;
F7, if the absolute values of the first difference and the second difference are both larger than a preset first difference threshold and the third difference is smaller than a preset second difference threshold, the suspected abnormal vibration information is determined to be abnormal vibration information, where the preset first difference threshold is larger than the preset second difference threshold.
The method for determining the vibration maxima and minima according to the vibration waveform diagram may be: determining the vibration maxima and minima by a differencing method.
Optionally, the first vibration extremum interval, the second vibration extremum interval, the preset first difference threshold, and the preset second difference threshold are set by empirical values or historical data.
In this example, the extrema of the vibration waveform are obtained from the vibration waveform diagram, the vibration data is first determined to be suspected abnormal vibration data according to the extrema, and the suspected abnormal vibration data is then determined to be abnormal vibration data according to the target vibration maximum and target vibration minimum; through this two-stage determination and analysis, the accuracy of identifying abnormal vibration data can be improved to a certain extent.
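The two-stage analysis F1-F7 can be sketched as follows. The extrema are found with the differencing method mentioned in the text; all interval and threshold values passed in are assumed to come from empirical values or historical data, as the text states.

```python
import numpy as np

def find_extrema(waveform):
    """F1 via differencing: a sample is a maximum when the difference changes
    from positive to negative, and a minimum in the opposite case."""
    d = np.sign(np.diff(waveform))
    maxima = waveform[1:-1][(d[:-1] > 0) & (d[1:] < 0)]
    minima = waveform[1:-1][(d[:-1] < 0) & (d[1:] > 0)]
    return maxima, minima

def is_abnormal(waveform, max_interval, min_interval,
                preset_max, preset_min, thr1, thr2):
    """Two-stage check sketched from F2-F7."""
    maxima, minima = find_extrema(waveform)
    if len(maxima) == 0 or len(minima) == 0:
        return False
    # F2-F3: extremum means outside the preset intervals -> suspected abnormal
    suspect = (not max_interval[0] <= maxima.mean() <= max_interval[1]) and \
              (not min_interval[0] <= minima.mean() <= min_interval[1])
    if not suspect:
        return False
    # F4-F7: compare the largest maximum / smallest minimum with the presets
    first = maxima.max() - preset_max
    second = minima.min() - preset_min
    third = abs(second) - abs(first)
    return abs(first) > thr1 and abs(second) > thr1 and third < thr2
```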
In one possible embodiment, the damage information of the automobile skylight component can be obtained and displayed through steps G1-G4, as follows:
G1, the processor determines a reference damaged area of the automobile skylight component according to the reference vibration area;
G2, the processor determines a target damaged area from the reference damaged area according to the vibration information of the target feature points;
G3, the processor determines a target component from the target damaged area by adopting a preset component damage determination method, wherein the target component is a damaged component in the automobile skylight component, and obtains damage information of the target component;
G4, the display shows the damage information.
The initial vibration video may be analyzed to determine a reference damaged area of the sunroof component from the reference vibration area. When determining the reference damaged area, a region of the reference vibration area whose vibration frequency is higher than a first frequency threshold can be taken as the reference damaged area; because a normal automobile skylight component vibrates at a lower frequency when the automobile bumps, a region with a higher vibration frequency can be treated as the reference damaged area. A region of the reference vibration area that contains a connecting member may also be used as the reference damaged area; the connecting member may be, for example, a screw.
The target damaged area may be determined according to the vibration information: if the waveform of the vibration information is an abnormal vibration waveform, a region within a preset range of the target feature point may be taken as the target damaged area. The preset range may be a circular region centered on the feature point with a preset radius, where the preset radius is set according to empirical values or historical data.
Optionally, if the vibration frequency of the target feature point is higher than a second frequency threshold, the region where the target feature point is located is taken as the target damaged area, wherein the first frequency threshold is less than the second frequency threshold.
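The two frequency thresholds above can be combined into one screening pass, sketched below (the region data layout, function name, and dictionary keys are assumptions made for illustration):

```python
def screen_damage_regions(regions, first_freq_threshold, second_freq_threshold):
    """Two-threshold screening: regions vibrating above the first
    frequency threshold become reference damaged areas; among those,
    regions whose target feature point vibrates above the second
    threshold become target damaged areas."""
    assert first_freq_threshold < second_freq_threshold
    reference = [r for r in regions if r["freq"] > first_freq_threshold]
    target = [r for r in reference if r["freq"] > second_freq_threshold]
    return reference, target
```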
Optionally, the method for determining the target component from the target damaged area by the preset component damage determination method may be: a component damage determination model determines the target component and the damage information. The component damage determination model can be trained through a supervised or unsupervised learning method, with training samples of the form: damaged area, damaged component, and damage information of the component.
In the example, a reference damaged area can be determined from the reference vibration area, a target damaged area is determined from the reference damaged area according to vibration information of the target feature points, and finally, the damaged component and the damaged information are determined and displayed.
In one possible embodiment, the processor may screen, from the image sequences in the initial vibration video, at least one image sequence for the amplification process, the screened image sequence including the vibration feature points. An image sequence in the initial vibration video may also be referred to as a sub-band image sequence.
Optionally, a possible method for screening, from the image sequences in the initial vibration video, at least one image sequence for the amplification process includes:
determining a foreground image and a background image of the plurality of sub-band image sequences, wherein the foreground image comprises the region image of the detected product that moves back and forth, and the background image is the image other than the image of the detected product;
determining the area ratio of the foreground image in the sub-band image;
determining the number of sub-partitions of the foreground image according to the area ratio and a preset sub-partition calculation formula, and dividing the foreground image into a plurality of foreground sub-partitions according to that number;
for each sub-band image sequence, performing the following operations (1) to (6) to obtain a gray value change frequency of that sub-band image sequence:
(1) determining a detected pixel point of each foreground sub-partition of each sub-band image in the currently processed sub-band image sequence;
(2) generating a gray value time-domain change oscillogram for each detected pixel point according to the gray values of that pixel point in the plurality of sub-band images contained in the current sub-band image sequence;
(3) performing the following operations (a) to (c) for each foreground sub-partition: (a) determining whether the currently processed foreground sub-partition contains a detected pixel point whose gray value changes periodically, according to the gray value time-domain change oscillograms of the plurality of detected pixel points it contains; (b) if so, marking the currently processed foreground sub-partition as a selected foreground sub-partition; (c) if not, marking it as an unselected foreground sub-partition;
(4) for the marked selected foreground sub-partitions, splicing the foreground sub-partitions with adjacent relations into a vibration reference area according to area relevance;
(5) determining, among the pixel points in the vibration reference area, a plurality of reference pixel points whose gray values change periodically, and determining the gray value change frequency of each reference pixel point;
(6) weighting the gray value change frequencies of the plurality of reference pixel points in the vibration reference area to obtain the gray value change frequency of the currently processed sub-band image sequence;
and screening out at least one sub-band image sequence that accords with a preset reference vibration frequency according to the gray value change frequency of each sub-band image sequence.
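Operations (1)-(3) above hinge on deciding whether a pixel point's gray value changes periodically. A minimal sketch follows; the FFT-based criterion, function name, and power-ratio threshold are assumptions, since the patent itself works from time-domain oscillograms:

```python
import numpy as np

def periodic_gray_frequency(gray_series, fps, power_ratio=0.3):
    """Decide whether a detected pixel point's gray-value series
    changes periodically and, if so, return its dominant frequency
    in Hz; otherwise return None. Criterion (assumed): one non-DC
    FFT bin holds at least `power_ratio` of the spectral power."""
    series = np.asarray(gray_series, dtype=float)
    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    total = power.sum()
    if total == 0:
        return None  # constant gray value: no periodic change
    k = int(np.argmax(power[1:])) + 1  # skip the DC bin
    if power[k] / total < power_ratio:
        return None
    return np.fft.rfftfreq(series.size, d=1.0 / fps)[k]
```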
The foreground images and background images of the plurality of sub-band image sequences are determined according to the gray values of the pixel points of each image in the plurality of sub-band image sequences; reference feature points that are relatively static in the initial vibration video can also be determined from the plurality of sub-band image sequences. Then, the area ratio of the foreground image in each sub-band image is determined according to each sub-band image and its foreground image, and the number of foreground sub-partitions of each foreground image is calculated from the area ratio and a preset calculation formula: the larger the area ratio of the foreground image, the larger the number of foreground sub-partitions.
In a specific implementation, after the plurality of foreground sub-partitions are obtained, the detected pixel point of each foreground sub-partition is determined according to a preset strategy, as shown in fig. 2C, fig. 2C is a schematic diagram of a time domain variation waveform diagram of the gray value of one pixel point, wherein the horizontal axis is time, and the vertical axis is the gray value of the pixel point, and the diagram is generated according to the gray value variation of a certain pixel point to be detected of each sub-band image in the sub-band image sequence.
Whether the currently processed foreground sub-partition contains a detected pixel point with a periodically changing gray value is determined according to the gray value time-domain change oscillograms of the plurality of detected pixel points it contains: when the oscillogram of a pixel point shows a periodically changing waveform, the gray value of that pixel point changes periodically, and the foreground sub-partition where the pixel point is located is selected and marked. The marked foreground sub-partitions are then spliced according to area relevance or image color-space relevance to obtain the vibration reference area. A plurality of periodically changing pixel points in the reference vibration area are determined according to the gray value changes of the pixel points in that area, and the gray value change frequency of each reference pixel point is calculated by a formula: for example, if Ht1 represents the gray value of a certain pixel point at time t1 and Ht2 represents the gray value of the point at time t2, the gray value change frequency of the point is computed from Ht1 and Ht2 over the interval from t1 to t2. The gray value change frequencies of the plurality of pixel points of the reference vibration area are then weighted, for example into a weighted value H, and the gray value change frequency of the currently processed sub-band image sequence is determined according to H; at least one sub-band image sequence that accords with the preset reference vibration frequency is then screened out.
Alternatively, the gray value change frequencies H over different time spans in at least one sub-band image sequence can be added to obtain a plurality of different change frequency values, the preset reference vibration frequency can be selected from those values, and at least one sub-band image sequence conforming to the preset reference vibration frequency can be determined.
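Operation (6), the weighted combination of per-pixel gray value change frequencies into a single region frequency, can be sketched as follows. The patent does not fix the weights, so equal weights are an assumption here:

```python
def region_gray_change_frequency(pixel_frequencies, weights=None):
    """Weighted gray-value change frequency of a vibration reference
    area, combining the per-pixel frequencies of operation (5).
    Defaults to equal weights (an assumption)."""
    if weights is None:
        weights = [1.0 / len(pixel_frequencies)] * len(pixel_frequencies)
    return sum(w * f for w, f in zip(weights, pixel_frequencies))
```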
In the specific implementation, the initial vibration video of the detected product can be further divided into a plurality of sub-band image sequences, the plurality of image sequences are divided into a plurality of image groups, the gray value of the pixel point of each group is determined, at least one sub-band image sequence is determined according to the gray value change of the pixel point, and the sub-band image sequence is amplified.
Therefore, in this example, the vibration detection device can determine the image area used for detecting pixel-point gray values based on the area ratio of the foreground image in the sub-band image sequence, which improves the processing efficiency of the initial vibration video and avoids the extra computation that would be required to amplify higher-resolution images.
In one possible example, the preset sub-partition calculation formula is: y = [5 × 2^x], where x is the area ratio, y is the number of sub-partitions, x is greater than 0 and less than or equal to 1, and [·] denotes rounding.
The coefficient and the base of the preset formula can be adjusted according to the size of the sub-band image.
For example, when the area ratio is 50%, the number of sub-partitions is y = 5 × 2^0.5 ≈ 7.07, and the result can be rounded off to 7.
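A sketch of the preset sub-partition formula (the function name and rounding to the nearest integer are assumptions; the patent only says the result can be rounded off):

```python
def sub_partition_count(area_ratio, coeff=5.0, base=2.0):
    """Preset sub-partition formula y = [coeff * base**x], rounded to
    the nearest integer; the coefficient and base are adjustable
    according to the sub-band image size."""
    if not 0 < area_ratio <= 1:
        raise ValueError("area ratio x must satisfy 0 < x <= 1")
    return round(coeff * base ** area_ratio)
```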
Referring to fig. 3, fig. 3 is a schematic flow chart of another vibration augmented reality testing method according to an embodiment of the present application. As shown in fig. 3, the method is applied to a testing device, the testing device includes a camera, a display and a processor, the camera, the display and the processor are coupled, and the vibration augmented reality testing method includes steps 301 to 307, specifically as follows:
301. the method comprises the steps that a camera obtains an initial vibration video of a detected object;
The detected object comprises at least one vibration feature point, the target feature point is any one of the at least one vibration feature point, the initial vibration video comprises a plurality of video frames, the target feature point in each target video frame comprises at least one target pixel point, and each target video frame is any one of the plurality of video frames.
302. The processor acquires brightness information of the at least one target pixel point from the plurality of video frames;
303. The processor determines a plurality of brightness change factors of each target pixel point in the at least one target pixel point according to the brightness information;
304. The processor determines a target brightness change factor of each target characteristic pixel point from a plurality of brightness change factors of each target pixel point by adopting a preset brightness factor determination method;
305. The processor determines vibration information of the target characteristic points according to the target brightness change factors of each target characteristic pixel point;
306. The processor performs vibration amplification processing on the initial vibration video by adopting a preset vibration amplification method according to the vibration information to obtain a target vibration video;
307. The display shows the target vibration video.
In this example, the target brightness change factor is determined according to the brightness information of at least one target pixel point of the target feature point, vibration information is obtained according to the brightness change factor, and amplification processing is performed according to the vibration information to obtain the target video. Because the vibration information is determined from brightness information, it can be determined more accurately, so the accuracy of determining the target vibration video can be improved to a certain extent.
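Steps 303-304 can be sketched for one target pixel point under stated assumptions: frame-to-frame brightness differences stand in for the brightness change factors, and the largest-magnitude factor for the target brightness change factor, since the preset brightness factor determination method is not spelled out in the text:

```python
import numpy as np

def pixel_vibration_from_brightness(brightness):
    """Derive per-pixel change factors and a target factor from a
    pixel's brightness across video frames (both choices assumed)."""
    factors = np.diff(np.asarray(brightness, dtype=float))  # step 303
    target = factors[np.argmax(np.abs(factors))]            # step 304
    return factors, target
```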
Referring to fig. 4, fig. 4 is a schematic flow chart of another vibration augmented reality testing method according to an embodiment of the present application. As shown in fig. 4, the video display method includes steps 401 to 407, which are as follows:
401. the method comprises the steps that a camera obtains an initial vibration video of a detected object;
402. the processor determines the vibration information of the target characteristic point of the detected object from the initial vibration video;
the vibration information includes a vibration spectrogram, the detected object includes at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point.
403. the processor acquires a plurality of pieces of reference vibration mode information of the target characteristic points from the vibration spectrogram;
404. The processor determines at least one target vibration mode information from the plurality of reference vibration mode information;
405. The processor obtains a target amplification factor of the at least one piece of target vibration mode information;
406. The processor adopts the target amplification factor to amplify the amplitude of the at least one vibration mode information to obtain a target vibration video;
407. The display shows the target vibration video.
In this example, by extracting a plurality of pieces of vibration mode information of the target feature point, determining the target vibration mode information from the vibration mode information, and performing amplitude amplification processing on the target vibration mode information, the small vibration can be amplified to become vibration that can be seen by human eyes, and thus the practicability of the video display method can be improved.
In accordance with the foregoing embodiments, please refer to fig. 5, fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application, and as shown in the drawing, the terminal includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, the computer program includes program instructions, the processor is configured to call the program instructions, and the program includes instructions for performing the following steps;
Acquiring an initial vibration video of a detected object;
Determining vibration information of target feature points of the detected object from the initial vibration video, wherein the detected object comprises at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point;
According to the vibration information, a preset vibration amplification method is adopted to carry out vibration amplification processing on the initial vibration video to obtain a target vibration video;
and displaying the target vibration video.
In this example, an initial vibration video of a detected object is acquired, vibration information of a target feature point of the detected object is determined from the initial vibration video, and vibration amplification processing is performed on the initial vibration video by a preset vibration amplification method according to that vibration information to obtain and display a target vibration video. Compared with the prior art, in which vibration acquisition relies on a vibration acceleration signal sensor and multiple groups of sensors must be installed on the equipment under test, here only the vibration information of the detected object in the video needs to be acquired; the initial vibration video is amplified according to the vibration information to obtain the target vibration video, in which the vibration of the target object is amplified and displayed. Therefore, the vibration in the vibration video can be amplified and displayed according to the vibration information, and the cost of test analysis can be reduced to a certain extent.
the above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the terminal includes corresponding hardware structures and/or software modules for performing the respective functions in order to implement the above-described functions. Those of skill in the art will readily appreciate that the present application is capable of hardware or a combination of hardware and computer software implementing the various illustrative elements and algorithm steps described in connection with the embodiments provided herein. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the terminal may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
in accordance with the above, please refer to fig. 6, fig. 6 is a schematic structural diagram of a vibration augmented reality testing apparatus according to an embodiment of the present application. The device is applied to a testing device, the testing device comprises a camera, a display and a processor, the camera, the display and the processor are coupled, the vibration augmented reality testing device comprises an acquisition unit 601, a determination unit 602, an amplification unit 603 and a display unit 604, wherein,
the acquiring unit 601 is configured to acquire an initial vibration video of a detected object, where the detected object includes a sunroof component;
The determining unit 602 is configured to determine, from the initial vibration video, vibration information of a target feature point of the sunroof component, where the detected object includes at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point;
the amplifying unit 603 is configured to perform vibration amplification processing on the initial vibration video by using a preset vibration amplification method according to the vibration information to obtain a target vibration video of the sunroof component;
the display unit 604 is configured to display the target vibration video.
optionally, the initial vibration video includes a plurality of video frames, a target feature point in a target video frame includes at least one target pixel point, and the target video frame is any one of the plurality of video frames;
optionally, the initial vibration video includes a plurality of video frames, and in terms of determining vibration information of the target feature point of the detected object from the initial vibration video, the determining unit 602 is configured to:
the processor determines a reference vibration area of the sunroof component;
the processor randomly determining at least one reference video frame from the plurality of video frames;
The processor determines a reference characteristic point according to the at least one reference video frame, wherein the reference characteristic point is a point at which the environment image of the detected object is relatively static in the video acquisition process;
The processor determines the target feature point from the reference vibration region according to the reference feature point, the initial vibration video and a preset feature point obtaining method, wherein the target feature point comprises at least one pixel point;
the processor acquires brightness information of the at least one target pixel point from the plurality of video frames;
The processor determines a plurality of brightness change factors of each target pixel point in the at least one target pixel point according to the brightness information;
the processor determines a target brightness change factor of each target characteristic pixel point from a plurality of brightness change factors of each target pixel point by adopting a preset brightness factor determination method;
and the processor determines the vibration information of the target characteristic points according to the target brightness change factors of each target characteristic pixel point.
Optionally, the initial vibration video includes a plurality of video frames acquired during the sliding process of the sunroof, and in the aspect of determining the target feature point from the reference vibration region according to the reference feature point, the initial vibration video, and a preset feature point acquisition method, the determining unit 602 is configured to:
The processor divides the video frames into a plurality of video frame sets according to the sliding stroke of the automobile skylight component, each video frame set comprises at least one video frame, each video frame set corresponds to a section of sliding track of the automobile skylight, and the directions of the sliding tracks corresponding to different video frame sets are different;
the processor performs frame skipping detection on each video frame set, and determines a video frame set needing vibration feature point screening;
And the processor determines the target characteristic point from the reference vibration area according to the video frame in the video frame set, the reference characteristic point and a preset characteristic point acquisition method.
Optionally, in the aspect of determining the reference vibration region of the sunroof apparatus, the determining unit 602 is configured to:
the processor acquires a vibration audio frequency of the automobile skylight component;
and the processor determines a reference vibration area of the automobile skylight component according to the vibration audio.
Optionally, in the aspect of determining the reference vibration region of the sunroof apparatus, the determining unit 602 is configured to:
The processor queries a preset skylight abnormal sound database to obtain at least one reference position of skylight abnormity;
And the processor determines a reference vibration area of the automobile skylight component according to the at least one reference position.
Optionally, the vibration augmented reality testing apparatus is further configured to:
The processor determines a reference damaged area of the automobile skylight component according to the reference vibration area;
The processor determines a target damaged area from the reference damaged area according to the vibration information of the target characteristic points;
The processor determines a target component and damage information of the target component from the target damaged area by adopting a preset component damage determination method, wherein the target component is a damaged component in the automobile skylight component;
The display shows the damage information.
embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the vibration augmented reality testing methods as described in the above method embodiments.
embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program causes a computer to execute some or all of the steps of any one of the vibration augmented reality testing methods as described in the above method embodiments.
it should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
in addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory comprises: various media capable of storing program codes, such as a usb disk, a read-only memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and the like.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, read-only memory, random access memory, magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. a vibration augmented reality testing method is applied to a testing device, the testing device comprises a camera, a display and a processor, the camera, the display and the processor are coupled and connected, and the method comprises the following steps:
the camera acquires an initial vibration video of a detected object, wherein the detected object comprises an automobile skylight component;
The processor determines vibration information of target feature points of the automobile skylight component from the initial vibration video, the detected object comprises at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point;
The processor performs vibration amplification processing on the initial vibration video by adopting a preset vibration amplification method according to the vibration information to obtain a target vibration video of the automobile skylight component;
The display shows the target vibration video.
2. The method of claim 1, wherein the initial vibration video comprises a plurality of video frames, and the determining, by the processor, vibration information of the target feature point of the detected object from the initial vibration video comprises:
the processor determines a reference vibration area of the automobile sunroof component;
the processor randomly determines at least one reference video frame from the plurality of video frames;
the processor determines a reference feature point according to the at least one reference video frame, wherein the reference feature point is a point in the environment image of the detected object that remains relatively static during video acquisition;
the processor determines the target feature point from the reference vibration area according to the reference feature point, the initial vibration video and a preset feature point acquisition method, wherein the target feature point comprises at least one target pixel point;
the processor acquires brightness information of the at least one target pixel point from the plurality of video frames;
the processor determines a plurality of brightness change factors of each target pixel point of the at least one target pixel point according to the brightness information;
the processor determines a target brightness change factor of each target pixel point from the plurality of brightness change factors of that target pixel point by using a preset brightness factor determination method; and
the processor determines the vibration information of the target feature point according to the target brightness change factor of each target pixel point.
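The brightness bookkeeping in claim 2 can be sketched as follows. The claim does not fix how the "brightness change factors" are computed; frame-to-frame brightness differences are one plausible reading, used here purely for illustration (function and variable names are hypothetical):

```python
import numpy as np

def brightness_change_factors(frames, pixels):
    """Collect per-pixel brightness over time and derive change factors.

    frames: (T, H, W) grayscale video; pixels: list of (row, col)
    coordinates of the target feature point's pixel points.  Returns
    an array of shape (len(pixels), T-1) holding, for each pixel, its
    frame-to-frame brightness differences ("change factors").
    """
    frames = np.asarray(frames, dtype=np.float64)
    series = np.stack([frames[:, r, c] for r, c in pixels])  # (P, T)
    return np.diff(series, axis=1)                           # (P, T-1)
```

Each row of the result is the per-frame brightness-change series of one target pixel point, from which a target brightness change factor can then be selected as in claim 7.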
3. The method of claim 2, wherein the initial vibration video comprises a plurality of video frames acquired while the sunroof slides, and the determining, by the processor, the target feature point from the reference vibration area according to the reference feature point, the initial vibration video and the preset feature point acquisition method comprises:
the processor divides the plurality of video frames into a plurality of video frame sets according to the sliding stroke of the automobile sunroof component, wherein each video frame set comprises at least one video frame, each video frame set corresponds to one segment of the sliding track of the sunroof, and the sliding-track segments corresponding to different video frame sets differ in direction;
the processor performs frame-skipping detection on each video frame set and determines the video frame sets that require vibration feature point screening; and
the processor determines the target feature point from the reference vibration area according to the video frames in the determined video frame sets, the reference feature point and the preset feature point acquisition method.
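The per-direction frame sets of claim 3 can be illustrated with a small partitioning routine. It assumes a signed frame-to-frame displacement of the sunroof has already been estimated (the claim does not specify how); the helper name and the sign-based grouping are assumptions for illustration:

```python
def split_by_direction(displacements):
    """Partition frame indices into runs of constant sliding direction.

    displacements[i] is the sunroof's signed displacement between
    frame i and frame i+1.  Consecutive frames whose displacements
    share a sign form one set, mirroring the claim's video frame sets
    whose sliding-track segments differ in direction.
    """
    sets, current = [], [0]
    for i in range(1, len(displacements)):
        same = (displacements[i] >= 0) == (displacements[i - 1] >= 0)
        if same:
            current.append(i)
        else:
            sets.append(current)
            current = [i]
    sets.append(current)
    return sets
```

For example, a stroke that opens, closes, then starts opening again yields three sets, one per direction segment.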
4. The method of claim 3, wherein the determining, by the processor, a reference vibration area of the automobile sunroof component comprises:
the processor acquires vibration audio of the automobile sunroof component; and
the processor determines the reference vibration area of the automobile sunroof component according to the vibration audio.
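Claim 4 leaves open how the vibration audio maps to a reference vibration area. One plausible first step is extracting the dominant abnormal-sound frequency from the audio spectrum, which could then be matched against known resonance frequencies of sunroof sub-components; only the frequency-extraction step is sketched here, and the function name is hypothetical:

```python
import numpy as np

def dominant_vibration_frequency(audio, sample_rate):
    """Estimate the dominant frequency of the recorded vibration audio.

    The component-to-frequency lookup that would turn this into a
    reference vibration area is application-specific and not sketched.
    """
    audio = np.asarray(audio, dtype=np.float64)
    spectrum = np.abs(np.fft.rfft(audio))
    spectrum[0] = 0.0  # ignore the DC offset
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    return freqs[int(np.argmax(spectrum))]
```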
5. The method of claim 3, wherein the determining, by the processor, a reference vibration area of the automobile sunroof component comprises:
the processor queries a preset sunroof abnormal-sound database to obtain at least one reference position of a sunroof abnormality; and
the processor determines the reference vibration area of the automobile sunroof component according to the at least one reference position.
6. The method of any one of claims 2 to 5, further comprising:
the processor determines a reference damaged area of the automobile sunroof component according to the reference vibration area;
the processor determines a target damaged area from the reference damaged area according to the vibration information of the target feature point;
the processor determines, from the target damaged area by using a preset component damage determination method, a target component and damage information of the target component, wherein the target component is a damaged component of the automobile sunroof component; and
the display displays the damage information.
7. The method of claim 2 or 3, wherein the determining, by the processor, the target brightness change factor of each target pixel point from the plurality of brightness change factors by using the preset brightness factor determination method comprises:
the processor performs principal feature analysis on the brightness change factors of the at least one target pixel point by using a principal component decomposition method, to obtain a reference brightness change factor that satisfies a preset characteristic; and
the processor filters the reference brightness change factor to obtain the target brightness change factor.
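The principal component decomposition step of claim 7 admits a compact sketch: stack the per-pixel change series into a matrix, take its leading principal component as the reference brightness change factor, and accept it only if it satisfies a preset characteristic (here, an explained-variance threshold, which is an assumption; the claim does not define the characteristic). Names are hypothetical:

```python
import numpy as np

def principal_change_factor(factors, variance_threshold=0.9):
    """Reduce many per-pixel brightness-change series to one via PCA.

    factors: (P, T) matrix, one brightness-change series per pixel.
    The first principal component serves as the reference change
    factor; it is accepted only when it explains at least
    variance_threshold of the total variance across pixels.
    """
    X = factors - factors.mean(axis=1, keepdims=True)
    # SVD of the centred data: each row is one pixel's series.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s[0] ** 2 / np.sum(s ** 2)
    if explained < variance_threshold:
        return None  # no single dominant component
    return Vt[0] * s[0]  # reference change factor over time
```

A subsequent temporal filter (the final step of claim 7) would then be applied to the returned series to obtain the target brightness change factor.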
8. A vibration augmented reality testing apparatus, applied to a testing device, wherein the testing device comprises a camera, a display and a processor that are coupled to one another, and the vibration augmented reality testing apparatus comprises an acquisition unit, a determination unit, an amplification unit and a display unit, wherein:
the acquisition unit is configured to acquire an initial vibration video of a detected object, wherein the detected object comprises an automobile sunroof component;
the determination unit is configured to determine vibration information of a target feature point of the automobile sunroof component from the initial vibration video, wherein the detected object comprises at least one vibration feature point, and the target feature point is any one of the at least one vibration feature point;
the amplification unit is configured to perform vibration amplification processing on the initial vibration video by using a preset vibration amplification method according to the vibration information, to obtain a target vibration video of the automobile sunroof component; and
the display unit is configured to display the target vibration video.
9. A terminal, comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-6.
CN201910819912.4A 2019-08-31 2019-08-31 Intelligent vibration augmented reality testing method and related product Active CN110553714B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910819912.4A CN110553714B (en) 2019-08-31 2019-08-31 Intelligent vibration augmented reality testing method and related product
PCT/CN2020/105796 WO2021036672A1 (en) 2019-08-31 2020-07-30 Intelligent vibration augmented reality test method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910819912.4A CN110553714B (en) 2019-08-31 2019-08-31 Intelligent vibration augmented reality testing method and related product

Publications (2)

Publication Number Publication Date
CN110553714A true CN110553714A (en) 2019-12-10
CN110553714B CN110553714B (en) 2022-01-14

Family

ID=68738775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910819912.4A Active CN110553714B (en) 2019-08-31 2019-08-31 Intelligent vibration augmented reality testing method and related product

Country Status (2)

Country Link
CN (1) CN110553714B (en)
WO (1) WO2021036672A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112161700A (en) * 2020-09-18 2021-01-01 安徽江淮汽车集团股份有限公司 Method and device for checking up lifting noise of car window glass
WO2021036641A1 (en) * 2019-04-26 2021-03-04 深圳市豪视智能科技有限公司 Coupling mismatch detection method and related device
WO2021036672A1 (en) * 2019-08-31 2021-03-04 深圳市广宁股份有限公司 Intelligent vibration augmented reality test method and related product
CN113467877A (en) * 2021-07-07 2021-10-01 安徽容知日新科技股份有限公司 Data display system and method
TWI804405B (en) * 2022-08-04 2023-06-01 友達光電股份有限公司 Vibration detection method and vibration detection device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0756933A2 (en) * 1995-08-01 1997-02-05 Canon Kabushiki Kaisha Method and apparatus for manufacturing color filter, display device using the color filter, and electronic equipment comprising the display device
JP4439852B2 (en) * 1999-05-11 2010-03-24 株式会社半導体エネルギー研究所 Contact type area sensor
CN203552042U (en) * 2013-11-13 2014-04-16 柳州职业技术学院 Automobile abnormal sound fault self-diagnosis system
CN104215419A (en) * 2014-09-18 2014-12-17 重庆长安汽车股份有限公司 Test system and test method for abnormal sound of automobile panoramic sunroof shade curtain
CN105651377A (en) * 2016-01-11 2016-06-08 衢州学院 Video data mining-based non-contact object vibration frequency measurement method
CN205573881U (en) * 2016-05-10 2016-09-14 张杰明 Car intelligence security system
CN107068164A (en) * 2017-05-25 2017-08-18 北京地平线信息技术有限公司 Acoustic signal processing method, device and electronic equipment
CN107222529A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, WEB modules, terminal and cloud server
CN108225537A (en) * 2017-11-21 2018-06-29 华南农业大学 A kind of contactless small items vibration measurement method based on high-speed photography
CN108492352A (en) * 2018-03-22 2018-09-04 腾讯科技(深圳)有限公司 Implementation method, device, system, computer equipment and the storage medium of augmented reality
CN108960091A (en) * 2018-06-20 2018-12-07 深圳市科迈爱康科技有限公司 Monitoring system, method, readable storage medium storing program for executing and automobile
CN109062535A (en) * 2018-07-23 2018-12-21 Oppo广东移动通信有限公司 Sounding control method, device, electronic device and computer-readable medium
CN109520690A (en) * 2018-10-30 2019-03-26 西安交通大学 A kind of rotary machine rotor Mode Shape global measuring device and method based on video

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009103637A (en) * 2007-10-25 2009-05-14 Hard Giken Kogyo Kk Noise stethoscope
CN104089697B (en) * 2014-07-08 2017-02-15 安徽常春藤光电智能科技有限公司 Real-time online visual vibration measurement method based on thread pool concurrent technology
CN104048744B (en) * 2014-07-08 2017-03-08 安徽常春藤光电智能科技有限公司 A kind of contactless real-time online vibration measurement method based on image
CN105424350B (en) * 2015-12-19 2017-10-31 湖南科技大学 Thin-wall part mode testing method and system based on machine vision
US10502615B2 (en) * 2016-07-12 2019-12-10 Mitsubishi Electric Corporation Diagnostic apparatus and diagnostic system
CN108593087A (en) * 2018-03-29 2018-09-28 湖南科技大学 A kind of thin-wall part operational modal parameter determines method and system
CN108731788B (en) * 2018-05-22 2020-06-09 河海大学常州校区 Visual detection device and method for low-frequency vibration of aerial work arm
CN109341847A (en) * 2018-09-25 2019-02-15 东莞青柳新材料有限公司 A kind of Vibration-Measuring System of view-based access control model
CN110068388A (en) * 2019-03-29 2019-07-30 南京航空航天大学 A kind of method for detecting vibration of view-based access control model and blind source separating
CN110108348B (en) * 2019-05-15 2021-04-23 湖南科技大学 Thin-wall part micro-amplitude vibration measurement method and system based on motion amplification optical flow tracking
CN110553714B (en) * 2019-08-31 2022-01-14 深圳市广宁股份有限公司 Intelligent vibration augmented reality testing method and related product


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHEXIONG SHANG et al.: "Multi-point vibration measurement and mode magnification of civil structures using video-based motion processing", Automation in Construction *
李乐鹏 et al.: "视频微小运动放大的加速方法" (Acceleration method for magnifying subtle motions in video), Computer Engineering and Applications *


Also Published As

Publication number Publication date
CN110553714B (en) 2022-01-14
WO2021036672A1 (en) 2021-03-04

Similar Documents

Publication Publication Date Title
CN110553714B (en) Intelligent vibration augmented reality testing method and related product
TWI754855B (en) Method and device, electronic equipment for face image recognition and storage medium thereof
Yang et al. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements
JP7265003B2 (en) Target detection method, model training method, device, apparatus and computer program
WO2020155907A1 (en) Method and apparatus for generating cartoon style conversion model
WO2020119026A1 (en) Image processing method and apparatus, electronic device and storage medium
Matarazzo et al. Scalable structural modal identification using dynamic sensor network data with STRIDEX
CN111476871B (en) Method and device for generating video
US10073908B2 (en) Functional space-time trajectory clustering
CN112766189B (en) Deep forgery detection method and device, storage medium and electronic equipment
CN110595745B (en) Detection method for abnormality of fixing screw of equipment and related product
CN109672853A (en) Method for early warning, device, equipment and computer storage medium based on video monitoring
CN109154938B (en) Classifying entities in a digital graph using discrete non-trace location data
Ghadiyaram et al. A time-varying subjective quality model for mobile streaming videos with stalling events
TWI761787B (en) Method and system for predicting dynamical flows from control inputs and limited observations
CN112153460B (en) Video dubbing method and device, electronic equipment and storage medium
CN112037223B (en) Image defect detection method and device and electronic equipment
CN106297184A (en) The monitoring method of mobile terminal surrounding, device and mobile terminal
CN108387757B (en) Method and apparatus for detecting moving state of movable device
CN110595603B (en) Video-based vibration analysis method and related product
Heiß et al. In-distribution interpretability for challenging modalities
CN111310595B (en) Method and device for generating information
CN113052198A (en) Data processing method, device, equipment and storage medium
WO2023016290A1 (en) Video classification method and apparatus, readable medium and electronic device
CN113706663B (en) Image generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant