CN116754654A - Sound barrier detection method and device based on image analysis model - Google Patents



Publication number
CN116754654A
CN116754654A (application CN202311049569.2A; granted publication CN116754654B)
Authority
CN
China
Prior art keywords
sound barrier
sound
noise
spectrogram
image analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311049569.2A
Other languages
Chinese (zh)
Other versions
CN116754654B (en)
Inventor
陈卓
尹虎
鄂治群
孙磊
蒋俊
崔波
王卫敏
王磊
郭超
廖建州
陈锋
李绍富
王宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan China Railway Second Institute Environmental Technology Co ltd
Sichuan Shudao Railway Operation And Maintenance Co ltd
Chengdu Zhonghong Rail Transit Environmental Protection Industry Co ltd
Original Assignee
Sichuan China Railway Second Institute Environmental Technology Co ltd
Sichuan Shudao Railway Operation And Maintenance Co ltd
Chengdu Zhonghong Rail Transit Environmental Protection Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan China Railway Second Institute Environmental Technology Co ltd, Sichuan Shudao Railway Operation And Maintenance Co ltd, and Chengdu Zhonghong Rail Transit Environmental Protection Industry Co ltd
Priority to CN202311049569.2A
Publication of CN116754654A
Application granted
Publication of CN116754654B
Active legal status
Anticipated expiration


Classifications

    • G10L25/51 Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/27 Speech or voice analysis techniques characterised by the analysis technique
    • G01N29/11 Analysing solids by measuring attenuation of acoustic waves
    • G01N29/44 Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G06N3/0475 Generative networks
    • G06N3/094 Adversarial learning
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20216 Image averaging
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • Y02T90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

The application provides a sound barrier detection method and device based on an image analysis model. The system combines a conventional acoustic imaging device with a hybrid-structure generative adversarial network that automatically extracts speckle-noise features and removes speckle and other noise in a targeted manner. It effectively eliminates electronic noise, speckle noise, and similar artifacts, yielding a high-quality spectrogram and improving acoustic imaging analysis capability. The approach overcomes the low imaging sensitivity, low time resolution, and heavy data-processing load of conventional speckle-modulation acoustic imaging systems, markedly improves the imaging capability of the acoustic imaging system, and enables efficient sound barrier detection through analysis of high-sensitivity spectrograms.

Description

Sound barrier detection method and device based on image analysis model
Technical Field
The application belongs to the technical field of sound barrier monitoring and improvement, and in particular relates to a sound barrier detection method and device based on an image analysis model.
Background
With rapid economic development, urban highways and viaducts have become increasingly common as a way to relieve traffic problems. Because many highways and viaducts are built or upgraded in already developed areas, the noise they generate can seriously disturb the normal work and life of occupants of commercial buildings and residential communities on both sides of the road. One effective way to manage traffic noise is to install sound insulation barriers along both sides of the highway; various types of sound barriers have appeared in recent years and have achieved a measurable noise reduction effect.
Existing high-speed railway sound barriers are inspected mainly by manual means such as visual inspection, telescopes, plumb bobs, steel tape measures, hammer tapping, torque wrenches, and feeler gauges, which is inefficient. The outer side of the sound barrier is inspected and maintained by prying with a crowbar and similar tools; limited by these inspection methods and means, inspecting and maintaining the outer side of the sound barrier is very difficult, labor-intensive, and highly inefficient.
Disclosure of Invention
The application aims to overcome the defects of the prior art and provides a sound barrier detection method and device based on an image analysis model.
The aim of the application is realized by the following technical scheme:
A sound barrier detection method based on an image analysis model comprises the following steps:
Noise prediction points are arranged at both ends of the sound barrier and at the protected sensitive points; a sound spectrogram of the sound barrier is collected in the absence of noise, and a noise reduction coefficient is determined;
A noisy sound barrier spectrogram within the noise reduction coefficient range is collected at the same noise prediction points;
The noisy sound barrier spectrogram is trained against the noise-free sound barrier spectrogram to determine a sound barrier detection threshold. The noise reduction coefficient is the ratio by which the sound barrier reduces noise; the sound barrier detection threshold is used to determine, from the acoustic vibration, whether the sound barrier is damaged; the noise reduction coefficient can also be used to reject unstable values;
The noise reduction coefficient of the sound barrier is calculated as follows:
Taking the sound barrier and the centerline of the lane as the sound source line, a noise contribution value is calculated from the distance between the prediction point and the sound source line, and a plurality of noise contribution values are superposed to give the noise reduction coefficient of that prediction point;
Whether the sound barrier is faulty is detected in real time according to the detection threshold;
The noise-free sound barrier spectrogram and the noisy sound barrier spectrogram are acquired intermittently; before each detection, the noisy spectrogram is acquired and compared with the sound barrier detection threshold.
Further, the noisy sound barrier spectrogram comprises a plurality of spectrogram data points. The sound barrier detection threshold is compared with each data point, and the differences fall into normal differences and obvious differences: normal data are discarded, and obvious differences are recorded. When an obvious difference is greater than or equal to the noise reduction coefficient, the corresponding spectrogram data point is recorded as a fault point.
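The comparison rule for marking fault points can be sketched as follows (a minimal illustration; the variable names and the use of an absolute difference are assumptions):

```python
def find_fault_points(spectrogram_values, detection_threshold, noise_reduction_coeff):
    # Compare each spectrogram data point with the detection threshold.
    # Differences below the coefficient are treated as normal and discarded;
    # obvious differences (>= the noise reduction coefficient) mark the
    # data point as a fault point.
    fault_points = []
    for index, value in enumerate(spectrogram_values):
        difference = abs(value - detection_threshold)
        if difference >= noise_reduction_coeff:
            fault_points.append(index)
    return fault_points

faults = find_fault_points([1.0, 5.0, 1.2, 6.5], detection_threshold=1.0,
                           noise_reduction_coeff=3.0)
```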
The protected sensitive point is determined from the highest point of the sound barrier, the sound source, and the sound reflection point: it is located at the center of the triangle formed by these three points.
Further, the method comprises a training step, implemented as follows:
S1: temporarily modify a conventional acoustic imaging system so that it has speckle-modulation acoustic imaging capability;
S2: perform speckle-modulation acoustic imaging with the system obtained in step S1, acquiring 100 spectrograms containing a large amount of speckle noise at each sample position;
S3: average the 100 speckle-noise images obtained in step S2 to obtain a speckle-free training reference image;
S4: pair the speckle-noise images with the corresponding speckle-free training reference images to form a data set; set 80% of the data set as the training set and the remaining 20% as the test set;
S5: construct a hybrid-structure generative adversarial network (GAN);
S6: train the hybrid-structure GAN with the data set produced in step S4 to obtain a trained GAN;
S7: combine the trained hybrid-structure GAN with the conventional acoustic imaging system to obtain a deep learning speckle-modulation acoustic imaging system that automatically removes speckle and other noise and improves imaging resolution.
Further, the hybrid-structure generative adversarial network of step S5 is constructed as follows:
S51: normalize the training set and test set image data with a normalization formula;
S52: construct the generator as a U-Net formed by an encoder and a decoder built from densely connected networks; an image is downsampled by the encoder and then upsampled by the decoder to produce the output image;
S53: construct the discriminator; the image output by the generator is input to the discriminator, which outputs a real/fake probability value;
S54: the generator parameter optimization objective function is designed as:
Equation 1: L_G = α · L_mse + β · L_perc
where L_mse is the pixel mean-square loss, L_perc is the perceptual loss, and α and β are the coefficients of the pixel mean-square loss and the perceptual loss. L_mse and L_perc are defined by:
Equation 2: L_mse = (1 / (W · H · C)) · Σ (G(x) − y)²
Equation 3: L_perc = (1 / (W · H · C)) · Σ (φ(G(x)) − φ(y))²
where G(x) denotes the output of the generator, y denotes the reference image, φ(·) denotes the output of VGG-19 network feature extraction, and W, H, and C are the width, height, and number of channels of the image, respectively;
S55: update the network parameters of the generator and the discriminator.
The above method is implemented by a sound barrier detection device based on an image analysis model, comprising:
sound vibration sensors, arranged at both ends of the sound barrier and at the protected sensitive points, for acquiring the noise-free sound barrier spectrogram and the noisy sound barrier spectrogram;
a first processing unit, for training the noisy sound barrier spectrogram against the noise-free sound barrier spectrogram and determining the sound barrier detection threshold;
and a second processing unit, for comparing the noisy sound barrier spectrogram with the sound barrier detection threshold.
On the other hand, a signal processing system is provided,
the signal processing system includes one or more memories for storing instructions; and
one or more processors configured to invoke and execute the instructions from the memory to perform a sound barrier detection method based on an image analysis model.
Further, a computer-readable storage medium is provided,
the computer-readable storage medium includes a program which, when executed by a processor, performs a sound barrier detection method based on an image analysis model.
The present application provides a computer program product comprising program instructions which, when executed by a computing device, perform a method as described in the first aspect and any possible implementation of the first aspect.
The present application further provides a chip system comprising a processor for performing the functions involved in the above aspects, e.g., generating, receiving, transmitting, or processing data and/or information involved in the above methods.
The chip system may consist of a chip alone, or may comprise a chip together with other discrete devices.
In one possible design, the system on a chip also includes memory to hold the necessary program instructions and data. The processor and the memory may be decoupled, provided on different devices, respectively, connected by wire or wirelessly, or the processor and the memory may be coupled on the same device.
The beneficial effects of the application are as follows:
1) The hardware and structure of a conventional sound barrier need not be changed; a high-quality spectrogram with speckle and other noise removed can be obtained simply by integrating a hybrid-structure deep learning network.
2) The application improves the resolving capability for acoustically imaged microstructures; the proposed deep learning method removes speckle and other noise and resolves the small but important microstructures covered by speckle noise.
3) The application avoids repeated scanning, improves the time resolution of the system, and reduces the data-processing load.
4) The application avoids additional input power loss at the acoustic imaging sample arm and maintains the imaging sensitivity of a conventional acoustic imaging system.
Drawings
FIG. 1 is a flowchart showing steps of a sound barrier detection method based on an image analysis model according to the present application;
FIG. 2 is a schematic illustration of a sound spectrum of the present application;
FIG. 3 is a schematic diagram illustrating a sound barrier gap detection in an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating vibration of a sound barrier gap in an embodiment of the present application;
fig. 5 is a schematic diagram illustrating noise reduction of a sound barrier according to an embodiment of the application.
Detailed Description
In the following, the technical solution of the present application will be described clearly and completely with reference to the embodiments. The described embodiments are obviously only some, and not all, of the embodiments of the application; in some cases other deep learning networks, such as a residual GAN, a densely connected GAN, or a U-Net, may be integrated instead and may also give good imaging results. All other embodiments obtained by a person skilled in the art without inventive effort, based on the embodiments of the present application, fall within the scope of the present application.
Referring to fig. 1-5, the present application provides a technical solution, a sound barrier detection method based on an image analysis model.
To further illustrate the application, this embodiment temporarily constructs a speckle-modulation acoustic imaging system for acquiring images containing large amounts of speckle noise together with their corresponding speckle-free reference images. The light source is a broadband source with a central wavelength of 850 nm and a full width at half maximum of 165 nm; the output beam is split 50:50 between the sample arm and the reference arm. The interference light enters a 2048-pixel spectrometer, and the signals detected by the spectrometer are transmitted to a computer. In the experiments, the spectrometer and the two-dimensional scanning galvanometer were synchronized by a computer-generated trigger signal. The setup also includes two identical collimators and two polarization controllers. To obtain different speckle noise, 4x, 10x, and 20x objectives were used in turn as the imaging objective, and an optical diffuse reflector was moved. Sound barriers made of various materials were also imaged, increasing the richness and diversity of the data set.
The data set produced by the above method is used as input to train the generative adversarial network. Before training, the optimizer is set to the Adam optimizer with a fixed learning rate; the batch size and the number of training steps are set to 1 and 350,000, respectively, to obtain the optimal model parameters. The coefficients of the pixel mean-square loss and the perceptual loss in the generator objective function are set to 1 and 0.1, respectively, and the discriminator uses the cross-entropy loss function as its objective function.
After the adversarial training is completed, the optimal model parameters are obtained, and the trained model and its parameters are integrated into the conventional acoustic imaging acquisition system, yielding the deep learning speckle-modulation imaging system based on a hybrid-structure generative adversarial network. To demonstrate that, compared with a conventional acoustic imaging system, the application automatically removes electronic noise, speckle noise, and similar artifacts and has strong resolving capability for microstructures, this embodiment performed imaging experiments on transparent adhesive tape and on pork tissue both without and with the application.
Fig. 4 shows a comparison of the sound spectrograms acquired from the sound barrier without and with the present application. The images obtained without the application contain a large amount of speckle noise that severely degrades spectrogram quality and hides the details and microstructure in the image, as shown in the enlarged view in the box of Fig. 4. The spectrogram obtained with the application removes speckle noise, electronic noise, and other artifacts, improves the signal-to-noise ratio and contrast of the image, and resolves and restores the microstructure originally covered by the speckle noise. For the micro-cracks present in the sound barrier (box of Fig. 4), the structural morphology of the cracks becomes observable after the application is used.
To verify the high time resolution of the application, this example further used it to continuously acquire 800 frames of spectrograms, timing how long the system took to obtain 800 high-quality frames with speckle and other noise removed. The experiments show that acquiring 800 denoised spectrogram frames took 115.8 seconds, an average of about 144.7 milliseconds per frame, demonstrating the high time resolution of the application.
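The per-frame figure quoted above follows directly from the totals:

```python
total_seconds = 115.8    # time to acquire 800 denoised spectrogram frames
frames = 800
per_frame_ms = total_seconds / frames * 1000.0   # about 144.7-144.8 ms per frame
```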
In one example, the units in any of the above apparatuses may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), one or more field-programmable gate arrays (FPGA), or a combination of at least two of these integrated circuit forms.
For another example, when the units in the apparatus are implemented by scheduling a program on a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke the program. For another example, the units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Various objects such as messages, information, devices, network elements, systems, apparatuses, actions, operations, and processes may be named in the present application. It should be understood that these specific names do not limit the related objects; the names may change according to scenario, context, or usage habit, and the technical meaning of terms in the present application should be determined mainly from the functions and technical effects they embody in the technical solution.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should also be understood that in the various embodiments of the present application, the terms first, second, etc. are merely intended to distinguish multiple objects. For example, the first time window and the second time window merely denote different time windows and have no effect on the time windows themselves; such ordinal terms should not impose any limitation on the embodiments of the present application.
It is also to be understood that in the various embodiments of the application, where no special description or logic conflict exists, the terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product. The software product is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned computer-readable storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
In another aspect, a signal processing system is provided,
the signal processing system includes one or more memories for storing instructions; and
one or more processors configured to invoke and execute the instructions from the memory to perform a sound barrier detection method based on an image analysis model.
Further, a computer-readable storage medium is provided,
the computer-readable storage medium includes a program which, when executed by a processor, performs a sound barrier detection method based on an image analysis model.
The present application further provides a computer program product and a chip system. The chip system includes a processor for performing the functions involved in the above methods, such as generating, receiving, transmitting, or processing the data and/or information involved in those methods.
The chip system may consist of a chip, or may include a chip together with other discrete devices.
The processor referred to in any of the foregoing may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the programs of the foregoing methods.
In one possible design, the chip system also includes a memory for holding the necessary program instructions and data. The processor and the memory may be decoupled, disposed on different devices, and connected in a wired or wireless manner, so as to support the chip system in implementing the various functions of the foregoing embodiments. Alternatively, the processor and the memory may be coupled on the same device.
Optionally, the computer instructions are stored in a memory.
Alternatively, the memory may be a storage unit in the chip, such as a register or a cache; the memory may also be a storage unit in the terminal located outside the chip, such as a ROM or another type of static storage device capable of storing static information and instructions, or a RAM.
It will be appreciated that the memory in the present application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
The nonvolatile memory may be a ROM, a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory.
The volatile memory may be RAM, which acts as an external cache. There are many different types of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The foregoing is merely a preferred embodiment of the application. The application is not limited to the form disclosed herein, nor should this be construed as excluding other embodiments; it is capable of use in numerous other combinations, modifications, and environments, and can be changed within the scope of the inventive concept through the teachings above or the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the application are intended to fall within the scope of the appended claims.

Claims (10)

1. A sound barrier detection method based on an image analysis model, characterized by comprising the following steps:
arranging noise prediction points at the two ends of the sound barrier and at the protected sensitive points, collecting a sound spectrogram of the sound barrier when no noise is present, and determining a noise reduction coefficient;
collecting, at the same noise prediction points, a noisy sound barrier spectrogram within the range of the noise reduction coefficient;
training on the noisy sound barrier spectrogram using the sound spectrogram of the sound barrier when no noise is present, and determining a sound barrier detection threshold;
detecting in real time whether the sound barrier is faulty according to the detection threshold;
wherein the noiseless sound barrier spectrogram and the noisy sound barrier spectrogram are acquired intermittently, and before each detection the noisy sound barrier spectrogram is acquired and compared with the sound barrier detection threshold.
2. The sound barrier detection method based on an image analysis model as claimed in claim 1, wherein:
the protected sensitive point is determined from the highest point of the sound barrier, the sound source, and the sound reflection point, and is located at the center point of the triangle formed by these three points.
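Claim 2 locates the protected sensitive point at the center of the triangle formed by the barrier's highest point, the sound source, and the sound reflection point. A minimal sketch of that geometric construction; the function name and the sample coordinates are illustrative assumptions, not taken from the claim:

```python
# Hypothetical helper for claim 2: the protected sensitive point as the
# centroid of the triangle formed by the barrier's highest point, the
# sound source, and the sound reflection point.
def sensitive_point(highest, source, reflection):
    """Return the centroid of the triangle defined by the three points."""
    return tuple(
        (h + s + r) / 3.0 for h, s, r in zip(highest, source, reflection)
    )

# Illustrative 2D coordinates in metres: barrier top, source, reflection point.
print(sensitive_point((0.0, 6.0), (3.0, 0.0), (6.0, 3.0)))  # -> (3.0, 3.0)
```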
3. The sound barrier detection method based on an image analysis model as claimed in claim 1, wherein the step of training on the noisy sound barrier spectrogram using the sound spectrogram of the sound barrier when no noise is present comprises:
S1: temporarily modifying a conventional acoustic imaging system so that it has speckle modulation acoustic imaging capability;
S2: carrying out speckle modulation acoustic imaging with the speckle modulation acoustic imaging system obtained in step S1, and acquiring 100 spectrograms containing a large amount of speckle noise at each sample position;
S3: averaging the 100 speckle-noise images obtained in step S2 to obtain a speckle-free training reference image;
S4: matching the speckle-noise images with the speckle-free training reference images in pairs to form a data set, using 80% of the data set as the training set and the remaining 20% as the test set;
S5: constructing a hybrid-structure generative adversarial network;
S6: training the hybrid-structure generative adversarial network with the data set produced in step S4 to obtain a trained generative adversarial network;
S7: combining the trained hybrid-structure generative adversarial network with the conventional acoustic imaging of step S1 to obtain a deep-learning speckle modulation acoustic imaging system capable of automatically removing speckle noise and improving imaging resolution.
4. The sound barrier detection method based on an image analysis model as claimed in claim 3, wherein step S5 is implemented as follows:
S51: normalizing the training-set and test-set image data using a normalization formula;
S52: constructing a generator;
S53: constructing a discriminator;
S54: designing the generator parameter optimization objective function;
S55: updating the network parameters of the generator and the discriminator.
5. The sound barrier detection method based on an image analysis model as claimed in claim 1, wherein the noise reduction coefficient of the sound barrier is calculated as follows:
taking the center lines of the sound barrier and of each lane as sound source lines, calculating a noise contribution value according to the distance between the prediction point and each sound source line, and superposing the noise contribution values to obtain the noise reduction coefficient of the prediction point.
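A minimal sketch of the superposition in claim 5: each sound source line contributes a level that attenuates with the distance to the prediction point, and the contributions are superposed energetically. The line-source attenuation model (3 dB per doubling of distance) and all numeric values are assumptions, since the claim does not specify them:

```python
# Assumed line-source model: level falls 10*log10(d/d_ref) dB with distance,
# and several dB contributions are combined by energetic summation.
import math

def contribution(level_at_ref_db, ref_dist_m, dist_m):
    """Level at dist_m for a line source whose level at ref_dist_m is known."""
    return level_at_ref_db - 10.0 * math.log10(dist_m / ref_dist_m)

def superpose(levels_db):
    """Energetic superposition of several dB contributions."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two lanes, each 70 dB at a 7.5 m reference, seen from 15 m and 30 m.
levels = [contribution(70.0, 7.5, d) for d in (15.0, 30.0)]
print(round(superpose(levels), 1))  # -> 68.8
```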
6. The sound barrier detection method based on an image analysis model as claimed in claim 1, wherein the sound barrier detection threshold is compared with the noisy sound barrier spectrogram as follows:
the noisy sound barrier spectrogram comprises a plurality of spectrogram data points; the sound barrier detection threshold is compared with each data point, distinguishing normal differences from significant differences; normal data are discarded and significant differences are recorded, and when a significant difference is greater than or equal to the noise reduction coefficient, the corresponding spectrogram data point is recorded as a fault point.
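The comparison in claim 6 can be sketched as follows: differences within normal variation are discarded, significant differences are recorded, and a point whose difference reaches the noise reduction coefficient is flagged as a fault point. The function name, the sample data, and the margin separating normal from significant differences are illustrative assumptions:

```python
# Sketch of the claim 6 comparison logic with assumed names and thresholds.
def find_fault_points(spectrogram, threshold, noise_coeff, margin=1.0):
    """Return indices whose deviation from the threshold reaches noise_coeff."""
    significant = [
        (i, abs(v - threshold))
        for i, v in enumerate(spectrogram)
        if abs(v - threshold) > margin  # discard normal (small) differences
    ]
    # Record a fault point when the significant difference >= noise coefficient.
    return [i for i, diff in significant if diff >= noise_coeff]

data = [60.2, 60.5, 67.9, 60.1, 72.4]  # illustrative spectrogram levels (dB)
print(find_fault_points(data, threshold=60.0, noise_coeff=5.0))  # -> [2, 4]
```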
7. The sound barrier detection method based on an image analysis model as claimed in claim 4, wherein:
the generator comprises an encoder and a decoder constructed from densely connected networks to form a U-Net structure; the spectrogram is downsampled by the encoder and then upsampled by the decoder to obtain the image output.
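A shape-level sketch of the claim 7 data flow: the encoder downsamples the spectrogram, the decoder upsamples it back, and a skip connection joins the two as in a U-Net. The densely connected convolutions of the actual generator are replaced here by plain pooling and repetition, so only the structure is illustrated:

```python
# Structure-only sketch of the U-Net flow in claim 7; real convolutions are
# replaced by average pooling (encoder) and nearest-neighbor repeats (decoder).
import numpy as np

def downsample(x):
    """Encoder stage: 2x2 average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Decoder stage: nearest-neighbor 2x upsampling."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

spec = np.arange(16, dtype=float).reshape(4, 4)  # stand-in spectrogram
encoded = downsample(spec)                       # 4x4 -> 2x2 bottleneck
decoded = upsample(encoded)                      # 2x2 -> 4x4
output = decoded + spec                          # skip connection, U-Net style
print(encoded.shape, output.shape)               # -> (2, 2) (4, 4)
```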
8. A sound barrier detection device based on an image analysis model, characterized by comprising:
sound vibration sensors, arranged at the two ends of the sound barrier and at the protected sensitive points, for acquiring the sound spectrogram of the sound barrier when no noise is present and the noisy sound barrier spectrogram;
a first processing unit, for training on the noisy sound barrier spectrogram using the sound spectrogram of the sound barrier when no noise is present, and determining a sound barrier detection threshold; and
a second processing unit, for comparing the noisy sound barrier spectrogram with the sound barrier detection threshold.
9. The sound barrier detection device based on an image analysis model according to claim 8, characterized by further comprising a signal processing system, wherein
the signal processing system includes one or more memories for storing instructions; and
one or more processors configured to invoke and execute the instructions from the memories to perform the sound barrier detection method based on an image analysis model as claimed in any one of claims 1 to 7.
10. The sound barrier detection device based on an image analysis model according to claim 8, characterized by further comprising a computer-readable storage medium, wherein
the computer-readable storage medium stores a program which, when executed by a processor, performs the sound barrier detection method based on an image analysis model as claimed in any one of claims 1 to 7.
CN202311049569.2A 2023-08-21 2023-08-21 Sound barrier detection method and device based on image analysis model Active CN116754654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311049569.2A CN116754654B (en) 2023-08-21 2023-08-21 Sound barrier detection method and device based on image analysis model


Publications (2)

Publication Number Publication Date
CN116754654A true CN116754654A (en) 2023-09-15
CN116754654B CN116754654B (en) 2023-12-01

Family

ID=87950090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311049569.2A Active CN116754654B (en) 2023-08-21 2023-08-21 Sound barrier detection method and device based on image analysis model

Country Status (1)

Country Link
CN (1) CN116754654B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070236894A1 (en) * 2006-03-29 2007-10-11 Ken Colby Apparatus and method for limiting noise and smoke emissions due to failure of electronic devices or assemblies
CN101593360A (en) * 2009-06-26 2009-12-02 深圳市克罗赛尔声学技术有限公司 A kind of output intent of simulated environment noise and device
CN105386415A (en) * 2015-11-16 2016-03-09 黄立 Ecological sound barrier device and greening isolating device used for municipal road
CN105423924A (en) * 2015-11-17 2016-03-23 北京交通大学 Noise barrier on-line measurement method and system
WO2017139968A1 (en) * 2016-02-15 2017-08-24 厦门嘉达环保建造工程有限公司 Hyperbolic cooling tower noise-attenuating system
US20190213990A1 (en) * 2016-08-19 2019-07-11 3M Innovative Properties Company Sound-absorbing panels comprising a core consisting of connected cells, wherein some of the cell walls have openings
CN110633499A (en) * 2019-08-19 2019-12-31 全球能源互联网研究院有限公司 Sound barrier parameter determination method and device
CN112347705A (en) * 2021-01-07 2021-02-09 中国电力科学研究院有限公司 Method and system for establishing transformer substation factory boundary noise model
CN114491771A (en) * 2022-03-04 2022-05-13 贵州省交通规划勘察设计研究院股份有限公司 Road sound barrier acoustic design simulation calculation method
CN114841878A (en) * 2022-04-27 2022-08-02 广东博迈医疗科技股份有限公司 Speckle denoising method and device for optical coherence tomography image and electronic equipment
CN116289683A (en) * 2023-03-17 2023-06-23 中交城乡建设规划设计研究院有限公司 Novel light totally-enclosed sound barrier for urban viaduct


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KINGA SZOPIŃSKA ET AL: "National legal regulations and location of noise barriers along the Polish highway", Transportation Research Part D, pages 1 - 22 *
XU WENWEN ET AL: "Prediction of the noise impact of urban expressways and simulation study of the noise reduction effect of sound barriers", Environmental Science and Technology, pages 31 - 34 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117436350A (en) * 2023-12-18 2024-01-23 中国石油大学(华东) Fracturing horizontal well pressure prediction method based on deep convolution generation countermeasure network
CN117436350B (en) * 2023-12-18 2024-03-08 中国石油大学(华东) Fracturing horizontal well pressure prediction method based on deep convolution generation countermeasure network

Also Published As

Publication number Publication date
CN116754654B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN116754654B (en) Sound barrier detection method and device based on image analysis model
Jian et al. An indirect method for bridge mode shapes identification based on wavelet analysis
BR102014001496A2 (en) method for automatically characterizing an echo contained in an ultrasonic signal, and system for automatically characterizing an echo contained in an ultrasonic signal
FR2693557A1 (en) Method and device for evaluating precipitation over an area of land.
Astone et al. The short FFT database and the peak map for the hierarchical search of periodic sources
CN107111294A (en) Use the defects detection of structural information
WO2021059909A1 (en) Data generation system, learning device, data generation device, data generation method, and data generation program
JP2016524334A (en) Wafer inspection using free form protection area
Park et al. Vision‐based natural frequency identification using laser speckle imaging and parallel computing
EP3994516B1 (en) System and method for generating an image
KR101834601B1 (en) Method and system for hybrid reticle inspection
CN111936850A (en) Surveying device, surveying system, mobile body, and surveying method
WO2013098942A1 (en) Information signal generating method
JP3612247B2 (en) Semiconductor inspection apparatus and semiconductor inspection method
CN115311532A (en) Ground penetrating radar underground cavity target automatic identification method based on ResNet network model
JP2000121539A (en) Particle monitor system and particle detection method and recording medium storing particle detection program
JP4566913B2 (en) Spike noise removal method using averaging iteration method and computer program
Liebling et al. Continuous wavelet transform ridge extraction for spectral interferometry imaging
CN112529080A (en) Image generation method based on spectral feature discrimination
Ince et al. Averaged acoustic emission events for accurate damage localization
CN113390848A (en) DCGAN spectral data expansion method
CN115967451A (en) Wireless router signal processing method and device and wireless router applying same
CN111537088B (en) Method and system for measuring effective spatial coherence distribution of dynamic light field
CN117269065A (en) Laser vibration measuring system and non-contact damage monitoring method
JP7017966B2 (en) Analytical equipment, analysis methods, programs, and storage media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant