CN213715133U - Acoustic imaging device based on edge computing platform


Info

Publication number
CN213715133U
CN213715133U (Application CN202023326837.5U)
Authority
CN
China
Prior art keywords
signal
data
omapl138
synchronous sampling
computing platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202023326837.5U
Other languages
Chinese (zh)
Inventor
何鹏举
王璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Northwestern Polytechnical University
Priority to CN202023326837.5U
Application granted
Publication of CN213715133U
Legal status: Active
Anticipated expiration


Landscapes

  • Circuit For Audible Band Transducer (AREA)
  • Studio Devices (AREA)

Abstract

The utility model provides an acoustic imaging device based on an edge computing platform, belonging to the field of acoustic imaging. The device comprises an information acquisition terminal, an OMAPL138 development board, and a data display server. The OMAPL138 development board is provided with a DSP and an ARM processor; a signal acquisition and transmission module and a signal processing module are arranged in the ARM processor. The information acquisition terminal comprises a camera and a microphone sensor array, which respectively acquire a real-environment image signal and an environmental sound signal. The signal acquisition and transmission module comprises a synchronous sampling circuit and a signal conditioning circuit connected to it, and the microphone sensor array is connected to the synchronous sampling circuit. The camera is connected to the OMAPL138 development board through a USB interface, and the signal conditioning circuit is in data communication with the OMAPL138. The signal processing module transmits and preprocesses the received data, then sends the data to the data display server to display the calculation result. The device lightens the computational load on the server side and realizes real-time, high-speed acquisition and transmission of environmental acoustic signals.

Description

Acoustic imaging device based on edge computing platform
Technical Field
The utility model belongs to the technical field of acoustic and optical imaging, and specifically relates to a cloud-edge-end acousto-optic imaging device, namely an acoustic imaging device implemented on an edge computing platform.
Background
Acoustic imaging technology combines acoustics, electronics, and information processing to convert sound into an image visible to the human eye. It helps people intuitively understand sound fields, sound waves, and sound sources; conveniently locate and diagnose the noise produced by machinery; and reflect the acoustic-image state of objects (machine equipment).
The acoustic imaging device is implemented on an edge computing platform. According to IDC forecasts, the total amount of global data exceeded 40 zettabytes in 2020, and 45% of the data generated by the Internet of Things is processed at the edge of the network. Traditional cloud computing, however, suffers from insufficient real-time performance, insufficient bandwidth, and excessive energy consumption, and is unfavorable for data security and privacy.
To address these problems, the device is implemented on an edge computing platform oriented toward the computation of the massive data generated by edge equipment.
Summary of the Utility Model
To overcome the deficiencies of the prior art, the utility model provides an acoustic imaging device implemented on an edge computing platform.
To achieve the above object, the utility model provides the following technical solution:
an acoustic imaging device realized based on an edge computing platform comprises an information acquisition terminal, an OMAPL138 development board and a data display server;
the information acquisition terminal comprises a camera and a microphone sensor array, wherein the camera and the microphone sensor array are respectively used for acquiring an image signal and an environmental sound signal of a real environment;
the OMAPL138 development board is provided with a DSP and an ARM processor; a signal acquisition and transmission module and a signal processing module are arranged in the ARM processor;
the signal acquisition and transmission module comprises a synchronous sampling circuit and a signal conditioning circuit connected with the synchronous sampling circuit, and the microphone sensor array is connected with the synchronous sampling circuit;
the camera is connected with the OMAPL138 development board through a USB interface;
and the signal processing module transmits and preprocesses the received data and then sends the data to the data display server to display a calculation result.
Preferably, the microphone array is in an L-shaped layout of 9 array elements with 2 cm spacing between elements; the 9 microphone sensors form 9 signal acquisition channels that acquire environmental sound signals in real time. The camera is a TC421HD wide-dynamic camera.
Preferably, the synchronous sampling circuit uses two LTC2320-14 synchronous sampling ADC chips. The output signals comprise 9 channels, and two identical driving signals generated by a phase-locked loop IP core in the FPGA drive the two LTC2320-14 chips respectively.
The signal conditioning circuit is a single-ended-to-differential circuit. A single-ended signal consists of a signal end and a reference end, the reference end being ground; single-ended-to-differential conversion turns a single-ended signal into two outputs, one in phase with the single-ended signal and the other in antiphase. The conversion circuit is implemented with two operational amplifier circuits. After the microphone array data are converted into differential signals by the signal conditioning circuit, they are connected to the AIN+/AIN− differential input pair. On the output side, the SPI output format adopts CMOS, so the CMOS/LVDS pin is grounded; the output timing adopts SDR mode, so the SDR/DDR pin is grounded; the single-channel serial outputs SDOx of the LTC2320-14 chips are connected directly to I/O pins of the FPGA.
Preferably, the FPGA communicates data with the DSP in the OMAPL138 development board via an EMIFA interface.
Preferably, the FPGA transfers data to the OMAP-L138 while writing data to the buffer. Specifically, a pair of ping-pong FIFOs is created and the read/write logic is controlled by ping-pong operation: one FIFO writes data while the other reads, and the two alternate, so that reading and writing proceed concurrently.
Preferably, the operational amplifier adopts an LT1819CMS8 operational amplifier chip, and the data display server is a BOA server.
The acoustic imaging device based on an edge computing platform provided by the utility model has the following beneficial effects:
1. Synchronous sampling of multi-channel microphone array data is realized. Data processing is carried out near the data producer, without requesting a response from the cloud computing center over the network, which greatly reduces system delay and enhances service responsiveness.
2. The FPGA drives the two LTC2320-14 chips to realize synchronous sampling of the multi-channel microphone array sensor data.
3. Ping-pong FIFOs controlled by ping-pong operation execute the read and write logic alternately, so that read and write operations proceed concurrently and data are transmitted stably and continuously even when read and write rates differ.
4. An edge computing platform built from an FPGA plus the OMAPL138 development board's dual-core (ARM + DSP) processor serves as the hardware carrier of the device. It combines the powerful data processing capability of the FPGA and DSP with the rich I/O interfaces and embedded-system portability of the ARM processor, reduces the computational load on the server side, can preprocess data, realizes real-time, high-speed acquisition and transmission of environmental acoustic signals, and provides a degree of edge computing capability.
5. A BOA server transplanted onto the OMAPL138 platform allows the acoustic imaging result to be browsed in real time from a web page.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some embodiments of the invention, and it will be clear to a person skilled in the art that other drawings can be obtained on the basis of these drawings without inventive effort.
Fig. 1 is a schematic block diagram of an acoustic imaging apparatus implemented based on an edge computing platform according to embodiment 1 of the present invention;
FIG. 2 is a block diagram of a signal conditioning circuit;
FIG. 3 is a schematic diagram of the synchronous sampling chip LTC2320-14;
FIG. 4 is a flow chart of synchronous sampling;
FIG. 5 is a diagram of a ping-pong FIFO architecture;
FIG. 6 is a main functional block diagram of OMAP-L138;
FIG. 7 is a schematic diagram of the connection structure between FPGA and OMAPL 138;
fig. 8 is a schematic diagram of the OMAPL138 shared memory.
Detailed Description
In order to make the technical solution of the present invention better understood and practical for those skilled in the art, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1
The utility model provides an acoustic imaging device implemented on an edge computing platform. The OMAP-L138 is the core of the control and data processing module, the EP4CE115F23I7 FPGA is the core of the data acquisition and transmission module, and a server assists as the data processing module; together they complete the acquisition, transmission, and analysis of the microphone array signals by the acoustic camera system.
Specifically, as shown in fig. 1, the system includes an information acquisition terminal, an OMAPL138 development board, and a data display server; in this embodiment, the data display server is a BOA server.
The information acquisition terminal comprises a camera and a microphone sensor array, wherein the camera and the microphone sensor array are respectively used for acquiring an image signal and an environmental sound signal of a real environment;
the output signal of the microphone array circuit is a single-ended voltage signal, and a single-ended-to-differential signal conditioning circuit is designed for improving the transmission quality and facilitating the subsequent analog-to-digital conversion because the amplitude of the sound pressure signal is small; therefore, the signal acquisition and transmission module comprises a synchronous sampling circuit and a signal conditioning circuit connected with the synchronous sampling circuit, and the microphone sensor array is connected with the synchronous sampling circuit;
the OMAPL138 development board is provided with a DSP and an ARM processor; a signal acquisition and transmission module and a signal processing module are arranged in the ARM processor;
the structure diagram of the OMAPL138 development board is shown in FIG. 6, a camera is connected with the OMAPL138 development board through a USB interface, the camera is controlled by a corresponding driver in a LINUX system on the development board to shoot and store images for further algorithm processing, and a signal conditioning circuit is in data communication with the OMAPL 138;
and the signal processing module transmits and preprocesses the received data and then sends the data to the data display server to display a calculation result.
Specifically, in this embodiment the microphone is a DGO4522DD omnidirectional electret microphone with a sensitivity of −36 ± 3 dB and a frequency response of 20 Hz–16 kHz, which satisfies the requirements of acoustic imaging. The microphone array is in a 9-element L-shaped layout with 2 cm element spacing; the 9 microphone sensors form 9 signal acquisition channels that acquire environmental sound signals in real time. The camera is a TC421HD wide-dynamic camera with backlight compensation and highlight suppression, so it adapts to image acquisition in many different environments, and it is plug-and-play over its USB interface.
In this embodiment the microphone array has 9 output signals, so the synchronous sampling circuit uses two LTC2320-14 synchronous sampling ADC chips, driven by the FPGA to complete synchronous sampling of the microphone array data; the flow of the synchronous sampling scheme is shown in fig. 4. The LTC2320-14, produced by Analog Devices (ADI), is a low-noise, high-speed, 8-channel, 14-bit successive approximation register (SAR) synchronous sampling ADC with a sign bit. Single-channel output data is serial, comprising 1 sign bit and 14 data bits; each channel supports up to a 1.5 Msps sampling rate and 8 Vpp differential input; the chip supports CMOS or LVDS SPI serial I/O and can be powered at 3.3 V or 5 V. Because there are 9 output signals, the phases and wiring of the driving signals must be designed to be consistent to guarantee synchronous sampling, and to manage loading, two identical driving signals generated by a phase-locked loop (PLL) IP core in the FPGA drive the two LTC2320-14 chips respectively, ensuring identical sampling instants. The LTC2320-14 is equipped with a high-speed SPI-compatible serial interface supporting both LVDS mode and CMOS mode. In LVDS mode, each pair of signals is combined and output as a low-voltage differential signal, and the data of two channels are serially output in succession. In CMOS mode, each signal outputs its own channel's data serially and separately.
The signal conditioning circuit is a single-ended-to-differential circuit: because the amplitude of the sound pressure signal is small, the conversion improves transmission quality and facilitates the subsequent analog-to-digital conversion. Differential input significantly reduces common-mode noise and second-harmonic distortion. A single-ended signal consists of a signal end and a reference end, the reference end being ground; single-ended-to-differential conversion turns a single-ended signal into two outputs, one in phase with the single-ended signal and the other in antiphase, and the differential output gain is greater than that of a single-ended output. As shown in fig. 2, the single-ended-to-differential circuit is implemented with two operational amplifier circuits; in this embodiment the operational amplifier is an LT1819CMS8, a dual-channel, wide-bandwidth, high-slew-rate, low-noise, low-distortion op-amp powered from a 5 V DC supply. For the ADC power supply, 3.3 V DC is adopted. On the input side, the microphone array data are converted into differential signals by the signal conditioning circuit and connected to the AIN+/AIN− differential input pair. On the output side, the SPI output format adopts CMOS, so the CMOS/LVDS pin is grounded; the output timing adopts SDR mode, so the SDR/DDR pin is grounded. The LTC2320-14 supports a 2.5 V I/O voltage, level-matched to the FPGA I/O pins, so the single-channel serial outputs SDOx of the LTC2320-14 chips are connected directly to I/O pins of the FPGA, as shown in the schematic of FIG. 3.
As shown in fig. 7, real-time monitoring requires the data transfer speed to be as high as possible, so the FPGA and the DSP in the OMAPL138 development board communicate data through the EMIFA interface. The field-programmable gate array in this embodiment is an EP4CE115F23I7.
In the device, the DSP is responsible for carrying the sensor data from the FPGA into the memory space of the OMAP-L138 module and extracting the valid data; when this task is finished, it notifies the ARM to store the data to the file system. The OMAP-L138 supports several inter-core data transfer modes, including interrupt-based and shared-memory-based modes; this application adopts the shared-memory scheme. In the OMAP-L138, the DSP runs the SYS/BIOS system and accesses the memory space directly through physical addresses, while the ARM runs Linux 3.3 and accesses the memory space through virtual address mapping, as shown in fig. 8.
The DSP writes the processed data directly to a memory address in the DDR2 RAM and then informs the ARM end to read the data via Notify; after receiving the DSP's synchronization signal, the ARM maps the DDR2 RAM address into its virtual address space and stores the data to the file system.
A BOA web server based on the embedded Linux system is transplanted to the ARM end of the OMAPL138 edge computing platform; the acoustic imaging calculation result is displayed from the edge computing platform in B/S mode, realizing real-time browsing of the data at the edge. BOA is a small, efficient, open-source HTTP server that runs under Unix or Linux and supports CGI; its single-task model, small footprint, and good performance make it well suited to embedded systems.
Further, in this embodiment, the FPGA transfers data to the OMAP-L138 while writing data to the buffer. The FPGA samples at a 200 kHz rate while the EMIFA module uses a 100 MHz clock to transmit data, so a data buffer must be provided to absorb the mismatch between the data generation rate and the read-out rate and keep the reading process stable. Because the on-chip memory of the Cyclone IV EP4CE115F23I7 FPGA in this system is 3888 Kbits and cannot store large batches of data, the data must be moved to the OMAP-L138 while they are being written to the buffer. To achieve this, a pair of ping-pong FIFOs of suitable capacity is created and the read/write logic is controlled by ping-pong operation: one FIFO writes data while the other reads, the two alternate without interfering with each other, and read and write operations proceed concurrently; the structure is shown in figure 5.
The environmental sound signal collected by the microphone sensor array in this embodiment is processed as follows: the microphone array collects environmental sound signal data; the data are converted into differential signals by the signal conditioning circuit; the FPGA drives the synchronous sampling circuit to sample the analog signals acquired by the microphones into digital signals, which are buffered in the FPGA; the sampled data are written into the FPGA's buffer through the ping-pong FIFOs and transferred out at the same time; the DSP on the OMAPL138 carries the sensor data from the FPGA into the OMAPL138's memory space; and after the data stored in the file system are processed by the algorithm, the result is sent to the BOA server and displayed on a web page.
The acoustic imaging device based on an edge computing platform provided by this embodiment drives two LTC2320-14 chips from the FPGA to realize synchronous sampling of the multi-channel microphone array sensor data; uses ping-pong FIFOs, controlled by ping-pong operation, to execute the read and write logic alternately, so that read and write operations proceed concurrently and data are transmitted stably and continuously under differing read and write rates; uses an FPGA + OMAPL138 dual-core (ARM + DSP) edge computing platform as the hardware carrier, combining the strong data processing capability of the FPGA and DSP with the rich I/O interfaces and embedded-system portability of the ARM processor to realize real-time, high-speed acquisition and transmission of environmental acoustic signals with a degree of edge computing capability; and transplants a BOA server onto the OMAPL138 platform, so that the acoustic imaging result can be browsed in real time from a web page.
The above embodiments are only preferred embodiments of the utility model; the scope of protection is not limited thereto, and simple changes or equivalent replacements of the technical solution that a person skilled in the art can readily derive within the technical scope of the utility model fall within its protection scope.

Claims (5)

1. An acoustic imaging device realized based on an edge computing platform is characterized by comprising an information acquisition terminal, an OMAPL138 development board and a data display server;
the information acquisition terminal comprises a camera and a microphone sensor array, wherein the camera and the microphone sensor array are respectively used for acquiring an image signal and an environmental sound signal of a real environment;
the OMAPL138 development board is provided with a DSP and an ARM processor; a signal acquisition and transmission module and a signal processing module are arranged in the ARM processor;
the signal acquisition and transmission module comprises a synchronous sampling circuit and a signal conditioning circuit connected with the synchronous sampling circuit, and the microphone sensor array is connected with the synchronous sampling circuit;
the camera is connected with the OMAPL138 development board through a USB interface;
and the signal processing module transmits and preprocesses the received data and then sends the data to the data display server to display a calculation result.
2. The acoustic imaging device based on an edge computing platform according to claim 1, wherein the microphone array is in an L-shaped layout of 9 array elements with 2 cm spacing between elements, the 9 microphone sensors forming 9 signal acquisition channels that acquire the environmental sound signals in real time; and the camera is a TC421HD wide-dynamic camera.
3. The acoustic imaging device based on an edge computing platform according to claim 2, wherein the synchronous sampling circuit uses two LTC2320-14 synchronous sampling ADC chips, the output signals comprise 9 channels, and two identical driving signals generated by a phase-locked loop IP core in an FPGA drive the two LTC2320-14 chips respectively;
the signal conditioning circuit is a single-ended-to-differential circuit; a single-ended signal consists of a signal end and a reference end, the reference end being ground; single-ended-to-differential conversion turns a single-ended signal into two outputs, one in phase with the single-ended signal and the other in antiphase; the conversion circuit is implemented with two operational amplifier circuits; after the microphone array data are converted into differential signals by the signal conditioning circuit, they are connected to the AIN+/AIN− differential input pair; on the output side, the SPI output format adopts CMOS and the CMOS/LVDS pin is grounded; the output timing adopts SDR mode and the SDR/DDR pin is grounded; and the single-channel serial outputs SDOx of the LTC2320-14 chips are connected directly to I/O pins of the FPGA.
4. The acoustic imaging device based on an edge computing platform according to claim 3, wherein the FPGA communicates data with the DSP in the OMAPL138 development board via an EMIFA interface.
5. The acoustic imaging device based on an edge computing platform according to claim 3, wherein the operational amplifier is an LT1819CMS8 operational amplifier chip, and the data display server is a BOA server.
CN202023326837.5U 2020-12-31 2020-12-31 Acoustic imaging device based on edge computing platform Active CN213715133U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202023326837.5U CN213715133U (en) 2020-12-31 2020-12-31 Acoustic imaging device based on edge computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202023326837.5U CN213715133U (en) 2020-12-31 2020-12-31 Acoustic imaging device based on edge computing platform

Publications (1)

Publication Number Publication Date
CN213715133U (en) 2021-07-16

Family

ID=76790324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202023326837.5U Active CN213715133U (en) 2020-12-31 2020-12-31 Acoustic imaging device based on edge computing platform

Country Status (1)

Country Link
CN (1) CN213715133U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114001816A (en) * 2021-12-30 2022-02-01 成都航空职业技术学院 Acoustic imager audio acquisition system based on MPSOC
CN114001816B (en) * 2021-12-30 2022-03-08 成都航空职业技术学院 Acoustic imager audio acquisition system based on MPSOC

Similar Documents

Publication Publication Date Title
CN101271076B (en) Control method for integrated nuclear magnetic resonance spectrometer data communication
CN103986869A (en) Image collecting and displaying device of high-speed TDICCD remote sensing camera
CN206541145U (en) A kind of multi channel signals synchronous
CN205176826U (en) Audio acquisition device based on USB high speed interface
CN213715133U (en) Acoustic imaging device based on edge computing platform
CN106817545B (en) A kind of fast multiresolution video image mirror image rotation processing system
CN109186752B (en) Underwater acoustic signal acquisition, transmission and detection system based on graphic processor
CN112822438A (en) Real-time control multichannel video manager
CN101980281B (en) Ultrasonic image amplification method and amplification system
US9229841B2 (en) Systems and methods for detecting errors and recording actions on a bus
Yan et al. Design of CMOS image acquisition system based on FPGA
CN118157807A (en) Array data synchronization system and method based on multichannel ADC chip
CN107564265B (en) LXI data acquisition unit for high-speed transmission and working method thereof
CN209881907U (en) Image acquisition equipment based on FPGA
CN113823310B (en) Voice interruption wake-up circuit applied to tablet computer
CN102829805A (en) U-disk sensor and detector
CN103565476B (en) Medical ultrasound whole-frame image transmission system
CN203661277U (en) Low-frequency sensor signal transmission device based on MIC interface
CN214122459U (en) Sonar target simulator device
CN205388775U (en) Adopt parallel data processing's figure processing system
CN114006994B (en) Transmission system based on configurable wireless video processor
CN118505489B (en) Image data processing apparatus, image data processing method, image data processing device, and storage medium
CN205005137U (en) FPGA's real -time image acquisition with remove device of making an uproar and handling
Zeng et al. Design of Speech Recognition System Based on Linear Microphone Array
CN108052510A (en) A kind of multilingual translation device for supporting gesture identification and voice pickup

Legal Events

Date Code Title Description
GR01 Patent grant