CN117763399A - Neural network classification method for self-adaptive variable-length signal input - Google Patents

Neural network classification method for self-adaptive variable-length signal input

Info

Publication number
CN117763399A
CN117763399A
Authority
CN
China
Prior art keywords
neural network
size
signal
pooling
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410194232.9A
Other languages
Chinese (zh)
Other versions
CN117763399B (en)
Inventor
周军
谢子熠
刘嘉豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202410194232.9A priority Critical patent/CN117763399B/en
Publication of CN117763399A publication Critical patent/CN117763399A/en
Application granted granted Critical
Publication of CN117763399B publication Critical patent/CN117763399B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a neural network classification method for adaptive variable-length signal input. The method first calculates the length of the signal to be processed and windows the continuous signal according to that length; it also uses the signal length to calculate the feature map sizes of the different convolution layers and of the different pooling layers in the neural network classifier. From the received feature map sizes of the convolution layers and pooling layers of each layer, the neural network classifier generates adjustment parameters for the current layer and thereby optimizes the number of registers and operation units used in the current layer's convolution and pooling operations. Finally, the optimized neural network classifier completes the classification. By using computation units inside the neural network that adapt to the feature map size, the invention realizes neural network classification on variable-length signal input, improves the robustness and generality of the neural network classifier, and reduces system resource usage.

Description

Neural network classification method for self-adaptive variable-length signal input
Technical Field
The invention relates to deep learning technology, and in particular to a neural network classification technique for adaptive variable-length signal input.
Background
Deep learning is a representation learning method that autonomously learns data features and offers efficient, powerful classification and feature-learning capability in both unsupervised and supervised settings. Neural networks, such as the convolutional neural network (CNN), are a common method in the current field of artificial intelligence and are widely used in computer vision, natural language processing, physiological signal analysis, and other fields.
Conventional neural networks require the input signal length to be fixed during training and testing. When the input length changes, the original data must be truncated or zero-padded, which loses part of the original signal information or introduces useless information, and ultimately reduces classification accuracy or increases redundant computation.
Taking electrocardiogram (ECG) signal processing as an example, the length of a heartbeat varies between people and over time. In existing methods, because the input signal has a fixed length, the input length is generally set to the maximum possible heartbeat length so that classification accuracy does not suffer. However, the useful information of a heartbeat exists only between its start and end; the rest is redundant input and causes unnecessary computation.
Disclosure of Invention
The invention aims to solve the technical problem that, in applications such as speech processing and physiological signal processing, the length of the acquired real data is uncertain, while existing fixed-length-input neural network classification methods cannot efficiently adapt to the size of the input signal. The invention therefore provides a neural network classification method suitable for variable-length signal input.
The invention discloses a neural network classification method for adaptive variable-length signal input, which comprises the following steps:
Preprocessing: receiving a continuous raw input signal, and performing filtering and denoising preprocessing on the raw signal;
Adaptive windowing: calculating the length of the signal to be processed from the preprocessed signal; on one hand, windowing the continuous signal according to that length and outputting the windowed signal to a neural network classifier; on the other hand, calculating from the signal length the corresponding sizes of all layers in the neural network classifier, namely the feature map sizes of the convolution layers and of the pooling layers at each layer, and outputting them to the neural network classifier;
Adaptive adjustment: the neural network classifier generating adjustment parameters for the current layer according to the received feature map sizes of the convolution layers and pooling layers of each layer, and then using the adjustment parameters of the current layer to optimize the number of registers and operation units used in the convolution and pooling operations of the current layer of the neural network;
Classification: performing convolution and pooling operations on the windowed signal layer by layer in the order of the neural network layers to obtain a fixed-length signal, completing the classification of the fixed-length signal through the fully-connected layer, and finally outputting the classification result.
The beneficial effect of the invention is that neural network classification based on variable-length signal input is realized by computation units inside the neural network that adapt to the feature map size, which improves the robustness and generality of the neural network classifier, reduces system resource usage, and ultimately lowers the overall cost. The method is suitable for classifying signals of non-fixed length, such as speech signals and physiological signals.
Drawings
Fig. 1 shows the neural network classification system for adaptive variable-length signal input.
Fig. 2 is a flow chart of the neural network classification for adaptive variable-length signal input.
Detailed Description
The ECG or speech signal is first preprocessed, for example by filtering and denoising. For ECG classification, QT complex detection is performed on the ECG signal to locate the start and end positions of a complete heartbeat before the signal is sent to the neural network for classification; for voice activity detection on a speech signal, the start and end positions of the speech are determined by methods such as thresholding, which fixes the length of the speech signal to be processed later. The embodiment system takes the variable-length signal as the input signal of the neural network and cooperates with the adaptive feature-map-size calculation inside the neural network classifier to realize the method. As shown in Fig. 1, the system comprises a preprocessing module, an adaptive windowing module, and a variable-length-input neural network classifier. The variable-length-input neural network classifier comprises a neural network control module, a data buffer, a computation unit, and a fully-connected module. The computation unit comprises a convolution computation module and a pooling computation module.
The specific implementation steps are shown in Fig. 2:
S0. The preprocessing module receives the continuous raw input signal, performs preprocessing such as filtering and denoising on it, and outputs the preprocessed signal to the adaptive windowing module.
S1. The adaptive windowing module receives the preprocessed signal and calculates the length of the signal to be processed. On one hand, it windows the continuous signal according to that length to complete the adaptive signal windowing and outputs the windowed signal to the data buffer in the neural network classifier; on the other hand, it calculates from the signal length the corresponding sizes of all layers of all modules in the neural network classifier, namely the convolution layer feature map size Size_conv and the pooling layer feature map size Size_pooling, and outputs them to the neural network control module in the neural network classifier.
Size_conv = (Size_input - Size_kernel + 2 × Padding) / Stride + 1;
Size_pooling = (Size_input - Size_kernel) / Stride + 1;
where Size_kernel is the preset convolution kernel size in the neural network classifier, Stride is the convolution step size, and Padding is the padding size. Size_input is the input signal size of the current convolution layer; the input of the next convolution layer is the output of the pooling layer of the current layer.
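For illustration, the following minimal Python sketch shows how these per-layer sizes could be computed from the signal length; only the two formulas above come from the patent, while the helper names, the two-layer configuration, and the restriction to a single spatial dimension are assumptions made for this example.

```python
def conv_out_size(size_in: int, kernel: int, stride: int, padding: int) -> int:
    # Size_conv = (Size_input - Size_kernel + 2 * Padding) / Stride + 1
    return (size_in - kernel + 2 * padding) // stride + 1

def pool_out_size(size_in: int, kernel: int, stride: int) -> int:
    # Size_pooling = (Size_input - Size_kernel) / Stride + 1
    return (size_in - kernel) // stride + 1

def layer_sizes(signal_len: int, layers: list) -> list:
    """Feature map sizes that the adaptive windowing module would send,
    layer by layer, to the neural network control module."""
    sizes, cur = [], signal_len
    for cfg in layers:
        s_conv = conv_out_size(cur, cfg["k_conv"], cfg["s_conv"], cfg["pad"])
        s_pool = pool_out_size(s_conv, cfg["k_pool"], cfg["s_pool"])
        sizes.append({"conv": s_conv, "pool": s_pool})
        cur = s_pool  # the next convolution layer consumes the pooled output
    return sizes

# Example: a 300-sample heartbeat passed through two convolution/pooling layers.
print(layer_sizes(300, [
    {"k_conv": 7, "s_conv": 1, "pad": 3, "k_pool": 2, "s_pool": 2},
    {"k_conv": 5, "s_conv": 1, "pad": 2, "k_pool": 2, "s_pool": 2},
]))
```

For a different input length only these sizes change; the kernel sizes, strides, and padding stay fixed as preset network parameters.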
S2. According to the received feature map sizes of the convolution layers and of the pooling layers of each layer, the neural network control module generates the adjustment parameters of the convolution computation module and the pooling computation module, which set the number of registers and operation units used by the convolution and pooling operations of each layer, and outputs the layer-by-layer adjustment parameters to the data buffer.
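Purely as an illustration of what such adjustment parameters might contain, the sketch below maps a layer's feature map sizes to register and operation-unit counts; the patent does not give an explicit mapping, so the field names, the pe_width cap, and the one-register-per-output-sample assumption are hypothetical.

```python
def adjustment_params(size_conv: int, size_pool: int, kernel: int, pe_width: int = 16) -> dict:
    # Hypothetical mapping: one output register per output sample, and the number of
    # active operation units capped by the width of the processing-element array.
    return {
        "conv_registers": size_conv,
        "pool_registers": size_pool,
        "conv_units": min(pe_width, size_conv),
        "macs_per_unit": kernel,  # multiply-accumulates each unit performs per output sample
    }
```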
S3. The data buffer temporarily stores the input adjustment parameters; the network parameters are pre-stored in the data buffer.
S4. The neural network classifier performs the first-layer calculation. The computation unit reads the parameters and data of the current layer from the data buffer; the parameters comprise the adjustment parameters and the network parameters of the current layer, and the data is the windowed signal. After the convolution computation module and the pooling computation module are tuned with the adjustment parameters, the windowed signal undergoes the first-layer convolution operation in the convolution computation module and then the first-layer pooling operation in the pooling computation module, following the layer order of the neural network, and the intermediate results of the two modules are stored in the data buffer for the computation unit of the next neural network layer. The network parameters comprise weights and biases.
S5. The neural network classifier performs the next-layer calculation. The computation unit reads the parameters and data of the current layer from the data buffer; the parameters comprise the adjustment parameters and network parameters of the current layer, and the data is the intermediate result stored in S4. Following the layer order of the neural network, the convolution computation module and the pooling computation module output the new intermediate result to the data buffer for the computation unit of the next layer.
In the layer-by-layer operation of the neural network, for example, the first step is the first-layer convolution operation and the first-layer pooling operation, which produce intermediate result 1; intermediate result 1 is then used as the input of the second-layer convolution and pooling operations. The data on the arrow from the data buffer to the computation unit in Fig. 1 is the windowed signal, i.e. the input signal or the calculation result of the previous layer.
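A software sketch of this S4/S5 schedule, under stated assumptions, is given below; NumPy arrays stand in for the hardware data buffer, the convolution is single-channel, the pooling is taken to be max pooling, and all function names are illustrative rather than taken from the patent.

```python
import numpy as np

def conv1d(x, w, b, stride=1, padding=0):
    # Direct 1-D convolution of a single channel with kernel w and bias b.
    x = np.pad(x, (padding, padding))
    out_len = (len(x) - len(w)) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + len(w)], w) for i in range(out_len)]) + b

def maxpool1d(x, kernel, stride):
    out_len = (len(x) - kernel) // stride + 1
    return np.array([x[i * stride:i * stride + kernel].max() for i in range(out_len)])

def run_layers(windowed_signal, layer_params):
    # The intermediate result of each layer is written back to the buffer and read
    # again as the input of the next layer, mirroring the data-buffer round trips above.
    buffer = np.asarray(windowed_signal, dtype=float)
    for p in layer_params:  # p carries this layer's network parameters (weights, bias)
        buffer = conv1d(buffer, p["w"], p["b"], p["stride"], p["pad"])
        buffer = maxpool1d(buffer, p["pool_k"], p["pool_s"])
    return buffer
```

Each loop iteration corresponds to one read of parameters and data from the buffer and one write of the intermediate result back to it.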
S6. After the neural network classifier completes the convolution and pooling operations of all layers, the pooling computation module converts the output into a fixed-length signal and outputs the fixed-length signal to the fully-connected module.
S7. The fully-connected layer receives the fixed-length signal, performs the classification, and finally outputs the classification result.
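As a sketch of S6 and S7 only, and assuming the fixed-length conversion is a segment-averaging pooling (the patent does not name the pooling variant used for this step), the final stage could look like the following.

```python
import numpy as np

def to_fixed_length(feature_map, out_len):
    # Split the variable-length feature map into out_len roughly equal segments
    # and average each one, yielding a vector of fixed size for the FC layer.
    chunks = np.array_split(np.asarray(feature_map, dtype=float), out_len)
    return np.array([c.mean() for c in chunks])

def classify(feature_map, fc_weight, fc_bias, out_len):
    fixed = to_fixed_length(feature_map, out_len)
    logits = fc_weight @ fixed + fc_bias  # fully-connected layer
    return int(np.argmax(logits))         # index of the predicted class
```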
In the working phase, steps S0 through S7 are executed once for each variable-length signal that is input.

Claims (6)

1. A neural network classification method for adaptive variable-length signal input, characterized by comprising the following steps:
Preprocessing: receiving a continuous raw input signal, and performing filtering and denoising preprocessing on the raw signal;
Adaptive windowing: calculating the length of the signal to be processed from the preprocessed signal; on one hand, windowing the continuous signal according to that length and outputting the windowed signal to a neural network classifier; on the other hand, calculating from the signal length the corresponding sizes of all layers in the neural network classifier, namely the feature map sizes of the convolution layers and of the pooling layers at each layer, and outputting them to the neural network classifier;
Adaptive adjustment: the neural network classifier generating adjustment parameters for the current layer according to the received feature map sizes of the convolution layers and pooling layers of each layer, and then using the adjustment parameters of the current layer to optimize the number of registers and operation units used in the convolution and pooling operations of the current layer of the neural network;
Classification: performing convolution and pooling operations on the windowed signal layer by layer in the order of the neural network layers to obtain a fixed-length signal, completing the classification of the fixed-length signal through the fully-connected layer, and finally outputting the classification result.
2. The method of claim 1, wherein the convolution layer feature map size Size_conv and the pooling layer feature map size Size_pooling are calculated as follows:
Size_conv = (Size_input - Size_kernel + 2 × Padding) / Stride + 1;
Size_pooling = (Size_input - Size_kernel) / Stride + 1;
where Size_kernel is the preset convolution kernel size in the neural network classifier, Stride is the convolution step size, and Padding is the padding size.
3. The method of claim 1, wherein the preprocessing step is performed by a preprocessing module.
4. The method of claim 1, wherein the adaptive windowing step is performed by an adaptive windowing module.
5. The method of claim 1, wherein the adaptive adjustment step and the classification step are performed by a neural network classifier;
the neural network classifier comprises a neural network control module, a data buffer, a computation unit, and a fully-connected module;
the specific steps of the neural network classifier are as follows:
the neural network control module generates the adjustment parameters of the current layer according to the received feature map sizes of the convolution layers and of the pooling layers of each layer, and outputs the adjustment parameters to the data buffer;
the data buffer temporarily stores the input adjustment parameters;
the computation unit reads the adjustment parameters from the data buffer and, after optimizing the number of registers and operation units used in the convolution and pooling operations of the current layer, reads the windowed signal from the data buffer, performs the convolution operation on the windowed signal, performs the pooling operation to convert it into a fixed-length signal, and finally outputs the fixed-length signal to the fully-connected module;
the fully-connected module receives the fixed-length signal, completes the classification through the fully-connected layer, and finally outputs the classification result.
6. The method of claim 5, wherein the convolution operation is performed by a convolution computation module in the computation unit, and the pooling operation by a pooling computation module in the computation unit.
CN202410194232.9A 2024-02-21 2024-02-21 Neural network classification method for self-adaptive variable-length signal input Active CN117763399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410194232.9A CN117763399B (en) 2024-02-21 2024-02-21 Neural network classification method for self-adaptive variable-length signal input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410194232.9A CN117763399B (en) 2024-02-21 2024-02-21 Neural network classification method for self-adaptive variable-length signal input

Publications (2)

Publication Number Publication Date
CN117763399A true CN117763399A (en) 2024-03-26
CN117763399B CN117763399B (en) 2024-05-14

Family

ID=90326016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410194232.9A Active CN117763399B (en) 2024-02-21 2024-02-21 Neural network classification method for self-adaptive variable-length signal input

Country Status (1)

Country Link
CN (1) CN117763399B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069958A (en) * 2018-01-22 2019-07-30 北京航空航天大学 A kind of EEG signals method for quickly identifying of dense depth convolutional neural networks
US20200054262A1 (en) * 2018-08-16 2020-02-20 Korea Institute Of Science And Technology Method for real time analyzing stress using deep neural network algorithm
US20210036656A1 (en) * 2018-10-29 2021-02-04 Xi'an Jiaotong University Arc fault detection method for photovoltaic system based on adaptive kernel function and instantaneous frequency estimation
US20210142144A1 (en) * 2019-11-07 2021-05-13 Alibaba Group Holding Limited Multi-size convolutional layer
CN111053549A (en) * 2019-12-23 2020-04-24 威海北洋电气集团股份有限公司 Intelligent biological signal abnormality detection method and system
CN111210019A (en) * 2020-01-16 2020-05-29 电子科技大学 Neural network inference method based on software and hardware cooperative acceleration
CN111460932A (en) * 2020-03-17 2020-07-28 哈尔滨工程大学 Underwater sound signal classification and identification method based on self-adaptive convolution
CN111445420A (en) * 2020-04-09 2020-07-24 北京爱芯科技有限公司 Image operation method and device of convolutional neural network and electronic equipment
CN111584029A (en) * 2020-04-30 2020-08-25 天津大学 Electroencephalogram self-adaptive model based on discriminant confrontation network and application of electroencephalogram self-adaptive model in rehabilitation
US20210374518A1 (en) * 2020-05-27 2021-12-02 Nvidia Corporation Techniques for modifying and training a neural network
CN111783876A (en) * 2020-06-30 2020-10-16 西安全志科技有限公司 Self-adaptive intelligent detection circuit and image intelligent detection method
CN111738427A (en) * 2020-08-14 2020-10-02 电子科技大学 Operation circuit of neural network
US20220191524A1 (en) * 2020-12-14 2022-06-16 Nokia Technologies Oy Caching and Clearing Mechanism for Deep Convolutional Neural Networks
CN114190889A (en) * 2021-11-19 2022-03-18 上海联影智能医疗科技有限公司 Electrocardiosignal classification method and system, electronic equipment and readable storage medium
CN114692830A (en) * 2022-03-25 2022-07-01 潘振华 Self-strengthening image and voice deep learning model of promotion network
CN114742225A (en) * 2022-04-07 2022-07-12 中国科学院合肥物质科学研究院 Neural network reasoning acceleration method based on heterogeneous platform
CN115221926A (en) * 2022-07-20 2022-10-21 吉林大学 Heart beat signal classification method based on CNN-GRU network model
CN116027911A (en) * 2023-03-29 2023-04-28 北京理工大学 Non-contact handwriting input recognition method based on audio signal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Wang et al.: "A Lightweight Neural Network Based Respiratory Rate Estimation Approach Using PPG Signal", 2023 6th International Conference on Electronics Technology (ICET), 15 August 2023, pages 1446-1449 *
祝镇 (Zhu Zhen): "Research on Adaptive Reconfigurable Heartbeat Detection Hardware for Intelligent ECG Monitoring Systems", China Master's Theses Full-text Database, Basic Sciences, vol. 2023, no. 01, 15 January 2023, pages 006-796 *

Also Published As

Publication number Publication date
CN117763399B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN109410974B (en) Voice enhancement method, device, equipment and storage medium
US20230334632A1 (en) Image recognition method and device, and computer-readable storage medium
CN111814975B (en) Neural network model construction method and related device based on pruning
CN110348564B (en) SCNN reasoning acceleration device based on systolic array, processor and computer equipment
CN111144556A (en) Hardware circuit of range batch processing normalization algorithm for deep neural network training and reasoning
CN115941112B (en) Portable hidden communication method, computer equipment and storage medium
CN111950715A (en) 8-bit integer full-quantization inference method and device based on self-adaptive dynamic shift
CN117501245A (en) Neural network model training method and device, and data processing method and device
CN117763399B (en) Neural network classification method for self-adaptive variable-length signal input
CN114996495A (en) Single-sample image segmentation method and device based on multiple prototypes and iterative enhancement
CN113298235A (en) Neural network architecture of multi-branch depth self-attention transformation network and implementation method
CN115328661B (en) Computing power balance execution method and chip based on voice and image characteristics
CN117494762A (en) Training method of student model, material processing method, device and electronic equipment
CN116187418A (en) Pulse neural network compression method suitable for neuromorphic hardware
Lee et al. MPQ-YOLACT: Mixed-Precision Quantization for Lightweight YOLACT
KR102478256B1 (en) Rank order coding based spiking convolutional neural network calculation method and handler
US10938412B2 (en) Decompression of model parameters using functions based upon cumulative count distributions
CN111354372B (en) Audio scene classification method and system based on front-end and back-end combined training
CN115880324A (en) Battlefield target image threshold segmentation method based on pulse convolution neural network
CN113255446B (en) Face detection system
US20230409869A1 (en) Process for transforming a trained artificial neuron network
CN113673690B (en) Underwater noise classification convolutional neural network accelerator
CN113435586B (en) Convolution operation device and system for convolution neural network and image processing device
CN117236388A (en) Transformation process of trained artificial neuron network
EP4016386A1 (en) Method and server for recognizing and determining object in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant