CN116155283A - TI-ADC mismatch error calibration method based on fully connected neural network - Google Patents


Info

Publication number
CN116155283A
Authority
CN
China
Prior art keywords
neural network
adc
training
mismatch error
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310143328.8A
Other languages
Chinese (zh)
Inventor
彭析竹
米奕杭
张耘凡
唐鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202310143328.8A priority Critical patent/CN116155283A/en
Publication of CN116155283A publication Critical patent/CN116155283A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 1/00: Analogue/digital conversion; Digital/analogue conversion
    • H03M 1/10: Calibration or testing
    • H03M 1/1009: Calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 1/00: Analogue/digital conversion; Digital/analogue conversion
    • H03M 1/12: Analogue/digital converters
    • H03M 1/124: Sampling or signal conditioning arrangements specially adapted for A/D converters
    • H03M 1/1245: Details of sampling arrangements or methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention discloses a TI-ADC mismatch error calibration method based on a fully connected neural network. First, a TI-ADC mismatch error model is established, signal data containing errors are generated, and a neural network training data set is constructed from these data. A fully connected neural network is then built and trained on the data set to obtain a trained network. Next, TI-ADC signals with mismatch errors are generated from the error model to test the trained network, and a fast Fourier transform of the network output yields a spectrogram and performance metrics. Based on the test results, the training parameters (number of training iterations, batch size, loss function type, optimizer, learning rate, etc.) are adjusted and the network is retrained; training is repeated until the test results reach the expected target. The algorithm is novel, the flow is simple, and the method can effectively improve the overall performance of the TI-ADC.

Description

TI-ADC mismatch error calibration method based on fully connected neural network
Technical Field
The invention belongs to the technical field of analog integrated circuits and relates to a mismatch error calibration method for a TI-ADC (time-interleaved analog-to-digital converter), covering the calibration of gain mismatch errors, offset mismatch errors and time mismatch errors.
Background
In the field of integrated circuits, the analog-to-digital converter (ADC) is an indispensable component: it converts continuous analog signals from the physical world into the discrete digital signals used by digital systems, serving as the bridge between the analog and digital worlds. ADCs come in many varieties, including the pipelined ADC, the successive-approximation ADC (SAR ADC) and the time-interleaved ADC (TI-ADC). A TI-ADC contains multiple sub-ADCs, each sampling the input independently to form a channel; in the synthesis stage, the outputs of the channels are interleaved in order to form the final sampling result. If a TI-ADC comprises N channels and each channel's sub-ADC samples at fs, the overall sampling rate rises to N·fs, which is why TI-ADCs are widely used in high-speed, high-precision applications.
In practice, however, design limitations, production process defects and interference from the working environment degrade TI-ADC performance and introduce errors into the converted signal. The main TI-ADC errors are gain mismatch errors, offset mismatch errors and time mismatch errors: gain mismatch arises from unequal gains among the sub-channel ADCs, offset mismatch from unequal offsets among the sub-channel ADCs, and time mismatch from skew in the sub-channel sampling clocks. To improve TI-ADC performance and the quality of the converted signal, the ADC must be calibrated.
Conventional calibration methods fall into analog and digital categories. Analog calibration modifies or fine-tunes the analog circuitry, while digital calibration compensates the ADC output signal with digital circuits. Both suffer from drawbacks such as algorithmic complexity and high resource consumption. In recent years, artificial intelligence (AI), with neural networks as a representative technique, has developed rapidly and achieved remarkable results in image recognition, human-computer interaction, speech recognition and other fields. Neural networks therefore open a new avenue for TI-ADC error calibration.
Disclosure of Invention
To address the gain mismatch, offset mismatch and time mismatch errors of the TI-ADC, as well as the algorithmic complexity and high resource consumption of conventional calibration methods, the invention provides a new method that calibrates the various TI-ADC mismatch errors with a neural network. It can partially eliminate the influence of TI-ADC mismatch errors on the signal and effectively improve the quality of the converted signal.
The whole implementation steps of the calibration method in the invention are as follows:
S1, establishing a TI-ADC mismatch error model as follows:
S_e = amp×sin(2πft) + E_g×sin(2π(fs/N×k±f)t) + E_o×sin(2π(fs/N×k)t) + E_t×sin(2π(fs/N×k±f)t)
wherein amp is the signal amplitude, f is the signal frequency, t is the sampling time, E_g is the spur amplitude caused by gain mismatch error, E_o is the spur amplitude caused by offset mismatch error, E_t is the spur amplitude caused by time mismatch error, fs is the sampling frequency of the TI-ADC, and N is the number of channels (k = 0, 1, …, N-1);
S2, constructing a training data set:
based on the established TI-ADC mismatch error model, setting the parameters amp, f, N, k, E_g, E_o and E_t and selecting a series of sampling instants t to obtain a sequence of time-continuous signal data points serving as one data sample of the input data set, then repeating with multiple random parameter settings to collect the many data samples that form the training data set;
S3, constructing a fully connected neural network comprising an input layer, a hidden layer and an output layer, with the neurons of adjacent layers interconnected;
S4, training the neural network with the constructed training data set to obtain a trained neural network;
S5, testing the performance of the trained neural network: generating an error-laden TI-ADC test signal by the method of S1, feeding the test signal into the trained neural network, performing an FFT on the network output to obtain a signal spectrogram, computing evaluation metrics from the spectrogram, and judging them against a set standard to assess the network's performance; if the performance reaches the expected target, proceed to S7, otherwise proceed to S6;
S6, optimizing the neural network, namely adjusting the training parameters, and returning to S4;
S7, performing mismatch error calibration of the TI-ADC with the trained neural network.
The fully connected neural network in step S3 has the following structure: three layers, namely an input layer, a hidden layer and an output layer. The input layer consists of 1000 neuron nodes, the hidden layer of 36 neuron nodes, and the output layer of 1 neuron node. This structure can be denoted compactly as [1000, 36, 1].
The proposed calibration algorithm applies to sinusoidal signals with TI-ADC mismatch errors of different frequencies and magnitudes. It effectively suppresses the influence of the TI-ADC's gain mismatch, offset mismatch and time mismatch errors on the converted signal, improving overall signal quality and performance metrics (ENOB, SFDR, etc.), thereby achieving TI-ADC error calibration.
Drawings
FIG. 1 is a flow chart of the main implementation steps of the calibration method of the present invention.
Fig. 2 is a basic neuron structure diagram of a neural network.
Fig. 3 is a schematic diagram of the fully-connected neural network structure of the present invention.
FIG. 4 is a flow chart of the neural network training test of the present invention.
Fig. 5 is a diagram of a pre-calibration signal FFT spectral analysis.
FIG. 6 is a graph of the FFT spectrum analysis of the signal after calibration according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples.
The invention uses a neural network to calibrate the gain mismatch, offset mismatch and time mismatch errors of the TI-ADC. The main implementation steps and flow are shown in FIG. 1 and comprise six stages: establishing the TI-ADC mismatch error model, constructing the neural network training data set, building the neural network, training the neural network, testing its performance, and optimizing it.
Step one: and (5) establishing a TI-ADC mismatch error model.
In the traditional method, the general procedure for modeling the three TI-ADC errors is as follows: generate signal data for each sub-channel ADC; perturb the gain, offset and sampling clock of each channel so that gains and offsets differ between channels and the sampling clocks of one or more channels are skewed; then interleave and combine the sub-channel data to obtain a TI-ADC output containing gain mismatch, offset mismatch and time mismatch errors. Although this method yields a signal containing TI-ADC errors, the process is relatively cumbersome and ill-suited to generating large volumes of signal data quickly. The invention therefore builds the TI-ADC error model directly in the frequency domain.
For a sinusoidal signal of frequency f sampled by a TI-ADC with sampling frequency fs and N channels, prior knowledge shows that the three mismatch errors manifest as spur components at specific frequencies of the spectrum: the spurs caused by gain mismatch error lie at fs/N×k±f (k = 0, 1, …, N-1), the spurs caused by offset mismatch error at fs/N×k (k = 0, 1, …, N-1), and the spurs caused by time mismatch error at fs/N×k±f (k = 0, 1, …, N-1).
Based on this theory, the TI-ADC mismatch error model of the invention is established as follows. First generate an ideal, error-free sinusoid S = amp×sin(2πft), where amp is the signal amplitude, f the signal frequency and t the sampling time. Then add the spur components caused by the TI-ADC's gain mismatch, offset mismatch and time mismatch errors to the ideal signal, yielding a signal containing all three mismatch errors: S_e = amp×sin(2πft) + E_g×sin(2π(fs/N×k±f)t) + E_o×sin(2π(fs/N×k)t) + E_t×sin(2π(fs/N×k±f)t), where E_g is the spur amplitude caused by gain mismatch error, E_o the spur amplitude caused by offset mismatch error, and E_t the spur amplitude caused by time mismatch error. S_e is then a sinusoid containing the TI-ADC's gain mismatch, offset mismatch and time mismatch errors.
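The frequency-domain error model above can be sketched in Python, the language used for the embodiment. The function below is an illustrative reconstruction, not the patent's actual code: the function name, the parameter values, and the choice of keeping one sideband per ± term are assumptions.

```python
import numpy as np

def ti_adc_error_signal(amp, f, fs, n_ch, k, e_g, e_o, e_t, t):
    """Sinusoid with TI-ADC gain/offset/time mismatch spurs (frequency-domain model).

    Spur placement follows the model: gain and time mismatch produce tones at
    fs/N*k ± f, offset mismatch at fs/N*k. The ± denotes spurs on both sides of
    fs/N*k; this sketch keeps one sideband per term for brevity (assumption).
    """
    s = amp * np.sin(2 * np.pi * f * t)                      # ideal signal
    s += e_g * np.sin(2 * np.pi * (fs / n_ch * k + f) * t)   # gain-mismatch spur
    s += e_o * np.sin(2 * np.pi * (fs / n_ch * k) * t)       # offset-mismatch spur
    s += e_t * np.sin(2 * np.pi * (fs / n_ch * k - f) * t)   # time-mismatch spur
    return s

fs = 100e6                       # 100 MHz TI-ADC sampling rate, as in the embodiment
t = np.arange(1000) / fs         # 1000 consecutive sampling instants
sig = ti_adc_error_signal(amp=1.0, f=10.17e6, fs=fs, n_ch=4, k=1,
                          e_g=1e-3, e_o=1e-3, e_t=1e-3, t=t)
```

Generating a sample this way avoids simulating the sub-channel ADCs individually, which is the point of the frequency-domain model.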
Step two: and constructing a neural network training data set.
The neural network training data set is a necessary condition for training the neural network, and the advantages and disadvantages of the data set directly influence the training effect of the neural network. The neural network training data set comprises two major parts, namely an input data set and a training target data set.
The input data set is generated by the TI-ADC error model of step one. Setting the parameters amp, f, N, k, E_g, E_o and E_t in the formula of step one yields a signal containing the three TI-ADC mismatch errors; taking 1000 consecutive sampling instants t gives 1000 time-continuous signal data points, which form one data sample of the input data set. Randomizing the values of amp, f, N, k, E_g, E_o and E_t within suitable ranges produces a large number of such samples, and all data samples together form the input data set.
The training target data set is derived from the ideal signal. For each signal containing the three TI-ADC mismatch errors, take the corresponding error-free ideal signal over the same 1000 consecutive points and use its 500th data point as the training target; this single point is one data sample of the training target data set. Every data sample in the input data set thus has a corresponding target point. All target points together form the training target data set, and the input data set and training target data set together form the neural network training data set.
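The data-set construction of step two can be sketched as follows, under the same assumptions as the error-model sketch (NumPy, one sideband per ± term); the parameter ranges and helper name are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_samples, fs=100e6, n_ch=4, win=1000):
    """Build (input, target) pairs per steps one and two:
    input  = 1000 consecutive error-laden samples,
    target = the 500th sample of the corresponding ideal sinusoid."""
    x = np.empty((n_samples, win))
    y = np.empty((n_samples, 1))
    for i in range(n_samples):
        # randomize model parameters within illustrative ranges (assumption)
        amp = rng.uniform(0.5, 1.0)
        f = rng.uniform(1e6, 40e6)
        k = int(rng.integers(1, n_ch))
        e_g, e_o, e_t = rng.uniform(1e-4, 1e-2, size=3)
        t = np.arange(win) / fs
        ideal = amp * np.sin(2 * np.pi * f * t)
        spurs = (e_g * np.sin(2 * np.pi * (fs / n_ch * k + f) * t)
                 + e_o * np.sin(2 * np.pi * (fs / n_ch * k) * t)
                 + e_t * np.sin(2 * np.pi * (fs / n_ch * k - f) * t))
        x[i] = ideal + spurs
        y[i, 0] = ideal[499]      # 500th point (index 499) is the training target
    return x, y

xs, ys = make_dataset(8)
```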
Step three: and (3) building a neural network.
To reduce algorithm complexity and training time, the invention adopts a structurally simple fully connected neural network for TI-ADC error calibration. The basic unit of a neural network is the neuron, whose structure is shown in FIG. 2, where x_i are the neuron inputs, w_i is the weight corresponding to each x_i, b is the neuron bias term, and f is the activation function. The neuron output y is obtained by the following formula:
y = f(∑_i w_i×x_i + b)
the activation function f is a nonlinear function that acts to introduce nonlinear characteristics into the neural network to enhance the fitting ability of the network. Common activation functions are Sigmoid functions, tanh functions, reLU functions, and the like. The neural network includes several layers, each layer in turn including a plurality of neurons. The neurons of different layers are connected with each other according to a certain relation, so that the neural network with different structures can be formed. The fully connected neural network used in the invention is to connect all neurons between every two adjacent layers one by one, but not between the neurons of the same layer.
The structure of the fully connected neural network built by the invention is shown in FIG. 3: three layers in total, namely an input layer, a hidden layer and an output layer. The input layer consists of 1000 neuron nodes, the hidden layer of 36 neuron nodes, and the output layer of 1 neuron node, with the neurons of adjacent layers connected one to one.
The activation function of the fully-connected neural network built by the invention adopts a ReLU function, and the formula is as follows:
f(x)=max(0,x)。
the ReLU function is a maximum function, where the output is 0 when the input is less than 0, and remains unchanged when the input is greater than or equal to 0.
Step four: training of neural networks.
A single training pass of the neural network involves three processes: forward propagation, error calculation and back propagation. In forward propagation, the input-layer neurons receive the input data and compute their outputs according to the neuron formula of step three. The outputs of all input-layer neurons serve as the inputs of the hidden-layer neurons, which perform the same computation, and the hidden-layer outputs in turn feed the output-layer neurons, which repeat the process. The output of the output-layer neuron is the final result of forward propagation.
In the error calculation process, the output value of the neural network in forward propagation is compared with the value of the corresponding training target data, and the error value of the output value and the value of the training target data is calculated according to a certain calculation formula.
In back propagation, the error value obtained in the error-calculation process is taken as input and computation proceeds backwards from the output layer of the network. Following the gradient-descent principle, the partial derivatives of the error with respect to each neuron's weight w and bias term b are computed from back to front, and w and b are adjusted accordingly so that the modified weights and biases reduce the error between the network output and the training target. The modified w and b are then used as the weights and bias terms of the next forward propagation.
In combination with the above process, the training flow of the neural network is shown in fig. 4.
First, setting initial parameters.
The initial parameters include a neuron initial weight w and an initial bias term b. Before training begins, the initial weight and bias term of each neuron need to be preset to perform the first forward propagation process. In addition, the initial parameters include hyper-parameters of the training process, such as Batch Size (Batch Size), number of training iterations (Epoch), loss function, training optimizer, training learning rate, etc.
Second, forward propagation.
And inputting the input data in the training data set into a neural network for calculation, and obtaining a network output result through the forward propagation process.
Third, calculating the error.
The network output is compared with the target data and the error value is computed by the chosen formula. The error computation depends on the batch size: with a batch size of m, one error value is computed after every m inputs have been processed, and that value serves as the input to back propagation.
Fourth, counter-propagating.
Starting from the error between the network output and the target data, computation proceeds backwards through the network. Following the gradient-descent principle, the partial derivatives of the error with respect to each neuron's weight w and bias term b are obtained and serve as the basis for the parameter update.
And fifthly, updating parameters.
The weight w and the bias term b of each neuron are modified and updated according to the result obtained in the back propagation process, and then the new w and b are taken as w and b used in the next forward propagation. Wherein the modification of w and b is performed in a direction that reduces the error value of the forward propagating output result from the target data.
Sixth, repeat the second through fifth steps: forward propagation, error calculation, back propagation and parameter update. During this process the error between the network output and the target data steadily decreases, and the neuron weights w and bias terms b converge. Training ends once the preset number of training iterations is reached.
Step five: and testing the performance of the neural network.
After training, the network's performance must be tested. The test method is as follows: use the TI-ADC error model of step one (or a traditional error model) to generate a sinusoid containing gain mismatch, offset mismatch and time mismatch errors, and take 16384 consecutive sample points of it as the original signal data. Feed the original data into the trained network in groups of 1000 consecutive points; the network outputs form the calibrated signal data.
Apply a fast Fourier transform (FFT) to the calibrated signal data to obtain the signal spectrogram, and observe whether the spur components caused by TI-ADC mismatch errors have been effectively reduced. From the FFT result, compute the effective number of bits (ENOB) and the spurious-free dynamic range (SFDR) of the signal, and compare the calibrated signal's ENOB and SFDR with those of the original signal to judge whether the neural network's calibration effect reaches the expected target.
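A minimal sketch of this spectral evaluation: FFT, then SFDR in dBc and an SNDR-derived ENOB. It assumes coherent sampling (an integer number of signal periods in the record, as is typical for ADC testing) so no window is needed; the formula ENOB = (SNDR - 1.76)/6.02 is the standard relation, and the test tone below is synthetic, not the patent's data.

```python
import numpy as np

def spectrum_metrics(sig, fs):
    """Return (SFDR in dBc, ENOB) from a single-tone record, assuming
    coherent sampling so the FFT needs no window."""
    n = len(sig)
    spec = np.abs(np.fft.rfft(sig)) / n
    spec[0] = 0.0                            # discard DC
    fund = int(np.argmax(spec))              # fundamental bin
    p_fund = spec[fund] ** 2
    rest = spec.copy()
    rest[fund] = 0.0                         # everything else: spurs + noise
    sfdr = 10 * np.log10(p_fund / np.max(rest ** 2))   # vs. largest spur
    sndr = 10 * np.log10(p_fund / np.sum(rest ** 2))   # vs. total non-signal power
    enob = (sndr - 1.76) / 6.02
    return sfdr, enob

fs, n = 100e6, 16384
t = np.arange(n) / fs
f0 = 101 * fs / n                            # place the tone exactly on a bin
sig = (np.sin(2 * np.pi * f0 * t)
       + 1e-4 * np.sin(2 * np.pi * 303 * fs / n * t))   # one -80 dBc spur
sfdr, enob = spectrum_metrics(sig, fs)
```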
Step six: optimization of the neural network.
If the trained network passes the performance test, its performance meets the standard and it can be used to calibrate the TI-ADC's gain mismatch, offset mismatch and time mismatch errors. If the test result falls short of the expected target, the network's performance is insufficient and it must be retrained to improve its calibration performance.
In the retraining stage, the pre-training parameter settings are adjusted, including the number of training iterations (epochs), the batch size, the loss function type, the training optimizer and the learning rate, after which the network is trained and tested again.
During training, the parameters can be tuned by observing the trend of the error value (loss) after each forward propagation, as follows:
if loss gradually decreases from the neural network training and there is still a more pronounced downward trend at the end of the training, this indicates that the training is inadequate and the neural network weights and bias terms have not converged to optimal values. In this case, an increase in the number of training iterations may be considered.
If loss shows a decreasing trend from the training of the neural network, but decreases slowly, the parameter updating step length is lower and the updating amplitude is too small in the training. In this case, it is considered to increase the learning rate.
If loss does not show a steady descending trend after training from the neural network, but continuously oscillates up and down or does not converge at all, the method indicates that the parameter updating step length is larger and the updating amplitude is too large during training. In this case, it may be considered to reduce the learning rate.
If loss does not drop after the loss is reduced to a certain value in the neural network training process, but the network test result is not ideal enough, the network training is possibly trapped in a local optimal solution. In this case, the optimization class may be replaced, or the initialization weight and bias parameters before the start of the network training may be adjusted, and the training may be performed again.
And step seven, performing mismatch error calibration of the TI-ADC by using the obtained neural network.
In the embodiment, a 4-channel TI-ADC model containing gain mismatch, offset mismatch and time mismatch errors is built with Python code. The TI-ADC sampling frequency is set to 100 MHz, a large volume of data containing the three mismatch errors is generated to build the training data set, and the network is then trained, tested and optimized. For the performance test, the error model generates a 10.17 MHz sinusoid to which gain mismatch, offset mismatch and time mismatch errors of -60 dBc are added, and the network calibrates the error-laden signal. FIG. 5 shows the signal FFT spectrum before calibration, and FIG. 6 the spectrum after neural network calibration. Comparing the two, the calibration effectively suppresses the spurs caused by TI-ADC mismatch errors: relative to the signal before calibration, the ENOB improves from 8.48 bit to 13.43 bit and the SFDR from 59.81 dBc to 96.87 dBc, a marked calibration effect.
In summary, the TI-ADC mismatch error calibration method based on a fully connected neural network is simple in principle, novel in approach and easy to apply. Simulation verifies a marked calibration effect: the method effectively removes the errors caused by the TI-ADC's gain mismatch, offset mismatch and time mismatch, improving both the overall performance of the TI-ADC and the quality of the converted signal.

Claims (2)

1. A TI-ADC mismatch error calibration method based on a fully connected neural network, characterized by comprising the following steps:
S1, establishing a TI-ADC mismatch error model as follows:
S_e = amp×sin(2πft) + E_g×sin(2π(fs/N×k±f)t) + E_o×sin(2π(fs/N×k)t) + E_t×sin(2π(fs/N×k±f)t)
wherein amp is the signal amplitude, f is the signal frequency, t is the sampling time, E_g is the spur amplitude caused by gain mismatch error, E_o is the spur amplitude caused by offset mismatch error, E_t is the spur amplitude caused by time mismatch error, fs is the sampling frequency of the TI-ADC, and N is the number of channels (k = 0, 1, …, N-1);
S2, constructing a training data set:
based on the established TI-ADC mismatch error model, setting the parameters amp, f, N, k, E_g, E_o and E_t and selecting a series of sampling instants t to obtain a sequence of time-continuous signal data points serving as one data sample of the input data set, then repeating with multiple random parameter settings to collect the many data samples that form the training data set;
S3, constructing a fully connected neural network comprising an input layer, a hidden layer and an output layer, with the neurons of adjacent layers interconnected;
S4, training the neural network with the constructed training data set to obtain a trained neural network;
S5, testing the performance of the trained neural network: generating an error-laden TI-ADC test signal by the method of S1, feeding the test signal into the trained neural network, performing an FFT on the network output to obtain a signal spectrogram, computing evaluation metrics from the spectrogram, and judging them against a set standard to assess the network's performance; if the performance reaches the expected target, proceed to S7, otherwise proceed to S6;
S6, optimizing the neural network, namely adjusting the training parameters, and returning to S4;
S7, performing mismatch error calibration of the TI-ADC with the trained neural network.
2. The TI-ADC mismatch error calibration method based on a fully connected neural network according to claim 1, wherein in the fully connected neural network the input layer comprises 1000 neuron nodes, the hidden layer 36 neuron nodes and the output layer 1 neuron node; the structure is denoted compactly as [1000, 36, 1].
CN202310143328.8A 2023-02-21 2023-02-21 TI-ADC mismatch error calibration method based on fully connected neural network Pending CN116155283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310143328.8A CN116155283A (en) 2023-02-21 2023-02-21 TI-ADC mismatch error calibration method based on fully connected neural network


Publications (1)

Publication Number Publication Date
CN116155283A true CN116155283A (en) 2023-05-23

Family

ID=86357952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310143328.8A Pending CN116155283A (en) 2023-02-21 2023-02-21 TI-ADC mismatch error calibration method based on fully connected neural network

Country Status (1)

Country Link
CN (1) CN116155283A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117335802A (en) * 2023-10-25 2024-01-02 合肥工业大学 Pipeline analog-to-digital converter background calibration method based on neural network
CN117408315A (en) * 2023-10-25 2024-01-16 合肥工业大学 Forward reasoning module for background calibration of pipeline analog-to-digital converter



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination