CN117408315A - Forward reasoning module for background calibration of pipeline analog-to-digital converter - Google Patents


Info

Publication number
CN117408315A
Authority
CN
China
Prior art keywords
module
layer neuron
neuron unit
output
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311392418.7A
Other languages
Chinese (zh)
Other versions
CN117408315B (en)
Inventor
李龙
尹勇生
李嘉燊
宋宇鲲
邓红辉
陈红梅
孟旭
吴洛天
李牡琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202311392418.7A (granted as CN117408315B)
Publication of CN117408315A
Application granted
Publication of CN117408315B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0499: Feedforward networks
    • G06N 3/048: Activation functions
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063: Physical realisation using electronic means
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 1/00: Analogue/digital conversion; Digital/analogue conversion
    • H03M 1/10: Calibration or testing
    • H03M 1/1009: Calibration
    • H03M 3/00: Conversion of analogue values to or from differential modulation
    • H03M 3/30: Delta-sigma modulation
    • H03M 3/38: Calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a forward reasoning module for background calibration of a pipeline analog-to-digital converter, relating to the field of analog-to-digital converter calibration. The forward reasoning module carries a three-layer neural network for calibrating the pipeline analog-to-digital converter. At the front end of the neural network, the pipeline analog-to-digital converter to be calibrated and a sigma-delta ADC are connected to the same analog signal source, and the digital signal output by the pipeline analog-to-digital converter to be calibrated is fed into the neural network as the training set. The forward reasoning module comprises: a weight-bias storage unit, an input layer neuron unit, a hidden layer neuron unit and an output layer neuron unit. When the neural network is initialized, the neuron units of each layer read weight and bias data from the weight-bias storage unit and set the parameters of their neurons. The input layer neuron unit receives the signals sent by the analog-to-digital converter at the front end; the input layer neuron unit is connected to the hidden layer neuron unit, and the hidden layer neuron unit is connected to the output layer neuron unit.

Description

Forward reasoning module for background calibration of pipeline analog-to-digital converter
Technical Field
The invention relates to the technical field of analog-to-digital converter calibration, in particular to a forward reasoning module for background calibration of a pipeline analog-to-digital converter.
Background
An analog-to-digital converter (Analog to Digital Converter, ADC), as the bridge between the analog domain and the digital domain, converts the continuous signals of the analog world into the discrete signals of the digital world; it is a key component in many fields, and its performance affects the efficiency of the whole system. With the advancement of integrated-circuit technology, analog circuits have run into limitations as feature sizes scale down, chiefly the difficulty of adjusting the threshold voltage, the reduction of the supply voltage, and the reduction of intrinsic transistor gain. Together, these factors lead to reduced dynamic range, limited bandwidth and poor stability in analog circuits.
The pipeline ADC (Pipeline ADC) strikes a balance between conversion speed and precision, performs well over a resolution range of 8 bit to 16 bit and a speed range of MHz to GHz, and is therefore widely used. Although its calibration techniques have been studied extensively, the existing methods have not achieved excellent results.
In recent years, neural-network algorithms have been widely applied to building and calibrating nonlinear error models of instruments and sensors, and their performance keeps improving as model optimization is researched. Using a neural-network algorithm to calibrate ADC errors therefore has both a theoretical basis and application prospects.
Disclosure of Invention
The invention aims to address the poor error-calibration performance of existing pipeline analog-to-digital converters by providing an all-digital neural-network forward reasoning module for calibrating the pipeline analog-to-digital converter.
The technical scheme of the invention is as follows: a forward reasoning module for background calibration of a pipeline analog-to-digital converter. The forward reasoning module carries a three-layer neural network for calibrating the pipeline analog-to-digital converter. At the front end of the neural network, the pipeline analog-to-digital converter to be calibrated and a sigma-delta ADC are connected to the same analog signal source; the digital signal output by the pipeline analog-to-digital converter to be calibrated is fed into the neural network as the training set, and the output of the sigma-delta ADC serves as an element of the neural-network loss function used to iteratively optimize the network. The forward reasoning module comprises: a weight-bias storage unit, an input layer neuron unit, a hidden layer neuron unit and an output layer neuron unit;
when the neural network is initialized, the hidden layer neuron unit and the output layer neuron unit read their corresponding weight and bias data from the weight-bias storage unit and set their parameters accordingly; the input layer neuron unit receives the signals sent by the analog-to-digital converter at the front end, the input layer neuron unit is connected to the hidden layer neuron unit, and the hidden layer neuron unit is connected to the output layer neuron unit.
In any of the above technical solutions, further, the weight-bias storage unit is a ROM; a RAM is provided in each hidden layer neuron unit and each output layer neuron unit, and all the RAMs are connected to the ROM.
In any of the above solutions, further, an input register is arranged between the input layer neuron unit and the hidden layer neuron units; external signals enter the neural network through the input layer neuron unit, the input layer neuron unit writes the received signals into the input register for temporary storage, and the input register outputs the stored data to all hidden layer neuron units.
In any of the above solutions, further, the hidden layer neuron unit comprises, in addition to the RAM: a multiplier substitution module, a hidden layer activation function module and a first accumulator module; the multiplier substitution module receives the signal sent by the input layer neuron unit and outputs its result to the first accumulator module, the first accumulator module outputs the accumulation result to the hidden layer activation function module, and the hidden layer activation function module sends its calculation result to the output layer neuron unit;
the multiplier substitution module consists of a multiplexer: when the value input to the module is 0, it outputs 0; when the value input to the module is 1, it outputs the weight stored in the RAM of the hidden layer neuron unit.
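As a rough software illustration of this multiplexer-based multiplier replacement (the function and variable names are illustrative, not taken from the patent):

```python
def mux_multiply(bit, weight):
    """Multiplier replacement for a binary input: a 2-to-1 multiplexer.

    The pipeline ADC's output bits are 0 or 1, so a full multiplier is
    unnecessary: select 0 when the input bit is 0, or pass the stored
    weight straight through when the input bit is 1.
    """
    return weight if bit == 1 else 0

# The hidden-layer products for one neuron are then just selections:
bits = [1, 0, 1, 1]        # example 4-bit slice of the ADC output word
weights = [3, -5, 2, 7]    # example weights read from the neuron's RAM
products = [mux_multiply(b, w) for b, w in zip(bits, weights)]  # [3, 0, 2, 7]
```

In hardware this is a per-weight multiplexer rather than a multiplier, which is where the resource saving described above comes from.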
In any of the above solutions, further, the output layer neuron unit comprises, in addition to the RAM: a multiplier module, a second accumulator module and an output layer activation function module; the multiplier module performs signed fixed-point multiplication, receives the signals sent by the hidden layer neuron units and operates on them, then sends the operation results to the second accumulator module; the second accumulator module outputs the accumulation result to the output layer activation function module, and the output of the output layer activation function module serves as the output of the neural network.
In any of the above technical solutions, further, a hidden layer output register is included between the hidden layer neuron unit and the output layer neuron unit; the hidden layer neuron unit outputs the calculation result to a hidden layer output register for temporary storage, and the hidden layer output register outputs the stored data to all the output layer neuron units.
In any of the above solutions, further, the first accumulator module and the second accumulator module each comprise adders, registers and delay units; the number of adders equals the number of registers, and one adder together with one register forms an accumulator unit, in which the adder receives two data values, outputs one, and stores the processed result in its corresponding register; the first stage of the pipeline is provided with accumulator units numbering half the count of values fed into the accumulator module (for the first accumulator module, half the number of neural-network input layer neuron units), each subsequent stage has half as many accumulator units as the previous stage, and any non-integer result is rounded down; when the number of accumulator units in the previous stage is odd, the last accumulator unit of the previous stage is connected to a delay unit; the last stage contains only one accumulator unit, which outputs the sum for the neuron unit's activation function.
In any of the above technical solutions, further, a neuron unit of the neural network performs a multiply-accumulate operation on the input values, with the following calculation formula:

y = f(Σᵢ wᵢxᵢ + θ)

where y is the output, xᵢ the input, wᵢ the synaptic weight, θ the bias, and f the activation function used by the neuron.
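A minimal numeric sketch of this multiply-accumulate computation (names and values are illustrative only):

```python
def neuron_output(x, w, theta, f):
    """Multiply-accumulate followed by activation: y = f(sum_i w_i * x_i + theta)."""
    return f(sum(wi * xi for wi, xi in zip(w, x)) + theta)

# Binary inputs as produced by the pipeline ADC; identity activation for clarity.
y = neuron_output(x=[1, 0, 1], w=[0.5, -2.0, 1.5], theta=0.25, f=lambda s: s)
# y = 0.5 + 1.5 + 0.25 = 2.25
```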
In any of the above technical solutions, further, after the output layer neuron unit produces its result, that result and the result output by the sigma-delta ADC are used as elements of the neural-network loss function; the loss function computes a loss value, which is compared with a target value. When the loss value is above the target value, a training optimization algorithm adjusts the weights of the neural network and the network is iteratively optimized; when the loss value is less than or equal to the target value, the neural-network structure parameters are saved into the RAM of each neuron unit.
In any of the above technical solutions, further, the data width of the ROM is determined by the range of weight and bias values obtained on the offline platform, and the data depth of the ROM is determined by the total number of weights; the data width of each RAM matches that of the ROM, and the data depth of each RAM is determined by the number of neurons in the preceding stage.
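One way the sizing rule could be computed is sketched below; the fixed-point format (a sign bit, integer bits and a chosen number of fractional bits) is an illustrative assumption, not something the patent specifies:

```python
import math

def rom_dimensions(params, frac_bits):
    """Estimate ROM width/depth for fixed-point weight/bias storage.

    Width: enough two's-complement integer bits to cover the value range
    reported by the offline training platform, plus the fractional bits.
    Depth: the total number of weights and biases to be stored.
    (This sizing policy is an illustrative assumption.)
    """
    max_abs = max(abs(p) for p in params)
    int_bits = max(1, math.ceil(math.log2(max_abs + 1))) + 1  # +1 sign bit
    width = int_bits + frac_bits
    depth = len(params)
    return width, depth

# Example: four trained parameters, 12 fractional bits of precision.
width, depth = rom_dimensions([0.75, -3.2, 1.5, 2.0], frac_bits=12)
```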
The beneficial effects of the invention are as follows:
the invention adopts a digital circuit to realize a forward reasoning module of a Pipeline ADC background calibration method, and the whole adopts a Pipeline architecture, thereby improving the signal processing throughput rate; in the hidden layer neuron unit, because the input is binary, only 0 and 1 are input, and a multiplexer is used for replacing a multiplier, so that the consumption of hardware resources is reduced; an accumulator is realized by adopting a parallel pipeline architecture in the neuron, so that the signal processing rate is improved; the on-chip application of the forward reasoning module is realized by adopting a weight bias solidification storage mode.
Drawings
The advantages of the foregoing and additional aspects of the invention will become apparent and readily understood from the following description of embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a neural network block diagram of a forward reasoning module for background calibration of a pipelined analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 2 is a neural network pipeline frame diagram of a forward reasoning module for background calibration of a pipeline analog-to-digital converter, according to one embodiment of the invention;
FIG. 3 is a block diagram of a hidden layer neuron unit multiplier replacement module of a forward reasoning module for background calibration of a pipelined analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 4 is a block diagram of a hidden layer neuron element accumulator module of a forward reasoning module for background calibration of a pipelined analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 5 is a diagram of the neuron structure of a forward reasoning module for background calibration of a pipelined analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 6 is a block diagram of an output layer neuron unit accumulator module of a forward reasoning module for background calibration of a pipelined analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 7 is a block diagram of a pipelined analog-to-digital converter of a forward reasoning module for background calibration of the pipelined analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 8 is a simulation diagram of an uncalibrated pipeline analog-to-digital converter of a forward reasoning module for background calibration of the pipeline analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 9 is a simulation diagram of a pipeline analog-to-digital converter calibrated by total output of a forward reasoning module for background calibration of the pipeline analog-to-digital converter in accordance with one embodiment of the present invention;
FIG. 10 is a simulation diagram of a pipeline analog-to-digital converter calibrated by intermediate results for a forward reasoning module for background calibration of the pipeline analog-to-digital converter in accordance with one embodiment of the present invention;
fig. 11 is a simulation diagram of a forward reasoning module for background calibration of a pipelined analog-to-digital converter through a PN sequence injection calibration method in accordance with one embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, embodiments of the present invention and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and the scope of the invention is therefore not limited to the specific embodiments disclosed below.
As shown in fig. 1 and fig. 2, this embodiment provides a forward reasoning module for background calibration of a pipeline analog-to-digital converter. The forward reasoning module carries a three-layer neural network for calibrating the pipeline analog-to-digital converter; at the front end of the neural network, the pipeline analog-to-digital converter to be calibrated and a sigma-delta ADC are connected to the same analog signal source, and the digital signal output by the pipeline analog-to-digital converter to be calibrated is fed into the neural network as the training set.
The forward reasoning module comprises: the weight bias storage unit, the input layer neuron unit, the hidden layer neuron unit and the output layer neuron unit.
The weight-bias storage unit is a ROM; a RAM is arranged in each neuron unit of the hidden layer and the output layer, and the RAMs are connected to the ROM. During initialization, each RAM reads the weight and bias data from the ROM and sets the parameters of its neuron. The weights and biases are the parameters of the neural network: a weight adjusts the strength of the connection between two neurons, and a bias adjusts a neuron's activation threshold.
Specifically, the data width of the ROM memory is determined by the weight bias numerical range obtained by the offline platform, and the depth is determined by the total weight number; the data width of the RAM memory is consistent with the data width of the ROM memory, and the depth is determined by the number of neurons or input nodes of the previous stage.
The three-layer neural network carried by the forward reasoning module in this embodiment comprises an input layer, a hidden layer and an output layer, with the nodes of adjacent layers fully connected; the input layer has 14 neurons, the hidden layer 100 and the output layer 1. The number of input layer neurons is determined by the number of output bits of the pipeline analog-to-digital converter.
An input register is arranged between the input layer neuron unit and the hidden layer neuron unit, external signals are input into the neural network through the input layer neuron unit, the input layer neuron unit outputs received signals into the input register, and the input register outputs stored data to all the hidden layer neuron units.
The hidden layer neuron unit comprises, in addition to the RAM: a multiplier substitution module, an accumulator module and a hidden layer activation function module; the multiplier substitution module receives the signals sent by the input layer neuron unit and outputs its results to the accumulator module, the accumulator module outputs the accumulation result to the hidden layer activation function module, and the activation function's result is sent on to the subsequent output layer neuron unit.
Specifically, the output data of the pipeline analog-to-digital converter are binary. As shown in fig. 3, the multiplier substitution module in the hidden layer neuron unit consists of a multiplexer: when the value input to the module is 0, it outputs 0; when the value input to the module is 1, it outputs the weight stored in the RAM.
Since 0 multiplied by any number is 0, when the input value is 0 the multiplier substitution module can output 0 directly without any computation; and since the output data of the pipeline ADC are binary, containing only 0 and 1, and the input layer performs no computation, the only other possible input value is 1, for which the stored weight can be output directly, again without computation. Because the hidden layer contains many neuron units and each one includes a multiplier substitution module, this greatly reduces hardware resource consumption.
As shown in fig. 4, the accumulator module in the hidden layer neuron unit is implemented with a parallel pipeline technique. The accumulator module comprises adders, registers and delay units; the number of adders equals the number of registers and they correspond one to one, one adder together with one register forming an accumulator unit, in which the adder receives two data values, outputs one, and stores the processed result in its corresponding register. The first stage of the pipeline has half as many accumulator units as the neural network has input layer neuron units, each subsequent stage has half as many accumulator units as the previous stage, and any non-integer result is rounded down; when the number of accumulator units in the previous stage is odd, the last accumulator unit of the previous stage is connected to a delay unit; the last stage contains only one accumulator unit, which outputs the sum for use by the neuron unit's activation function module.
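The stage-by-stage halving rule, including the delay unit that carries an odd leftover value, can be checked with a short sketch (the function name and the (adders, delays) tuple layout are illustrative):

```python
def accumulator_schedule(n_inputs):
    """Per-stage layout of the parallel pipelined accumulator tree.

    Each stage holds half as many adder+register units as the number of
    values it receives (rounded down); an odd leftover value is carried
    past the stage through a delay unit. Reduction continues until a
    single accumulator unit outputs the final sum.
    """
    stages, values = [], n_inputs
    while values > 1:
        adders = values // 2
        delays = values % 2
        stages.append((adders, delays))
        values = adders + delays
    return stages

hidden = accumulator_schedule(14)   # 14 input-layer bits feed each hidden neuron
output = accumulator_schedule(100)  # 100 hidden outputs feed the output neuron
```

For 14 inputs this reproduces the 4-stage layout (7; 3 plus a delay; 2; 1) used by the hidden layer neuron units in this embodiment, and for 100 inputs the 7-stage layout of the output layer neuron unit.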
As shown in fig. 5, the neuron units in the neural network of this embodiment perform a multiply-accumulate operation on the input values, with the following calculation formula:

y = f(Σᵢ wᵢxᵢ + θ)

where y is the output, xᵢ the input, wᵢ the synaptic weight, θ the bias, and f the activation function used by the neuron unit.
In this embodiment the hidden layer neuron unit uses the satlins function as its activation function:

satlins(x) = −1 for x < −1; x for −1 ≤ x ≤ 1; 1 for x > 1;
the accumulator module is realized by adopting a 4-level parallel pipeline structure, the neural network input layer is provided with 14 neuron units, the hidden layer neuron units receive 14-bit signals and then transmit the 14-bit signals to the multiplier substitution module for processing, then the processing results are input into the accumulator module, the first level of the accumulator module comprises 7 accumulator units, the second level comprises 3 accumulator units and 1 delay unit, the third level comprises 2 accumulator units, and the fourth level comprises 1 accumulator unit.
The output layer neuron unit comprises, in addition to the RAM: a multiplier module, an accumulator module and an output layer activation function module; the multiplier module receives the signals sent by the hidden layer neuron units and operates on them, the operation results are sent to the accumulator module of the output layer neuron unit, that accumulator module outputs the accumulation result to the output layer activation function module, and the output of the output layer activation function module serves as the output of the neural network.
As shown in fig. 6, the accumulator module in the output layer neuron unit is also implemented with a parallel pipeline technique, and its structure is similar to that of the hidden layer neuron unit's accumulator module. It comprises adders, registers and delay units; the number of adders equals the number of registers and they correspond one to one, one adder together with one register forming an accumulator unit, in which the adder receives two data values, outputs one, and stores the processed result in its corresponding register. The first stage of the pipeline has half as many accumulator units as there are hidden layer neuron units feeding the output layer, each subsequent stage has half as many accumulator units as the previous stage, and any non-integer result is rounded down; when the number of accumulator units in the previous stage is odd, the last accumulator unit of the previous stage is connected to a delay unit; the last stage contains only one accumulator unit, which outputs the sum for the neuron unit's activation function.
In this embodiment the output layer neuron unit uses the purelin function as its activation function:
purelin(x) = x;
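Both activation functions are simple enough to sketch directly (satlins follows the definition of MATLAB's function of the same name; purelin is the identity):

```python
def satlins(x):
    """Symmetric saturating linear activation used by the hidden layer:
    clamps the input to [-1, 1] and is linear in between."""
    return max(-1.0, min(1.0, x))

def purelin(x):
    """Linear (identity) activation used by the output layer."""
    return x
```

The saturating shape of satlins maps naturally onto fixed-point hardware, since the output never needs more range than one signed unit.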
the accumulator module of the output layer neuron unit is realized by adopting a 7-level parallel pipeline structure, the neural network hidden layer is provided with 100 neuron units, the output layer neuron unit receives signals sent by the hidden layer neuron unit and then sends the signals to the multiplier substitution module for processing, and then the processing results are input into the accumulator module, wherein the first level of the accumulator module comprises 50 accumulator units, the second level comprises 25 accumulator units, the third level comprises 12 accumulator units and 1 delay unit, the fourth level comprises 6 accumulator units and 1 delay unit, the fifth level comprises 3 accumulator units and 1 delay unit, the sixth level comprises 2 accumulator units, and the seventh level comprises 1 accumulator unit.
A hidden layer output register is arranged between the hidden layer neuron unit and the output layer neuron unit; the hidden layer neuron unit outputs the calculation result to a hidden layer output register for temporary storage, and the hidden layer output register outputs the stored data to all the output layer neuron units.
As shown in fig. 7, the pipeline analog-to-digital converter to be calibrated has a 6-stage, 14-bit architecture in which the first 5 stages (Stage 1 to Stage 5) are 3-bit pipeline stages (with 1 bit of redundancy), the 6th stage (Stage 6) is a 4-bit flash stage, and the ideal gain of each stage is 4. The input signal is sampled and held (S/H) and then fed into the 1st stage. Each stage quantizes the residue output by the previous stage; the quantized result is converted back to an analog quantity by an MDAC (multiplying digital-to-analog converter) and subtracted from the stage input to obtain the residue of the current stage, which is amplified by the stage gain and output as the residue signal to the next stage for further quantization. Finally, the quantized data of all stages are added with the appropriate bit overlap to obtain the 14-bit binary output value; data that need temporary storage during operation are placed in registers (Reg).
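A highly simplified, idealized behavioral sketch of a single pipeline stage is given below; it ignores redundancy correction and all circuit non-idealities, and the function name, reference voltage and quantizer details are illustrative assumptions rather than the patent's model:

```python
def pipeline_stage(vin, bits=3, gain=4, vref=1.0):
    """Idealized behavioral model of one pipeline stage (illustrative only).

    Quantize the incoming residue with a coarse sub-ADC, reconstruct it
    with an MDAC, subtract it from the stage input, and amplify the
    remainder by the ideal stage gain for the next stage.
    """
    levels = 2 ** bits
    lsb = 2 * vref / levels
    code = min(levels - 1, max(0, int((vin + vref) / lsb)))  # sub-ADC decision
    vdac = code * lsb - vref                                 # MDAC reconstruction
    residue = (vin - vdac) * gain                            # amplified residue
    return code, residue

code, residue = pipeline_stage(0.3)  # one conversion step on a 0.3 V residue
```

Chaining six such stages and combining the codes with the appropriate bit overlap would give the behavioral 14-bit output described above.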
After the output result of the neural network is obtained, it and the output result of the sigma-delta ADC are used as elements of the loss function; the loss function computes a loss value, which is compared with a target value. When the loss value is above the target value, a training optimization algorithm adjusts the weights of the neural network and the network is iteratively optimized; when the loss value is less than or equal to the target value, the neural-network structure parameters are saved into the RAM of each neuron unit.
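The iterate-until-target loop can be illustrated with a deliberately tiny stand-in: a single-gain LMS update in place of the full neural-network optimizer. Everything here (names, learning rate, the one-parameter model) is illustrative, not the patent's algorithm:

```python
def lms_calibrate(raw, ref, target_mse, mu=0.1, max_iters=1000):
    """Learn one gain 'w' so that w*raw approximates the sigma-delta
    reference, stopping once the loss reaches the target value. The
    real system updates all network weights the same way: compute the
    loss against the reference ADC, compare with the target, and keep
    iterating while the loss is too high."""
    w = 0.0
    for _ in range(max_iters):
        mse = sum((w * x - r) ** 2 for x, r in zip(raw, ref)) / len(ref)
        if mse <= target_mse:
            break                       # good enough: freeze parameters
        for x, r in zip(raw, ref):
            w += mu * (r - w * x) * x   # LMS weight update
    return w

raw = [0.1, 0.5, -0.3, 0.8]
ref = [0.2, 1.0, -0.6, 1.6]  # reference samples; true relationship is ref = 2*raw
w = lms_calibrate(raw, ref, target_mse=1e-8)
```

On this toy data the loop converges to a gain near 2, after which the learned parameter would be frozen into the RAMs, mirroring the save step described above.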
The output precision of the sigma-delta ADC is 24 bit. The pipeline ADC to be calibrated and the sigma-delta ADC use the same clock source, with the sigma-delta ADC clocked at 1/1000 the frequency of the pipeline ADC; in this embodiment the sampling frequency of the pipeline ADC to be calibrated is set to 1 GHz and that of the sigma-delta ADC to 1 MHz, and the training set is aligned in time using the sigma-delta ADC samples as the reference.
In another embodiment of the present invention, a behavioral-level model of the calibration system is built for the ADC calibration module in MATLAB/Simulink; after a single-frequency sine wave is input at 1 GHz, the output spectrogram of the pipeline ADC is produced.
FIG. 8 shows the output of the uncalibrated ADC, FIG. 9 the calibrated output after training the neural network on the total output of the pipeline ADC, and FIG. 10 the calibrated output after training the neural network on the intermediate results of each pipeline stage; the abscissa of each image represents signal frequency and the ordinate signal amplitude. For the 6-stage 14-bit pipeline ADC, after calibration with the network trained on the total output, the spurious-free dynamic range (SFDR) rises from 51.6128 dB to 123.8915 dB and the effective number of bits (ENOB) from 7.00944 bit to 15.6428 bit; after calibration with the network trained on the intermediate results of each pipeline stage, the SFDR rises from 51.61 dB to 132.54 dB and the ENOB from 7.01 bit to 17.12 bit. Output accuracy higher than that of the input data is obtained, and the calibration effect is excellent.
As shown in fig. 11, the same uncalibrated ADC was also calibrated with a conventional PN-sequence injection calibration method, which is based on a feedback calibration architecture and achieves error extraction through LMS iteration before calibrating the ADC. After calibration by the PN-sequence injection method, the ENOB is 12.73 bit and the SFDR reaches 90.10 dB; the method thus calibrates effectively, but its performance is clearly inferior to that of the neural-network ADC calibration using the forward reasoning module provided by the invention.
In summary, the invention provides a forward reasoning module for background calibration of a pipeline analog-to-digital converter. The forward reasoning module carries a three-layer neural network for calibrating the pipeline analog-to-digital converter; at the front end of the neural network, the pipeline analog-to-digital converter to be calibrated and a sigma-delta ADC are connected to the same analog signal source, and the digital signal output by the pipeline analog-to-digital converter to be calibrated is fed into the neural network as the training set. The forward reasoning module comprises: a weight-bias storage unit, an input layer neuron unit, a hidden layer neuron unit and an output layer neuron unit.
When the neural network is initialized, the hidden layer neuron unit and the output layer neuron unit read their weight and bias data from the weight-bias storage unit and set the neuron parameters accordingly; the input layer neuron unit receives the signal sent by the front-end analog-to-digital converter, the input layer neuron unit is connected to the hidden layer neuron unit, and the hidden layer neuron unit is connected to the output layer neuron unit.
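As a behavioural reference for this dataflow (parameters loaded at initialization, then input layer → hidden layer → output layer), a minimal floating-point sketch; the layer sizes, tanh hidden activation, and linear output are assumptions, not the patent's fixed-point hardware:

```python
import math

def forward(x, w_h, b_h, w_o, b_o):
    """Three-layer forward pass: input vector -> hidden layer -> output layer.
    w_h[j] holds hidden neuron j's weights; w_o likewise for the output layer."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_h, b_h)]
    return [sum(wi * hi for wi, hi in zip(w, hidden)) + b
            for w, b in zip(w_o, b_o)]

# toy network: 2 inputs -> 2 hidden neurons -> 1 output
y = forward([1.0, 0.0],
            w_h=[[0.5, -0.5], [0.25, 0.25]], b_h=[0.0, 0.0],
            w_o=[[1.0, 1.0]], b_o=[0.0])
```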
The steps of the invention can be reordered, combined, or deleted according to actual requirements.
The units of the device can be combined, divided, or deleted according to actual requirements.
Although the invention has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and is not intended to limit the invention. The scope of the invention is defined by the appended claims and embraces various modifications, alterations, and equivalents made without departing from its scope and spirit.

Claims (10)

1. A forward reasoning module for background calibration of a pipelined analog-to-digital converter, the forward reasoning module carrying a three-layer neural network for calibrating the pipelined analog-to-digital converter, wherein the pipelined analog-to-digital converter to be calibrated and a sigma-delta ADC are connected to the same analog signal source at the front end of the neural network, the digital signal output by the pipelined analog-to-digital converter to be calibrated is input into the neural network as a training set, and the output of the sigma-delta ADC is used as an element of the neural network loss function for iteratively optimizing the neural network; the forward reasoning module characterized by comprising: a weight-bias storage unit, an input layer neuron unit, a hidden layer neuron unit, and an output layer neuron unit;
when the neural network is initialized, the hidden layer neuron unit and the output layer neuron unit read their corresponding weight and bias data from the weight-bias storage unit and set the parameters of the hidden layer neuron unit and the output layer neuron unit accordingly; the input layer neuron unit receives the signal sent by the front-end analog-to-digital converter, the input layer neuron unit is connected to the hidden layer neuron unit, and the hidden layer neuron unit is connected to the output layer neuron unit.
2. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 1, wherein the weight-bias storage unit is a ROM memory, a RAM memory is present within each of the hidden layer neuron unit and the output layer neuron unit, and every RAM memory is connected to the ROM memory.
3. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 2, further comprising an input register between the input layer neuron unit and the hidden layer neuron unit; an external signal is input into the neural network through the input layer neuron unit, the input layer neuron unit outputs the received signal to the input register, and the input register outputs the stored data to all hidden layer neuron units.
4. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 2, wherein the hidden layer neuron unit comprises, in addition to the RAM memory: a multiplier substitution module, a hidden layer activation function module, and a first accumulator module; the multiplier substitution module receives the signal sent by the input layer neuron unit and outputs a signal to the first accumulator module, the first accumulator module outputs the accumulation result to the hidden layer activation function module, and the hidden layer activation function module sends the calculation result to the output layer neuron unit;
the multiplier substitution module consists of a multiplexer: when the value input into the multiplier substitution module is 0, the module outputs 0; when the value input into the multiplier substitution module is 1, the module outputs the weight stored in the RAM memory of the hidden layer neuron unit.
5. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 4, wherein the output layer neuron unit comprises, in addition to the RAM memory: a multiplier module, a second accumulator module, and an output layer activation function module; the multiplier module performs signed fixed-point multiplication: it receives the signals sent by the hidden layer neuron unit, operates on them, and sends the operation results to the second accumulator module; the second accumulator module outputs the accumulation result to the output layer activation function module, and the output of the output layer activation function module serves as the output of the neural network.
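Claim 5's multiplier operates on signed fixed-point numbers; a sketch of such a multiply in an assumed Q-format with 15 fractional bits and round-half-up rescaling (the format and rounding mode are illustrative choices, not specified by the claim):

```python
def q_mul(a, b, frac_bits=15):
    """Multiply two signed fixed-point integers with frac_bits fraction bits,
    rescaling the double-width product back to the same format (round half up)."""
    prod = a * b                          # product carries 2*frac_bits fraction bits
    return (prod + (1 << (frac_bits - 1))) >> frac_bits

def to_q(x, frac_bits=15):
    """Encode a float as a signed fixed-point integer."""
    return round(x * (1 << frac_bits))

def from_q(v, frac_bits=15):
    """Decode a signed fixed-point integer back to a float."""
    return v / (1 << frac_bits)

r = from_q(q_mul(to_q(0.5), to_q(-0.25)))   # 0.5 * -0.25 in fixed point
```

Python's `>>` floors on negative integers, so adding half an LSB before shifting yields round-half-up behaviour for both signs.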
6. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 5, further comprising a hidden layer output register between said hidden layer neuron unit and said output layer neuron unit; the hidden layer neuron unit outputs its calculation result to the hidden layer output register for temporary storage, and the hidden layer output register outputs the stored data to all output layer neuron units.
7. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 5, wherein the first accumulator module and the second accumulator module each comprise adders, registers, and a delay unit; the number of adders equals the number of registers, and each adder together with a register forms an accumulator unit, wherein the adder receives two data inputs, outputs one datum, and stores the processed datum in the corresponding register; the first pipeline stage is provided with a number of accumulator units equal to half the number of input layer neuron units of the neural network, each subsequent stage has half as many accumulator units as the previous stage, and the count is rounded down when it is not an integer; when the number of accumulator units in the previous stage is odd, the last accumulator unit of the previous stage is connected to a delay unit; the last stage comprises only one accumulator unit, which outputs the sum for the activation function of the neuron unit.
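The accumulator of claim 7 is a pipelined adder tree: each stage holds half as many accumulator units as the one before (rounded down), and an odd leftover operand is forwarded through a delay unit. A cycle-agnostic behavioural sketch of that reduction:

```python
def adder_tree_sum(values):
    """Pairwise-reduce a list the way claim 7's accumulator stages do:
    each stage provides floor(n/2) adders; with an odd operand count,
    the last operand is 'delayed' (passed through) to the next stage."""
    level = list(values)
    while len(level) > 1:
        nxt = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd count: delay unit forwards the last operand
            nxt.append(level[-1])
        level = nxt
    return level[0]                 # final stage: a single accumulator unit

assert adder_tree_sum([1, 2, 3, 4, 5]) == 15
```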
8. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 1, wherein a neuron unit of the neural network performs a multiply-accumulate operation on its input values, the neuron being computed as
y = f( Σᵢ wᵢ·xᵢ + θ )
wherein y is the output, xᵢ is an input, wᵢ is the corresponding synaptic weight, θ is the bias, and f is the activation function used by the neuron.
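Assuming the usual multiply-accumulate form y = f(Σᵢ wᵢxᵢ + θ) implied by the listed symbols, the claim-8 neuron reduces to a one-liner (the tanh activation is an illustrative choice):

```python
import math

def neuron(x, w, theta, f=math.tanh):
    """Multiply-accumulate then activate: y = f(sum(w_i * x_i) + theta)."""
    return f(sum(wi * xi for wi, xi in zip(w, x)) + theta)

y = neuron([1.0, -1.0], [0.5, 0.5], theta=0.5)
```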
9. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 2, wherein, after the output layer neuron unit produces its result, the output result and the result output by the sigma-delta ADC are used as elements of the neural network loss function; the loss function computes a loss value, the loss value is compared with a target value, and when the loss value is higher than the target value, a training optimization algorithm adjusts the weights of the neural network so as to iteratively optimize the neural network; when the loss value is smaller than or equal to the target value, the neural network structure parameters are saved into the RAM memory of each neuron unit.
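Claim 9's train-until-threshold loop, sketched with a scalar gain model and plain gradient descent standing in for the unspecified "training optimization algorithm" (all specifics here are assumptions):

```python
def calibrate(inputs, reference, w=0.0, lr=0.1, target=1e-6, max_iter=10000):
    """Iterate until the MSE between the model output and the sigma-delta
    reference falls to the target, then return the frozen parameter."""
    n = len(inputs)
    loss = float("inf")
    for _ in range(max_iter):
        loss = sum((w * x - r) ** 2 for x, r in zip(inputs, reference)) / n
        if loss <= target:                 # claim 9: stop and store parameters
            return w, loss
        grad = sum(2 * (w * x - r) * x for x, r in zip(inputs, reference)) / n
        w -= lr * grad                     # gradient-descent weight update
    return w, loss

xs = [0.1, 0.5, -0.3, 0.8]
refs = [2.0 * x for x in xs]               # reference implies a gain of 2.0
w_final, final_loss = calibrate(xs, refs)
```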
10. The forward reasoning module for background calibration of a pipelined analog-to-digital converter of claim 2, wherein the data width of the ROM memory is determined by the range of the weight and bias values acquired by the offline platform, and the data depth of the ROM memory is determined by the total number of weights; the data width of the RAM memory is identical to that of the ROM memory, and the data depth of the RAM memory is determined by the number of neurons in the previous stage.
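Claim 10's memory sizing can be sketched as simple arithmetic; the signed fixed-point width formula and the fully connected weight-plus-bias count are assumptions consistent with a three-layer network, not figures from the patent:

```python
import math

def rom_width(max_abs_value, frac_bits):
    """Signed fixed-point width for the largest weight/bias magnitude:
    sign bit + integer bits + fractional bits."""
    int_bits = max(0, math.ceil(math.log2(max_abs_value + 1)))
    return 1 + int_bits + frac_bits

def rom_depth(n_in, n_hidden, n_out):
    """Total weights + biases of a 3-layer fully connected network."""
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

width = rom_width(3.7, frac_bits=12)       # e.g. weights bounded by |w| < 3.7
depth = rom_depth(n_in=14, n_hidden=32, n_out=1)
```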
CN202311392418.7A 2023-10-25 2023-10-25 Forward reasoning module for background calibration of pipeline analog-to-digital converter Active CN117408315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311392418.7A CN117408315B (en) 2023-10-25 2023-10-25 Forward reasoning module for background calibration of pipeline analog-to-digital converter


Publications (2)

Publication Number Publication Date
CN117408315A true CN117408315A (en) 2024-01-16
CN117408315B CN117408315B (en) 2024-06-25

Family

ID=89499589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311392418.7A Active CN117408315B (en) 2023-10-25 2023-10-25 Forward reasoning module for background calibration of pipeline analog-to-digital converter

Country Status (1)

Country Link
CN (1) CN117408315B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205139973U * 2015-10-26 2016-04-06 中国人民解放军军械工程学院 BP neural network construction based on FPGA devices
CN107862374A * 2017-10-30 2018-03-30 中国科学院计算技术研究所 Pipeline-based neural network processing system and processing method
CN115296666A (en) * 2022-08-12 2022-11-04 中国人民解放军国防科技大学 Analog-to-digital conversion circuit and error calibration method for memristor
CN115425976A (en) * 2022-09-20 2022-12-02 重庆邮电大学 ADC sampling data calibration method and system based on FPGA
CN116015292A (en) * 2023-02-15 2023-04-25 电子科技大学 ADC calibration method based on fully-connected neural network
CN116155283A (en) * 2023-02-21 2023-05-23 电子科技大学 TI-ADC mismatch error calibration method based on fully connected neural network
CN116760412A (en) * 2023-07-06 2023-09-15 电子科技大学 Time interleaving ADC calibrator based on ANN


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUGUO XIANG等: "A neural network based background calibration for pipelined-SAR ADCs at low hardware cost", 《ELECTRONICS LETTERS》, vol. 59, no. 15, 31 August 2023 (2023-08-31), pages 1 - 3 *
LI Jiashen et al.: "A Survey of Neural-Network-Based Digital Calibration Techniques", Microelectronics (《微电子学》), vol. 52, no. 2, 30 April 2022 (2022-04-30), pages 191 - 196 *

Also Published As

Publication number Publication date
CN117408315B (en) 2024-06-25

Similar Documents

Publication Publication Date Title
US7205921B1 (en) Hybrid analog-to-digital converter
CN107437944B (en) Capacitive successive approximation analog-to-digital converter and self-calibration method thereof
WO2020029551A1 (en) Multiplication and accumulation calculation method and calculation circuit suitable for neural network
CN108134606B (en) Assembly line ADC based on digital calibration
CN104158545A (en) Successive approximation register analog-to-digital converter based on voltage-controlled oscillator quantization
CN111694544B (en) Multi-bit multiplexing multiply-add operation device, neural network operation system, and electronic apparatus
US20090296858A1 (en) Dem system, delta-sigma a/d converter, and receiver
CN109120263B (en) Successive approximation analog-digital converter based on digital modulation correction
CN110768671B (en) Off-chip calibration method and system for successive approximation type analog-to-digital converter
KR20220066396A (en) Consecutive Bit-Ordered Binary Weighted Multiplier-Accumulator
CN117408315B (en) Forward reasoning module for background calibration of pipeline analog-to-digital converter
CN109462399B (en) Background capacitance mismatch calibration method suitable for successive approximation analog-to-digital converter
CN214154487U (en) Analog-to-digital converter digital calibration circuit based on dynamic unit matching
CN103840833A (en) Analog-digital conversion circuit of infrared focal plane array reading circuit
McDanel et al. Saturation rram leveraging bit-level sparsity resulting from term quantization
CN106788435B (en) Quantify sampling noise-reduction method
CN112529171A (en) Memory computing accelerator and optimization method thereof
CN116015292A (en) ADC calibration method based on fully-connected neural network
CN117335802B (en) Pipeline analog-to-digital converter background calibration method based on neural network
CN113378109B (en) Mixed base fast Fourier transform calculation circuit based on in-memory calculation
CN1677869A (en) Pipeline type analog-to-digital converter capable of conducting back ground correction
US20060222128A1 (en) Analog signals sampler providing digital representation thereof
CN113114262B (en) Efficient direct function mapping analog-to-digital conversion circuit
CN113704139B (en) Data coding method and in-memory computing method for in-memory computing
CN115906735B (en) Multi-bit number storage and calculation integrated circuit, chip and calculation device based on analog signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant