CN118095384A - Hardware circuit calibration method and device based on machine learning - Google Patents

Hardware circuit calibration method and device based on machine learning

Info

Publication number
CN118095384A
Authority
CN
China
Prior art keywords
neural network
hardware circuit
calibration
output
training
Prior art date
Legal status
Pending
Application number
CN202410269997.4A
Other languages
Chinese (zh)
Inventor
毛国梁
洪思昂
Current Assignee
Nanjing Hongtai Semiconductor Technology Co ltd
Original Assignee
Nanjing Hongtai Semiconductor Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Hongtai Semiconductor Technology Co., Ltd.
Priority to CN202410269997.4A
Publication of CN118095384A

Classifications

    • G06N3/063 — Physical realisation of neural networks using electronic means
    • G01R31/2834 — Automated test systems [ATE]; using microprocessors or computers
    • G01R31/2846 — Fault-finding or characterising using simulation or knowledge-based systems, e.g. expert systems, artificial intelligence
    • G01R31/2879 — Environmental, reliability or burn-in testing related to electrical aspects, e.g. voltage or current supply
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/04 — Neural network architecture, e.g. interconnection topology
    • G06N3/08 — Learning methods
    • G06N3/084 — Backpropagation, e.g. using gradient descent
    • G06N3/0985 — Hyperparameter optimisation; meta-learning; learning-to-learn


Abstract

The invention relates to a machine-learning-based hardware circuit calibration method and device, applied in the field of semiconductor testing, comprising the following steps. S1: acquire the input and output values of a device within its specified working range, at different temperatures and in different gears, as training samples. S2: once enough training samples have been collected from the device, preprocess and normalize them and feed them into a neural network for training, obtaining the parameters of the trained neural network model. S3: according to the device's different outputs or measurements, select the corresponding parameters as inputs to the neural network and compute the required correction result through the model. The method simultaneously learns the output mapping for every gear and every input across temperature; it achieves parameter calibration of both the linear and nonlinear output working regions for all inputs over the full gear range; and it solves the original calibration method's susceptibility to temperature.

Description

Hardware circuit calibration method and device based on machine learning
Technical Field
The present invention relates to a method and apparatus for calibrating a circuit, and more particularly to a machine-learning-based hardware circuit calibration method and device applied in the field of semiconductor testing.
Background
Integrated circuit test equipment (Automatic Test Equipment, ATE) contains many devices that need calibration, such as voltage sources, current sources and arbitrary waveform generators, which are mainly built from ADCs, DACs, operational amplifiers and similar circuits. These devices must supply accurate voltage, current and other signals to the device under test (Device Under Test, DUT), but because their ADC, DAC, operational-amplifier and other components introduce linear or nonlinear errors, the raw circuit device cannot by itself deliver the required high-precision signal generation, and each device must be specially calibrated before actual use.
In the conventional calibration method, a circuit device built from ADCs, DACs, operational amplifiers and the like is treated as a linear signal-generating device. Linear regression yields the gain and offset of the linear system formed by the device's input and output under the precision requirement, and the calibrated output y is computed from the input x as y = gain · x + offset. This method has the following problems:
1. The actual circuit device can be regarded as linear only within part of the output signal's parameter region, for example the 10%-90% voltage interval of each gear of a voltage source. Nonlinearity exists at both ends of a gear, so the calibrated device cannot make full use of the parameter region of each output or input gear.
2. Each device often has multiple operating modes, each containing several gears. The conventional approach calibrates the device with only a gain and an offset. Its drawback is that environmental factors (such as temperature) affect the device: if the device is calibrated with gain and offset at some temperature T1, the corrected output accuracy is hard to guarantee when the ambient temperature changes during actual use.
3. If the device is calibrated with separate gain and offset values for different temperature ranges, gain and offset tables spanning both the temperature and gear dimensions are generated. Although usable, this requires a temperature-controlled environment that is difficult to build, takes a long time, and produces a large amount of data that is troublesome to store and use.
Disclosure of Invention
In view of the prior art, the invention addresses the following technical problems: how to obtain effective raw calibration data; how to build a suitable neural network model; how to achieve a data fit whose predicted output accuracy reaches one part in ten thousand across all gears and temperature ranges; and how to make neural network calibration eliminate the influence of environmental factors (such as temperature) on the device output.
In order to solve the above problems, the present invention provides a hardware circuit calibration method based on machine learning, comprising the following steps:
S1, acquiring the input and output values of a device within its specified working range, at different temperatures and in different gears, as training samples, where one record in the training samples comprises: the user-specified expected value, gear, temperature, the hardware circuit's final actual output, the hardware circuit's measured value and the multimeter's measured value;
S2, after enough training samples have been obtained from the device, preprocessing and normalizing the samples and feeding them into a neural network for training, thereby obtaining the parameters of the trained neural network model;
S3, according to the device's different outputs or measurements, selecting the corresponding parameters as inputs to the neural network, and obtaining the required correction result through computation of the neural network model.
In this machine-learning-based hardware circuit calibration method, the original calibration approach ignores environmental factors (temperature) and is clearly affected by them: calibration performed at one temperature deviates significantly from actual values at other temperatures. Calibrating with a neural network effectively reduces the influence of environmental factors and corrects the output to within the expected accuracy range.
In S2, the neural network model predicts on the input samples; the predictions are compared with the output label values and the error is back-propagated, and training ends once the training accuracy meets the requirement.
As a further supplement to the application, the method includes a calibration-output state. When calibrating output, one record of the training sample in S1 comprises: the user-specified expected value, gear, temperature and the hardware circuit's final actual output value. The inputs of the neural network model are then: the hardware circuit's actual output value, gear and temperature; the output of the neural network is: the user-specified expected value.
As a further supplement to the application, the method includes a calibration-measurement state. When calibrating measurement, one record of the training sample in S1 comprises: the hardware circuit's measured value, gear, temperature and the multimeter's measured value. The neural network model inputs are then: the hardware circuit's measured value, gear and temperature; the neural network model output is: the multimeter's measured value.
The invention also provides a machine-learning-based hardware circuit calibration device comprising an upper computer (ATE) whose software control end carries a neural network model for implementing the above hardware circuit calibration method.
As a further improvement of the application, a machine-learning-based hardware circuit calibration device comprises a hardware circuit FPGA in which a neural network model is deployed for implementing the above hardware circuit calibration method.
In summary, the invention realizes a new calibration method through neural network machine learning. Compared with existing calibration methods, it has the following advantages:
1. The output mapping for every input across all gears and temperatures can be learned simultaneously;
2. Parameter calibration of both the linear and nonlinear output working regions for all inputs over the full gear range can be realized;
3. The susceptibility of the original calibration method to temperature is eliminated.
Drawings
FIG. 1 is a schematic diagram of a neural network model calibration hardware circuit data flow according to an embodiment of the present application;
Fig. 2 is a schematic diagram of calibrating the board in two different manners according to the embodiment of the present application, where (a) is Force value= -0.05 and (b) is Force value=0.45;
FIG. 3 is a flowchart of the neural network calibration of embodiment 1 of the present application;
FIG. 4 is a neural network model training flowchart according to embodiment 1 of the present application;
FIG. 5 is a flow chart of forward propagation and backward propagation of an embodiment of the present application;
FIG. 6 is a flowchart of a hardware control circuit of an upper computer according to an embodiment of the present application;
FIG. 7 is a flow chart of correcting the circuit device's output with the calibrated neural network parameters according to an embodiment of the present application, wherein (a) is applied in a software calibration manner and (b) is applied in the board FPGA;
FIG. 8 is a flow chart of correcting the circuit device's measurement with the calibrated neural network model according to an embodiment of the present application, wherein (a) is applied in a software calibration manner and (b) is applied in the board FPGA;
FIG. 9 is a flow chart of data collection according to an embodiment of the present application;
FIG. 10 is a comparison of the device used in embodiment 3 of the present application using two calibration modes, where (a) is a full gear and (b) is an enlarged nonlinear segment;
fig. 11 is a calibration chart of board card measurement for 1V (a) and 10V (b) gear positions in two different ways according to the embodiment of the present application, wherein (a) Force value=0.45V and (b) Force value= -8.5V.
Detailed Description
The following describes 4 embodiments of the present application in detail with reference to the accompanying drawings.
Embodiment 1:
To facilitate understanding of the invention, this embodiment takes neural network calibration and use of a board-card voltage source as an example, and explains the four stages shown in the neural network calibration flowchart of fig. 3 (data acquisition, model training, parameter acquisition and calibration correction) separately, as follows:
s1, data acquisition
Referring to fig. 9, this stage is implemented as follows:
The required parameters of the board card to be calibrated are prepared, including the gears, the number of sampling points per gear, the upper and lower value limits of each gear, the number of cycles, and so on.
While varying the ambient temperature, the sampling test project is executed cyclically according to the set parameters; the expected value, gear, temperature and actual output value (i.e. the multimeter's measured value) are recorded during execution.
The data required for machine learning are then exported to a CSV file in an agreed format.
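The export step above can be sketched as follows; this is a hypothetical illustration — the file name, column names and sample values are assumptions, not specified by the patent:

```python
import csv

def export_samples(samples, path="calib_samples.csv"):
    # one row per recorded sample, in an agreed column order:
    # expected value (Force), gear, temperature, multimeter reading
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["force", "gear", "temperature", "multimeter"])
        writer.writerows(samples)

export_samples([(-0.05, 7000, 27.91, -0.04998),
                (0.45, 7000, 27.93, 0.45008)])
```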
S2, model training
Referring to fig. 4, this stage is implemented as follows:
In the concrete implementation, the neural network model adopts a three-layer fully connected structure: all nodes are linked linearly (fully connected), and an activation function is inserted between the input layer and the hidden layer.
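A minimal NumPy sketch of such a three-layer fully connected network follows. The ReLU activation, the hidden width of 16 and the random initialization are assumptions; the patent fixes none of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 16                        # 3 inputs: value, gear, temperature
W1 = rng.normal(size=(n_in, n_hidden)) * 0.1  # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1)) * 0.1     # hidden -> output weights
b2 = np.zeros(1)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)          # activation only between input and hidden
    return h @ W2 + b2                        # single linear output node, no activation

y_pred = forward(np.array([[0.45, 7000.0, 27.9]]))
```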
First, the training data obtained in the previous stage are loaded and preprocessed: the StandardScaler method converts each input feature into a distribution with mean 0 and standard deviation 1.
The formula is:
z = (x − μ) / σ
where μ denotes the mean of the data and σ its standard deviation.
After preprocessing, the collected data are randomly divided into a training set and a test set in an 8:2 ratio; the test-set data do not participate in the training process.
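The preprocessing and split can be sketched in NumPy; the standardization below reproduces what sklearn's StandardScaler computes, and the synthetic data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(100, 3))        # synthetic [value, gear, temperature] rows
y = X @ np.array([1.0, 0.001, 0.01]) + 0.05  # synthetic targets

mu, sigma = X.mean(axis=0), X.std(axis=0)
X_std = (X - mu) / sigma                     # z = (x - mu) / sigma, mean 0 / std 1 per column

idx = rng.permutation(len(X))                # random 8:2 train/test split
split = int(0.8 * len(X))
X_train, X_test = X_std[idx[:split]], X_std[idx[split:]]
y_train, y_test = y[idx[:split]], y[idx[split:]]
```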
The hyperparameters of the neural network are set to a learning rate of 0.001 and a training batch size (batch_size) of 64. An optimizer trains on the training-set data; during training, predictions are made on the test-set data and the accuracy of the predicted values is computed. This process is shown in fig. 5.
During model training, a loss function describes the quality of the fit. When using a neural network for calibration, however, the greater concern is whether all training results fall within the expected accuracy range. Therefore, during training the accuracy between each predicted value from the current model and its expected value is computed and checked against the pass accuracy required for the gear in question; the ratio of passing samples to the size of the test set (called the pass rate here) serves as one of the training stop conditions.
During training, the loss function and the pass rate are computed in each iteration; over the iterations the loss function keeps decreasing and the pass rate keeps improving.
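An illustrative training loop tracking the two quantities described above, the loss and the pass rate. A one-layer linear model stands in for the full network to keep the sketch short, and the accuracy threshold is an assumed value; only the learning rate of 0.001 and batch size of 64 come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # synthetic [value, gear, temp] features
y = X @ np.array([0.5, -0.2, 0.1]) + 0.05      # synthetic ground-truth mapping

w, b = np.zeros(3), 0.0
lr, batch_size, tol = 0.001, 64, 1e-3          # lr and batch size per the text; tol assumed

pass_rate, loss = 0.0, np.inf
for step in range(30000):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    err = X[idx] @ w + b - y[idx]
    loss = np.mean(err ** 2)                   # loss computed each iteration
    w -= lr * 2 * X[idx].T @ err / batch_size  # gradient step (backpropagation)
    b -= lr * 2 * err.mean()
    # pass rate: fraction of predictions within the accuracy threshold
    pass_rate = np.mean(np.abs(X @ w + b - y) < tol)
    if pass_rate >= 0.999:                     # one of the training stop conditions
        break
```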
S3, parameter acquisition
After model training is completed, the parameters of the neural network are saved to a document.
Since training uses normalized data, directly saving the model parameters (weights, biases, etc.) would also require saving the normalization parameters in actual use (for StandardScaler normalization, the mean and standard deviation must be kept). The neural network's input would then have to be normalized before forward propagation, and the forward-propagation result inverse-normalized afterwards.
And there are two methods for actual calibration correction: software calibration and hardware FPGA calibration.
Considering the storage of the neural network model parameters, the FPGA resources they occupy, and convenience of later use, the parameters are transformed as follows to obtain neural network model parameters that take raw-scale data as input and output. The normalization parameters then no longer need to be saved, and no normalization or inverse-normalization calculation is needed in actual use.
Taking a three-layer neural network as an example, let the input layer have n nodes. According to the forward-propagation formula, when StandardScaler is used, the output h_j of hidden-layer node j is:

h_j = f( Σᵢ w⁽¹⁾ᵢⱼ (xᵢ − μᵢ)/σᵢ + b⁽¹⁾ⱼ ) = f( Σᵢ (w⁽¹⁾ᵢⱼ/σᵢ) xᵢ + b⁽¹⁾ⱼ − Σᵢ w⁽¹⁾ᵢⱼ μᵢ/σᵢ )

Similarly, since the output layer has a single node and no activation function, let the hidden layer have m nodes; the inverse-normalized output y is:

y = σ_y ( Σⱼ w⁽²⁾ⱼ hⱼ + b⁽²⁾ ) + μ_y

Thus the four folded model parameters are (StandardScaler):

w′⁽¹⁾ᵢⱼ = w⁽¹⁾ᵢⱼ/σᵢ,  b′⁽¹⁾ⱼ = b⁽¹⁾ⱼ − Σᵢ w⁽¹⁾ᵢⱼ μᵢ/σᵢ,  w′⁽²⁾ⱼ = σ_y w⁽²⁾ⱼ,  b′⁽²⁾ = σ_y b⁽²⁾ + μ_y
In this way, neural network model parameters that take raw-scale data as input and output are obtained; they are saved and exported in document form.
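The parameter folding can be checked numerically. The sketch below assumes a ReLU activation and arbitrary layer sizes, and verifies that the folded parameters reproduce the normalize-forward-denormalize pipeline exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 8                                   # input and hidden sizes (arbitrary)
W1, b1 = rng.normal(size=(n, m)), rng.normal(size=m)
W2, b2 = rng.normal(size=(m, 1)), rng.normal(size=1)
mu_x = rng.normal(size=n)                     # StandardScaler statistics
sigma_x = rng.uniform(0.5, 2.0, size=n)
mu_y, sigma_y = 0.3, 1.7

def net_with_scaling(x):
    z = (x - mu_x) / sigma_x                  # normalize input
    h = np.maximum(0.0, z @ W1 + b1)
    return (h @ W2 + b2) * sigma_y + mu_y     # inverse-normalize output

# folded parameters taking raw data as input and output
W1f = W1 / sigma_x[:, None]
b1f = b1 - (mu_x / sigma_x) @ W1
W2f = W2 * sigma_y
b2f = b2 * sigma_y + mu_y

def net_folded(x):
    h = np.maximum(0.0, x @ W1f + b1f)
    return h @ W2f + b2f

x = rng.normal(size=(5, n))
# the two networks agree to floating-point precision
```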
S4, calibrating and correcting
In this embodiment, the neural network model is applied in a software calibration manner.
As shown in the flowchart of fig. 7(a), when a user instruction is executed, a software correction layer is inserted before the original instruction is sent to the circuit device, as follows:
The software first reads the document to load the neural network parameters. Before each instruction of the test project is issued, the current temperature is read once; the expected output (Force value), gear and temperature contained in the original user instruction are taken as the neural network's inputs, and forward propagation computes the neural-network-corrected output value (i.e. the value actually input to the circuit device).
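A sketch of this software correction layer, with parameter loading and the temperature read stubbed out; all names are illustrative, as the patent specifies no API:

```python
import numpy as np

def load_params(path="nn_params.txt"):
    # stand-in for reading the exported parameter document (values are dummies)
    rng = np.random.default_rng(7)
    return (rng.normal(size=(3, 8)) * 0.1, np.zeros(8),
            rng.normal(size=(8, 1)) * 0.1, np.zeros(1))

def read_temperature():
    # stand-in for reading the board's temperature sensor
    return 27.9

def corrected_output(force_value, gear, W1, b1, W2, b2):
    # inputs: expected output (Force value), gear, current temperature
    x = np.array([force_value, gear, read_temperature()])
    h = np.maximum(0.0, x @ W1 + b1)          # forward propagation (ReLU assumed)
    return float((h @ W2 + b2)[0])            # value actually sent to the circuit device

W1, b1, W2, b2 = load_params()
val = corrected_output(0.45, 7000, W1, b1, W2, b2)
```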
For comparison, the board card is also calibrated with the gain-and-offset method; the per-gear gain and offset are not written into the board's FPGA, so the two calibration schemes are compared under the same software calibration setup.
As shown in fig. 2, the horizontal axis is temperature (°C) and the vertical axis is output voltage (measured value, V), showing the output voltage measured at two output settings of the board card's 1 V gear after calibration by the two methods at different temperatures. Calibration with a neural network largely corrects the effect of temperature on the circuit device and keeps the output accuracy within the expected range.
Embodiment 2:
In contrast to embodiment 1, in this embodiment the FPGA itself uses the neural network model to process the user instructions sent by the upper computer, i.e. the circuit device is corrected with the neural network model directly at the FPGA level.
The entire calibration procedure corresponds to the flow chart of fig. 3 shown in example 1. In the calibration correction step, referring to the flowchart shown in fig. 7 (b), the neural network model parameters are transferred to the FPGA in advance when the circuit device is powered up.
When the upper computer sends a user instruction to the testing equipment, the FPGA reads the temperature, takes the gear included in the instruction and the expected output as the input value of the neural network, obtains the input of the corrected circuit device through forward propagation calculation, and controls the circuit device to perform specified output.
The parameter acquisition of S3 in embodiment 1 greatly reduces the FPGA's storage and computation overhead, because the normalization parameters are already folded into the neural network model parameters obtained from training.
Embodiment 3:
To better verify and illustrate the beneficial effects of the adopted technical method, the circuit device selected in this embodiment exhibits a clearly nonlinear output relationship over part of its operating gear, so as to verify that this nonlinear output relationship can be calibrated and corrected.
When the board is calibrated with the original gain-and-offset method, only its linear segment can be roughly corrected; the nonlinear segment still fails to meet the expected precision under gain-and-offset calibration.
Similar to example 1, after 4 steps of data acquisition, model training, parameter acquisition and calibration correction, neural network model parameters are obtained and used to correct the device, and the original gain and offset modes are also used to correct the output calibration of the device.
As shown in fig. 10(a), calibration of the board card with the Gain-Offset method is compared against calibration with the neural network; the abscissa is the desired output (user input) and the ordinate is the actual output (multimeter measurement). Because the Gain-Offset approach is still a linear correction of the output, the device's pronounced nonlinear output relationship between 0 and 6 V in the forward direction cannot be corrected by that calibration. In fig. 10(b) the plotted output range is narrowed, making it clearer that after applying the neural network model adopted in this method, the circuit device's inherent nonlinear output relationship is well calibrated and corrected.
Embodiment 4:
In this embodiment, similar to embodiment 1, the neural network is used to calibrate each gear of the circuit device during measurement. The flow follows the neural network calibration flowchart of fig. 3; the difference from embodiment 1 is that, because the measurement gears are being calibrated, the training data and the model's inputs and outputs differ.
The data recorded in one training sample during S1 data acquisition become: the board card's self-test value (i.e. the hardware circuit's measured value), gear, temperature and the multimeter's measured value. In S2 model training, the board card's self-test value, gear and temperature serve as the neural network model's inputs, and the multimeter's measured value serves as its output.
Then, in the calibration correction at S4, the measured value is subjected to the calibration correction using the flow shown in fig. 8. As in embodiment 1 and embodiment 2, software calibration or FPGA calibration may be used in correcting the measured gear using neural network calibration, and the specific flow is shown in fig. 8 (a) and 8 (b).
In this embodiment, taking software calibration as an example, after the hardware circuit produces the raw measurement data, the ATE control software applies the neural network model with the raw measured value, gear and temperature as model inputs, obtaining the corrected measured value.
For comparison, data obtained with the traditional Gain-Offset calibration are also plotted in fig. 11, where the horizontal axis is temperature (°C) and the vertical axis is self-test accuracy (%). In fig. 11(a), measurement uses the 1 V range with an output voltage of 0.45 V, comparing the accuracy of Gain-Offset calibration against neural-network calibration at different ambient temperatures. In fig. 11(b), measurement uses the 10 V range with an output voltage of −8.5 V, compared at different ambient temperatures.
Clearly, the self-test values obtained with this method's neural network model exclude the influence of temperature well and maintain good precision.
The manner of calibrating the circuit measurement gear using the FPGA shown in fig. 8 (b) is substantially the same as that of the present embodiment, and will not be described in detail here.
Taking the board card data of embodiment 1 as an example, compared with the original calibration mode, the actual output after correction by using the neural network is closer to the user input (expected output), i.e. the calibration accuracy is higher.
Table 1: comparison of the two calibration modes (GO = Gain-Offset calibration, NN = neural network calibration)

| Force | Gear | Temperature (°C) | Measure_GO  | Measure_NN  | Accuracy_GO | Accuracy_NN |
|-------|------|------------------|-------------|-------------|-------------|-------------|
| -0.85 | 7000 | 27.9266804       | -0.85005582 | -0.8500738  | 5.58E-05    | 7.38E-05    |
| -0.45 | 7000 | 27.9266804       | -0.4499151  | -0.45006614 | 8.49E-05    | 6.61E-05    |
| -0.05 | 7000 | 27.91332876      | -0.04978424 | -0.05002187 | 0.00021576  | 2.19E-05    |
|  0.05 | 7000 | 27.9266804       |  0.05026214 |  0.04995179 | 0.00026214  | 4.82E-05    |
|  0.45 | 7000 | 27.91332876      |  0.45039363 |  0.45008272 | 0.00039363  | 8.27E-05    |
|  0.85 | 7000 | 27.91332876      |  0.85052075 |  0.84999764 | 0.00052075  | 2.36E-06    |
The original calibration method does not consider environmental factors (temperature), and calibration performed at other temperatures deviates significantly from actual values; calibrating with a neural network effectively reduces the influence of environmental factors. For example, fig. 2 shows scatter plots of actual output value versus temperature under the two calibration methods, for Force values of −0.05 and 0.45 on the board card's 1 V gear.
The middle horizontal line and the upper and lower horizontal lines represent the Force value (desired output) and the accuracy threshold range, respectively. It can be observed that the original calibration method is clearly affected by temperature, while neural-network calibration effectively reduces this influence and corrects the output to within the expected accuracy range.
The present application is not limited to the above-described embodiments, which are adopted in connection with the actual demands, and various changes made by the person skilled in the art without departing from the spirit of the present application are still within the scope of the present application.

Claims (6)

1. A hardware circuit calibration method based on machine learning, characterized in that the method comprises the following steps:
S1, acquiring input and output values of the device over its specified working range, at different temperatures and different gears, as training samples, wherein one record in the training samples comprises: the user-specified desired value, the gear, the temperature, the final actual output of the hardware circuit, the hardware circuit measurement value, and the multimeter measurement value;
S2, after enough training samples have been collected from the device, preprocessing and normalizing the samples and feeding them into a neural network for training, thereby obtaining the parameters of the trained neural network model;
S3, according to the different outputs or measurements of the device, selecting the corresponding parameters as inputs of the neural network and obtaining the required correction result through computation with the neural network model.
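Steps S1–S3 above can be sketched end to end with a small fully connected network. This is a minimal illustration only, assuming a single-hidden-layer network trained by full-batch gradient descent on synthetic data with a made-up temperature drift; the layer sizes, learning rate, and drift model are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for step S1: samples of (raw hardware output, gear, temperature),
# where the raw output drifts with temperature; the label is the desired value.
n = 2000
desired = rng.uniform(0.0, 1.0, n)            # user-specified expected value (1 V gear)
gear = np.ones(n)                             # single gear in this sketch
temp = rng.uniform(20.0, 40.0, n)             # ambient temperature, deg C
raw = desired + 5e-3 * (temp - 25.0) + rng.normal(0, 1e-5, n)  # drifting output

# Step S2: normalize the features, then train a tiny MLP (inputs: raw output,
# gear, temperature; output: desired value).
X = np.column_stack([raw, gear, temp])
y = desired.reshape(-1, 1)
mu, sigma = X.mean(0), X.std(0) + 1e-12
Xn = (X - mu) / sigma

H = 16
W1 = rng.normal(0, 0.3, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.3, (H, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(Xn @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                             # compare with labels, backpropagate
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = Xn.T @ gh / n; gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Step S3: correct a new reading taken at a temperature away from 25 deg C.
def correct(raw_v, gear_v, temp_v):
    x = (np.array([raw_v, gear_v, temp_v]) - mu) / sigma
    return float(np.tanh(x @ W1 + b1) @ W2 + b2)

uncorrected = 0.45 + 5e-3 * (35.0 - 25.0)      # drifted raw reading at 35 deg C
corrected = correct(uncorrected, 1.0, 35.0)
```

Because temperature is a network input, the model learns to subtract the drift, so the corrected value lands much closer to the desired 0.45 than the drifted raw reading does.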
2. The machine-learning-based hardware circuit calibration method of claim 1, characterized in that the method comprises calibrating the output state, wherein, when calibrating the output, one record in the training samples of step S1 comprises: the user-specified desired value, the gear, the temperature, and the final actual output value of the hardware circuit;
in this case, the inputs of the neural network model are: the actual output value of the hardware circuit, the gear, and the temperature; the output of the neural network is: the user-specified desired value.
3. The machine-learning-based hardware circuit calibration method of claim 1, characterized in that the method comprises calibrating the measurement state, wherein, when calibrating a measurement, one record in the training samples of step S1 comprises: the hardware circuit measurement value, the gear, the temperature, and the multimeter measurement value;
in this case, the inputs of the neural network model are: the hardware circuit measurement value, the gear, and the temperature; the output of the neural network model is: the multimeter measurement value.
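The two operating modes in claims 2 and 3 differ only in which quantities play the role of features and label. A minimal sketch of assembling one training record per mode; the field names and sample numbers here are illustrative, not from the patent:

```python
def output_calibration_record(desired, gear, temp, actual_output):
    """Calibrate-output mode (claim 2): the network maps the circuit's
    actual output, gear, and temperature back to the desired value."""
    return {"x": [actual_output, gear, temp], "y": desired}

def measurement_calibration_record(hw_measured, gear, temp, dmm_measured):
    """Calibrate-measurement mode (claim 3): the network maps the circuit's
    own measurement, gear, and temperature to the reference multimeter reading."""
    return {"x": [hw_measured, gear, temp], "y": dmm_measured}

r1 = output_calibration_record(0.45, 1, 27.9, 0.45039)
r2 = measurement_calibration_record(0.45008, 1, 27.9, 0.45)
```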
4. The machine-learning-based hardware circuit calibration method of claim 1, characterized in that, in step S2, the neural network model makes predictions on the input samples, the predictions are compared with the output label values and the error is backpropagated, and training ends once the accuracy on the training samples meets the requirement.
5. A hardware circuit calibration device based on machine learning, characterized in that it comprises an upper computer ATE, wherein a neural network model is carried in the software control end of the upper computer ATE and is used to implement the hardware circuit calibration method of claim 2 or 3.
6. A hardware circuit calibration device based on machine learning, characterized in that it comprises a hardware circuit FPGA, wherein a neural network model is carried in the hardware circuit FPGA and is used to implement the hardware circuit calibration method of claim 2 or 3.
CN202410269997.4A 2024-03-11 2024-03-11 Hardware circuit calibration method and device based on machine learning Pending CN118095384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410269997.4A CN118095384A (en) 2024-03-11 2024-03-11 Hardware circuit calibration method and device based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410269997.4A CN118095384A (en) 2024-03-11 2024-03-11 Hardware circuit calibration method and device based on machine learning

Publications (1)

Publication Number Publication Date
CN118095384A true CN118095384A (en) 2024-05-28

Family

ID=91164950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410269997.4A Pending CN118095384A (en) 2024-03-11 2024-03-11 Hardware circuit calibration method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN118095384A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118566814A (en) * 2024-08-02 2024-08-30 杭州飞仕得科技股份有限公司 Automatic test equipment and current source calibration method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965198A (en) * 2020-07-20 2022-01-21 恩智浦有限公司 ADC compensation using machine learning system
CN115425976A (en) * 2022-09-20 2022-12-02 重庆邮电大学 ADC sampling data calibration method and system based on FPGA
CN115441872A (en) * 2022-09-22 2022-12-06 电子科技大学重庆微电子产业技术研究院 ADC calibration system based on FPGA verification development board
US20230243709A1 (en) * 2022-02-03 2023-08-03 Siemens Aktiengesellschaft Calibration of an electronic assembly during a manufacturing process
CN116643226A (en) * 2023-05-25 2023-08-25 中国电子科技集团公司第五十八研究所 Integrated circuit ATE equipment calibration method

Similar Documents

Publication Publication Date Title
CN118095384A (en) Hardware circuit calibration method and device based on machine learning
CN103697928B (en) A kind of instrument calibration method and device
US10175334B2 (en) Self-calibration of source-measure unit via capacitor
CN106063132B (en) Analog-digital converter (ADC) linearity is tested on the chip of embedded equipment
Sediva et al. Comparison of GUM and Monte Carlo method for evaluation measurement uncertainty of indirect measurements
CN107783069A (en) The on-line calibration system and method for DC measurement in integrated circuit test system
CN115792769B (en) Signal calibration method and system of semiconductor test equipment and computer equipment
CN106053957A (en) Test fixture line loss test method and test fixture line loss test system
CN106506101A (en) A kind of receiver amplitude-frequency response automatized calibration method and device
CN117969935A (en) Current measurement method, device and test equipment under ATE leakage current scene
CN106225992A (en) Based on pressure transmitter Performance Test System and method
US20210255268A1 (en) Calibrating differential measurement circuitry
CN106526499B (en) High-stability current source assessment method and system
CN113325243A (en) Line loss measuring equipment and method for radio frequency test system
CN113702896B (en) System and method for measuring direct-current electric energy standard meter error based on voltage reference
CN113126014B (en) Calibration system for realizing array parallelism of digital oscilloscope
CN210626547U (en) Measuring device
Otomański et al. The evaluation of expanded uncertainty of measurement results in direct measurements using the LabVIEW environment
CN113325244A (en) Line loss measuring equipment and method for radio frequency test system
CN113030821A (en) Electric quantity calibration method and device
RU2714039C1 (en) Smart sensor development system
Clarkson et al. The calibration of IEC standard flickermeters using complex modulated signals
CN113325226B (en) Method for testing voltage drop of low-voltage linear voltage stabilizer
CN106528911A (en) Apparatus for building VHDL-AMS simulation model of power supply
CN117192462A (en) Current correction method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination