CN116451738A - Machine learning model training and model prediction with noise correction using de-noised data - Google Patents

Machine learning model training and model prediction with noise correction using de-noised data

Info

Publication number
CN116451738A
CN116451738A CN202310037184.8A
Authority
CN
China
Prior art keywords
noise
training
waveforms
tdecq
processors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310037184.8A
Other languages
Chinese (zh)
Inventor
孙文正 (Wenzheng Sun)
P·R·齐夫尼 (P. R. Zivny)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tektronix Inc
Original Assignee
Tektronix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/094,947 external-priority patent/US20230228803A1/en
Application filed by Tektronix Inc filed Critical Tektronix Inc
Publication of CN116451738A publication Critical patent/CN116451738A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

A test and measurement system having one or more inputs connectable to a device under test (DUT), and one or more processors configured to execute code that causes the one or more processors to: collect a training waveform set by acquiring one or more waveforms from the DUT or from simulated waveforms; remove noise from the training waveform set to produce a noise-free training waveform set; and train a neural network, using the noise-free training waveform set as the training set, to predict measurements of the DUT, producing a trained neural network. A method of training a neural network includes: receiving one or more waveforms from one or more DUTs, or generating one or more waveforms from a waveform simulator; collecting a training waveform set from the one or more waveforms; removing noise from the training waveform set to produce a noise-free training waveform set; and training the neural network, using the noise-free training waveform set as the training set, to predict measurements of the DUTs, thereby producing a trained neural network.

Description

Machine learning model training and model prediction with noise correction using de-noised data
RELATED APPLICATIONS
The present disclosure claims the benefit of U.S. provisional application No. 63/299,878, entitled "MACHINE LEARNING MODEL TRAINING USING DE-NOISED DATA AND MODEL PREDICTION WITHOUT NOISE CORRECTION," filed on January 14, 2022, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to test and measurement systems, and more particularly to techniques for training and using machine learning models in test and measurement applications.
Background
Recently, machine learning (ML) algorithms or models, in some cases neural networks (referred to herein as ML algorithms), have been developed for test and measurement applications. These include measuring the performance of elements in high-speed communications networks, for example making transmitter and dispersion eye closure quaternary (TDECQ) measurements. In some test and measurement systems, these ML algorithms may be implemented in a test and measurement instrument such as an oscilloscope.
Machine learning algorithms require a large set of training data to develop a predictive network that operates properly. In practice, the generation and selection of the training set is as important as the ML algorithm itself.
Fig. 1 shows a current example of a workflow for developing a Machine Learning (ML) algorithm for TDECQ measurement. The workflow uses a Device Under Test (DUT) or laboratory generator to generate training waveforms 10. An oscilloscope acquires a set of these waveforms. The waveforms are not intentionally altered; they include both their own noise and the noise of the oscilloscope used to acquire them. This set is used to train the ML algorithm, in this case a neural network. The expected response (the TDECQ value) required to train the network 12 is provided by measurements made with a conventional measurement algorithm operating on the same waveforms.
This results in a production network at 12. In operation, instead of training waveforms, the actual waveforms to be measured from the DUT(s) are fed to the production network 12, which predicts TDECQ values at 14.
There are several disadvantages to this approach. First, generating training sets is costly, because large data sets must be collected from an actually operating DUT (or laboratory generator) and measured by the test and measurement instrumentation used to capture the DUT responses. ML training requires processing thousands of waveforms, making this a difficult and time-consuming process.
Furthermore, for completeness, the training data set may also need to include waveforms generated while scanning other parameters that affect the ML algorithm's predictions, such as varying oscilloscope signal-to-noise ratio (SNR), DUT SNR, and varying DUT output level (amplitude). This may involve operating multiple DUTs and measuring with multiple test and measurement instruments (e.g., multiple oscilloscopes) to account for different instrument noise and different DUT noise.
These and other scan parameters added during training-set generation significantly increase the size of the training data. For example, scanning one parameter over three possible values triples the number of waveforms in the training set. The result is slower training and longer development time.
However, even if the training data set includes data from multiple instruments to account for different noise levels, the neural network may still overfit to the noise levels it was exposed to during training. Overfitting occurs when the ML algorithm learns the training data "too well" and cannot generalize to other data, resulting in poor performance in production. Since noise is more random than the key signal features and is often at higher frequencies, including noise in the training waveforms carries a higher risk of the ML algorithm locking onto noise features and overfitting to the training set.
Embodiments of the disclosed apparatus and method address the shortcomings of the prior art.
Drawings
FIG. 1 illustrates an example of a current workflow for developing a Machine Learning (ML) model for TDECQ measurements, as well as the operation of such a trained network.
Fig. 2 illustrates an embodiment of a workflow for neural net training.
FIG. 3 illustrates an embodiment of a test and measurement system including an ML system for predicting measured values.
Detailed Description
Embodiments herein overcome the problems set forth above, such as the need to include waveforms generated while scanning other parameters that would affect the Machine Learning (ML) algorithm's predictions, such as varying oscilloscope signal-to-noise ratio (SNR), varying DUT output levels, and varying DUT SNR. The embodiments separate the noise components from the training waveforms and train the ML algorithm with noise-free data, then compensate for noise in the operating environment. The resulting models generalize better and avoid the overfitting problem.
FIG. 2 shows a revised workflow for training and operating the ML algorithm. The training scenario is discussed first. A test and measurement instrument, such as an oscilloscope, acquires a raw set of waveforms at 20. These may be acquired from one or more DUTs or from a waveform simulator. Then, at 22, the process removes noise from the acquired waveforms to produce a noise-free training waveform set.
Removing noise may take many different forms. In one embodiment, the number of raw waveforms exceeds the number of waveforms required for training: the process takes groups of waveforms and averages each group down to one waveform, reducing the count to the number of original waveforms divided by the number averaged per group. That number may be adjusted to yield substantially noise-free waveforms. Although the total number of waveforms processed is now larger than in the prior approach, this is still advantageous, because averaging costs far less than training.
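The group-averaging step can be illustrated with a short sketch. This is not from the patent: the function name, group size, waveform length, and synthetic signal are all hypothetical, chosen only to show the count reduction and noise suppression described above.

```python
import numpy as np

def average_groups(raw_waveforms, group_size):
    # Average consecutive groups of acquired waveforms; uncorrelated
    # noise shrinks by roughly sqrt(group_size) per averaged waveform.
    raw = np.asarray(raw_waveforms, dtype=float)
    n_groups = len(raw) // group_size
    trimmed = raw[:n_groups * group_size]
    return trimmed.reshape(n_groups, group_size, -1).mean(axis=1)

# 1000 noisy captures of the same underlying pattern -> 10 training waveforms
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 128))
noisy = clean + rng.normal(0.0, 0.2, size=(1000, 128))
training_set = average_groups(noisy, group_size=100)
print(training_set.shape)  # (10, 128)
```

With a group size of 100, noise of standard deviation 0.2 drops to roughly 0.02 in each averaged waveform, which is substantially noise-free relative to the signal.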
Other methods exist, such as a nearest-neighbor approach, in which each waveform is averaged with its X nearest neighbors before it and its X nearest neighbors after it. This does not reduce the overall number of waveforms, since each waveform has its own averaged version. Furthermore, instead of being acquired, the waveforms may be created in simulation, which is a simple way to provide noise-free waveforms.
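The nearest-neighbor variant can be sketched similarly. The helper name and the truncated-window behavior at the edges are assumptions of this illustration, not details given in the patent:

```python
import numpy as np

def neighbor_average(waveforms, x):
    # Sliding-window average: each waveform is averaged with its X
    # nearest neighbors before and after it (truncated at the edges),
    # so the total waveform count is preserved.
    w = np.asarray(waveforms, dtype=float)
    out = np.empty_like(w)
    for i in range(len(w)):
        lo, hi = max(0, i - x), min(len(w), i + x + 1)
        out[i] = w[lo:hi].mean(axis=0)
    return out
```

Unlike group averaging, the output here has exactly as many waveforms as the input, each one a locally averaged version of itself.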
Training the ML algorithm may also include a step in which it is trained to ignore noise. Some or all waveforms may be used twice: first without noise, then intentionally contaminated with noise, possibly at several different noise amplitudes, so that the ML algorithm learns to ignore the noise.
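One way to build such a clean-plus-contaminated training set is a simple augmentation pass. This is an illustrative sketch only; the function name and the specific noise amplitudes are hypothetical:

```python
import numpy as np

def augment_with_noise(clean_waveforms, labels, noise_stds, rng):
    # Each noise-free waveform is paired with deliberately contaminated
    # copies at several noise amplitudes, all sharing the same expected
    # answer, so the network learns to ignore the noise.
    aug_w, aug_y = [], []
    for w, y in zip(clean_waveforms, labels):
        aug_w.append(w)                      # clean original
        aug_y.append(y)
        for s in noise_stds:                 # noisy copies, same label
            aug_w.append(w + rng.normal(0.0, s, size=w.shape))
            aug_y.append(y)
    return np.array(aug_w), np.array(aug_y)

rng = np.random.default_rng(0)
waves, answers = augment_with_noise(np.zeros((2, 4)), [1.0, 2.0], [0.1, 0.2], rng)
print(waves.shape)  # (6, 4)
```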
Typically, an oscilloscope of some type collects the waveforms acquired from the DUT. For example, a real-time (RT) oscilloscope can collect large amounts of data at little cost, performing high-speed acquisition of many adjacent waveform patterns. The oscilloscope may then average these waveform patterns to produce a smaller set of noise-free training waveforms. It may also measure the noise distribution of the original waveforms to determine a noise compensation or correction factor in the noise compensation/representation module 26, producing the correction factor 28.
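In the simplest case, measuring the noise distribution might reduce to estimating the noise standard deviation from repeated captures of the same pattern. The sketch below assumes that setup (repeated captures, additive noise); it is not taken from the patent:

```python
import numpy as np

def estimate_noise_std(raw_waveforms):
    # Repeated captures of the same pattern: the mean across captures
    # approximates the noise-free signal, and the residuals around that
    # mean approximate the acquisition noise.
    raw = np.asarray(raw_waveforms, dtype=float)
    residuals = raw - raw.mean(axis=0)
    return residuals.std(ddof=1)

rng = np.random.default_rng(0)
captures = np.sin(np.linspace(0, 2 * np.pi, 128)) + rng.normal(0.0, 0.2, size=(500, 128))
print(round(float(estimate_noise_std(captures)), 2))  # ~0.2
```

A fuller characterization could keep the whole residual distribution rather than a single standard deviation, but a scalar estimate is enough to derive a correction factor of the kind labeled 28.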
If a sampling oscilloscope is used instead, data may be collected at only parts of the waveform (targeted acquisition), since the instrument operates more slowly. In one such embodiment, one can pick 8 different positions, for example 4 at different levels and 4 at different edges, and collect multiple sub-waveforms at those positions to build the raw set of sub-waveforms, which are then averaged as above. Such targeted acquisition is known in the prior art for jitter analysis with sampling oscilloscopes.
In the lower training path of Fig. 2, the noise-free waveforms 22 are then used to train the ML neural network. This process typically involves training the neural network on known data with known "answers" and then validating it on portions of the data set the network has not seen, as is commonly done in the art. At this point the neural network has been trained at 24 and can be used to predict TDECQ values from acquired waveforms.
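The hold-out validation practice mentioned here is standard in the art; a minimal sketch, with a hypothetical hold-out fraction and helper name:

```python
import numpy as np

def split_train_validate(waveforms, answers, holdout_frac, rng):
    # Hold out (waveform, known-answer) pairs the network never sees
    # during fitting, to check that it generalizes rather than memorizes.
    idx = rng.permutation(len(waveforms))
    cut = int(len(waveforms) * (1.0 - holdout_frac))
    tr, va = idx[:cut], idx[cut:]
    return (waveforms[tr], answers[tr]), (waveforms[va], answers[va])
```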
Once the trained neural network is available, an operational mode may be entered. In one embodiment, the oscilloscope acquires the waveform at 20, but this time in a runtime or production environment. The waveform includes noise from both the DUT and the oscilloscope. The neural network operates on the acquired waveform at 24 and generates a TDECQ prediction at 30. Since the prediction is based on noise-free training, it does not perfectly account for the two noise contributions (DUT and oscilloscope) included in the waveform. The noise representation module 26 calculates a correction factor 28 relative to the noise-free training waveforms.
ML predictions typically carry an accuracy measure, similar to a confidence value in statistics. The predicted TDECQ value has such a measure, but in the embodiment above it is reduced based on the noise removed from the original waveforms, because training was done with noise-free waveforms while the waveforms encountered in operation are noisy. This acts as a penalty reducing the accuracy measure of the TDECQ value, which forms part of the final TDECQ value at 32.
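The patent does not spell out how the noise-based penalty combines with the accuracy measure. One plausible, purely illustrative form is a linear penalty on the confidence value, with a calibration constant that would have to be characterized for the instrument:

```python
def apply_noise_penalty(confidence, noise_std, scale=1.0):
    # Reduce the prediction's accuracy measure in proportion to the
    # noise that was absent during training. `scale` is a hypothetical
    # calibration constant, not something given in the patent.
    return max(0.0, confidence - scale * noise_std)
```

For example, a confidence of 0.95 with an estimated noise level of 0.1 would be reported as 0.85 under this assumed linear model.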
In another embodiment, in the operational mode, the oscilloscope collects a plurality of waveforms, which are used by both the noise representation module 26 and the trained ML network 24. In this embodiment, the trained network operates on waveform(s) from which noise has been removed; the noise removal process is applied to the operating waveforms rather than to training waveforms, as it was in the ML training above. Since the ML prediction then takes no noise into account, the process applies the entire noise penalty compensation through the correction factor 28, determined from the operating waveforms by the noise representation module 26. The fact that the noise compensation is the only means of adding the noise penalty to the final TDECQ result distinguishes this embodiment from the previous one. It may be more complex, but of the two embodiments it may yield the more accurate result.
With ML, one can find TDECQ results without the intermediate step of finding intermediate values such as FFE tap values. This capability can be a disadvantage, because in many cases knowing the FFE tap values, or similar intermediate values, is valuable. If the FFE tap values are known, nothing is lost, because the remaining computation is a direct, closed-form calculation with low computational cost. There is therefore interest in the following process: use ML to find the FFE tap values, then make them available for calculating TDECQ from those tap values.
One method of calculating TDECQ is to focus on finding the FFE tap values that yield the correct TDECQ value, and then calculate the TDECQ value using the bit error rate (BER) adaptation in classical (non-ML) processing as given by the standard (IEEE 802.3bs). While the first embodiment uses a neural network to directly predict TDECQ values, in a separate embodiment the neural network can be used to predict the tap values of a Feed Forward Equalizer (FFE). The FFE values should be the same for noisy and noise-free waveforms, so an alternative approach is to train the neural network on the FFE tap values and then obtain the TDECQ values from them. Although this is a more indirect route, the present discussion still refers to it as predicting the TDECQ value; along this route, other useful intermediate parameters are found in addition to TDECQ.
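Applying FFE taps to a sampled waveform is a plain FIR filtering step; the classical BER-based TDECQ computation in IEEE 802.3bs is far more involved and is not reproduced here. A minimal sketch of the equalization step only (helper name and sample data are illustrative):

```python
import numpy as np

def apply_ffe(samples, taps):
    # A feed-forward equalizer is an FIR filter: the output is the
    # convolution of the sampled waveform with the tap weights.
    return np.convolve(samples, taps, mode="same")

impulse = np.array([0.0, 1.0, 0.0, 0.0])
print(apply_ffe(impulse, [0.5, 0.5]))  # the impulse smeared over two taps
```

Because this step is a direct, closed-form computation, predicting the taps with ML and finishing classically adds little cost while preserving the intermediate values.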
Another variation involves noise characterization, in which the DUT waveform is analyzed both for waveform noise in the noise representation module 26 (producing correction factor 28) and for the TDECQ prediction at 30. The system then combines the correction factor with the TDECQ value from the trained ML network to arrive at the final result.
The workflow of an embodiment may occur in a single test and measurement instrument, such as the oscilloscope(s) mentioned above, or in a combination of the instrument and separate computing devices on which the ML algorithm resides. As shown in FIG. 3, an instrument 42 may be connected to DUT(s) 40 through an interconnect or probe 41. The port 44 includes the various components necessary to acquire and digitize waveforms. These components may include, but are not limited to, clock recovery, analog-to-digital converters (ADCs), and the like. The one or more processors 48 perform the operations discussed above by executing code that causes them to perform those operations.
The acquired waveforms are stored in the memory 46. This may be memory on the instrument, memory on a computing device connected to the instrument, cloud storage, etc. The same or another memory may also store the code executed by the processors described above.
The user interface (U/I) 50 provides the user with the ability to interact with the instrument, such as by initiating acquisition, setting parameters, viewing displayed waveforms and associated data, and the like. The U/I may reside on an instrument or connected computing device.
Similarly, the ML algorithm or system 52 may reside on the instrument 42, or it may reside on a separate computing device. Although illustrated as a separate device herein, this is merely to aid in understanding and is in no way intended to limit the scope of the embodiments.
Aspects of the invention may operate on specially created hardware, firmware, digital signal processors, or specially programmed general-purpose computers including processors operating according to programmed instructions. The terms "controller" or "processor" as used herein are intended to include microprocessors, microcomputers, Application-Specific Integrated Circuits (ASICs), and special-purpose hardware controllers. One or more aspects of the present disclosure may be embodied in computer-usable data and computer-executable instructions, such as in one or more program modules, executed by one or more computers (including monitoring modules) or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer-executable instructions may be stored on a non-transitory computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, Random Access Memory (RAM), and the like. As will be appreciated by those skilled in the art, the functionality of the program modules may be combined or distributed as desired in various aspects. Furthermore, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field-programmable gate arrays (FPGAs), and the like. Particular data structures may be used to more effectively implement one or more aspects of the present disclosure, and such data structures are contemplated within the scope of the computer-executable instructions and computer-usable data described herein.
In some cases, the disclosed aspects may be implemented in hardware, firmware, software, or any combination thereof. The disclosed aspects may also be implemented as instructions carried by or stored on one or more non-transitory computer-readable media, which may be read and executed by one or more processors. Such instructions may be referred to as a computer program product. As discussed herein, computer-readable media means any medium that can be accessed by a computing device. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media.
Computer storage media means any medium that can be used to store computer-readable information. By way of example, and not limitation, computer storage media may include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Video Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and any other volatile or nonvolatile, removable or non-removable media implemented in any technology. Computer storage media does not include signals themselves or the transitory forms of signal transmission.
Communication media means any medium that can be used for communication of computer readable information. By way of example, and not limitation, communication media may include coaxial cables, fiber-optic cables, air, or any other medium suitable for the communication of electrical, optical, radio Frequency (RF), infrared, acoustic, or other types of signals.
Furthermore, this written description refers to particular features. It should be understood that the disclosure in this specification includes all possible combinations of those particular features. For example, where a particular feature is disclosed in the context of a particular aspect, that feature may also be used to the extent possible in the context of other aspects.
Furthermore, when a method having two or more defined steps or operations is referred to in this application, the defined steps or operations may be performed in any order or simultaneously unless the context excludes those possibilities.
Example
Illustrative examples of the disclosed technology are provided below. Embodiments of the techniques may include one or more of the following examples, as well as any combination.
Example 1 is a test and measurement system, comprising: one or more inputs connectable to a Device Under Test (DUT); and one or more processors configured to execute code that causes the one or more processors to: collect a training waveform set by acquiring one or more waveforms from one or more DUTs or from simulated waveforms; remove noise from the training waveform set to produce a noise-free training waveform set; and train a neural network, using the noise-free training waveform set as a training set, to predict a measurement of the DUT, to produce a trained neural network.
Example 2 is the test and measurement system of example 1, wherein the code that causes the one or more processors to remove noise from the training waveform set comprises code that causes the one or more processors to repeatedly capture a plurality of raw waveforms and average them to produce a noise-free waveform, until the noise-free training waveform set is complete.
Example 3 is the test and measurement system of example 1 or 2, wherein the one or more processors are further configured to execute code to determine a correction factor based on noise removed from the waveform set.
Example 4 is the test and measurement system of any one of examples 1 to 3, wherein the measurement is a transmitter and dispersion eye closure quaternary (TDECQ) value.
Example 5 is the test and measurement system of example 4, wherein the code that causes the one or more processors to train the neural network to predict the TDECQ value using the noise-free training waveform set includes code that causes the one or more processors to predict a tap value of a feedforward equalizer (FFE) and determine the TDECQ value from the FFE tap value.
Example 6 is the test and measurement system of any one of examples 1 to 5, wherein the one or more processors are further configured to execute code that causes the one or more processors to normalize the amplitudes of the noise-free waveform set.
Example 7 is the test and measurement system of any one of examples 4 to 6, wherein the one or more processors are further configured to execute code that causes the one or more processors to: collecting one or more waveforms from a DUT in a production environment; and applying the trained neural network to generate predicted TDECQ values of the DUT based on the one or more waveforms.
Example 8 is the test and measurement system of example 7, wherein the one or more processors are further configured to execute code that causes the one or more processors to: determining a correction factor based on noise removed from the training waveform set; and applying a correction factor to the accuracy level of the predicted TDECQ value.
Example 9 is the test and measurement system of example 7, wherein the one or more processors are further configured to execute code that causes the one or more processors to: collecting an operation waveform set; determining a correction factor from noise removed from the operation waveform set; and applying the correction factor to the accuracy of the predicted TDECQ value.
Example 10 is the test and measurement system of example 7, wherein the code to cause the one or more processors to apply the trained neural network causes the one or more processors to predict a feedforward equalizer (FFE) tap value and determine the TDECQ value from the FFE tap value.
Example 11 is a method of training a neural network, comprising: receiving one or more waveforms from one or more DUTs, or generating one or more waveforms from a waveform simulator; removing noise from a training waveform set collected from the one or more waveforms to produce a noise-free training waveform set; and training the neural network, using the noise-free training waveform set as a training set, to predict measurements of the DUT, to produce a trained neural network.
Example 12 is the method of example 11, wherein removing noise comprises repeatedly capturing a plurality of original waveforms and averaging the plurality of original waveforms to produce a noiseless waveform until the noiseless waveform set is complete.
Example 13 is the method of any of examples 11 or 12, wherein the one or more processors are further configured to determine a correction factor based on noise removed from the waveform set.
Example 14 is the method of any one of examples 11 to 13, wherein the measured value is a transmitter and dispersion eye closure quaternary (TDECQ) value.
Example 15 is the method of example 14, wherein training the neural network to predict the TDECQ value using the noise-free waveform set includes predicting tap values of a feedforward equalizer (FFE) from which the TDECQ value may be determined.
Example 16 is the method of any one of examples 14 to 15, further comprising: collecting one or more waveforms from a DUT in a production environment; and applying the trained neural network to generate a predicted TDECQ value of the DUT based on the waveform.
Example 17 is the method of example 16, further comprising: determining a correction factor from noise removed from the training waveform set; and applying noise correction to the accuracy level of the predicted TDECQ value to produce a final TDECQ value.
Example 18 is the method of example 16, further comprising: collecting an operation waveform set; determining a correction factor from noise removed from the set of operating waveforms; and applying the correction factor to the accuracy level of the predicted TDECQ value to produce a final TDECQ value.
Example 19 is the method of any one of examples 14 to 15, wherein using the trained neural network includes predicting a feedforward equalizer (FFE) tap value, and determining a TDECQ value from the FFE tap value.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Although specific embodiments have been illustrated and described herein for purposes of description, it will be appreciated that various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention should not be limited except as by the appended claims.

Claims (19)

1. A test and measurement system comprising:
one or more inputs connectable to a Device Under Test (DUT); and
one or more processors configured to execute code that causes the one or more processors to:
collecting a training waveform set by acquiring one or more waveforms from one or more DUTs or from simulated waveforms;
removing noise from the training waveform set to produce a noise-free training waveform set; and
training the neural network using the noise-free training waveform set as a training set to predict measurements of the DUT, to produce a trained neural network.
2. The test and measurement system of claim 1, wherein the code that causes the one or more processors to remove noise from the training waveform set comprises code that causes the one or more processors to: repeatedly capture a plurality of raw waveforms and average them to produce a noise-free waveform, until the noise-free training waveform set is complete.
3. The test and measurement system of claim 1, wherein the one or more processors are further configured to execute code that determines a correction factor from noise removed from the waveform set.
4. The test and measurement system of claim 1, wherein the measurement is a transmitter and dispersion eye closure quaternary (TDECQ) value.
5. The test and measurement system of claim 4, wherein the code that causes the one or more processors to train a neural network to predict TDECQ values using the noise-free training waveform set comprises code that causes the one or more processors to predict tap values of a feedforward equalizer (FFE) and determine TDECQ values from the FFE tap values.
6. The test and measurement system of claim 1, wherein the one or more processors are further configured to execute code that causes the one or more processors to normalize the amplitudes of the noise-free waveform set.
7. The test and measurement system of claim 4, wherein the one or more processors are further configured to execute code that causes the one or more processors to:
collecting one or more waveforms from a DUT in a production environment; and
applying the trained neural network to generate a predicted TDECQ value of the DUT based on the one or more waveforms.
8. The test and measurement system of claim 7, wherein the one or more processors are further configured to execute code that causes the one or more processors to:
determining a correction factor from noise removed from the training waveform set; and
applying the correction factor to the accuracy level of the predicted TDECQ value.
9. The test and measurement system of claim 7, wherein the one or more processors are further configured to execute code that causes the one or more processors to:
collecting an operation waveform set;
determining a correction factor from noise removed from the set of operating waveforms; and
applying the correction factor to the accuracy of the predicted TDECQ value.
10. The test and measurement system of claim 7, wherein code that causes the one or more processors to apply a trained neural network causes the one or more processors to predict a feedforward equalizer (FFE) tap value and determine a TDECQ value from the FFE tap value.
11. A method of training a neural network, comprising:
receiving one or more waveforms from one or more DUTs, or generating one or more waveforms from a waveform simulator;
removing noise from a training waveform set to produce a noise-free training waveform set, the training waveform set being collected from the one or more waveforms; and
training the neural network using the noise-free training waveform set as a training set to predict measurements of the DUT, to produce a trained neural network.
12. The method of claim 11, wherein removing noise comprises repeatedly capturing a plurality of original waveforms and averaging the plurality of original waveforms to produce a noiseless waveform until the noiseless waveform set is complete.
13. The method of claim 11, wherein the one or more processors are further configured to determine a correction factor based on noise removed from the waveform set.
14. The method of claim 11, wherein the measurement is a transmitter and dispersion eye closure quaternary (TDECQ) value.
15. The method of claim 14, wherein training a neural network to predict TDECQ values using the noise-free waveform set includes predicting tap values of a feedforward equalizer (FFE) from which TDECQ values may be determined.
16. The method of claim 14, further comprising:
collecting one or more waveforms from a DUT in a production environment; and
applying the trained neural network to generate a predicted TDECQ value for the DUT based on the one or more waveforms.
17. The method of claim 16, further comprising:
determining a correction factor from noise removed from the training waveform set; and
applying the noise correction to the predicted TDECQ value to produce a final TDECQ value.
18. The method of claim 16, further comprising:
collecting a set of operational waveforms;
determining a correction factor from noise removed from the set of operational waveforms; and
applying the correction factor to the predicted TDECQ value to produce a final TDECQ value.
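The claims leave the form of the correction factor open. One plausible sketch, under the assumption that the factor is derived from the statistics of the noise that averaging removed and is folded back into the noise-free prediction as a dB penalty (both function names and the penalty formula are illustrative, not from the patent):

```python
import numpy as np

def correction_factor(raw_waveforms, denoised):
    """Std of the noise that averaging removed: the per-capture
    residual between the raw waveforms and the denoised waveform.
    One hypothetical definition; the claims do not fix a formula."""
    return float((np.asarray(raw_waveforms, dtype=float) - denoised).std())

def corrected_tdecq(predicted_tdecq_db, noise_sigma, signal_amplitude):
    """Fold the removed noise back into the noise-free TDECQ
    prediction as a purely illustrative SNR penalty in dB."""
    penalty_db = -10.0 * np.log10(1.0 - (noise_sigma / signal_amplitude) ** 2)
    return predicted_tdecq_db + penalty_db
```

With zero removed noise the penalty vanishes and the final TDECQ equals the predicted value; as the removed noise grows, the final value is degraded accordingly.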
19. The method of claim 14, wherein applying the trained neural network comprises predicting feedforward equalizer (FFE) tap values and determining the TDECQ value from the FFE tap values.
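In claims 10 and 19 the network predicts FFE tap values rather than the TDECQ value directly; the TDECQ value is then determined from the equalized waveform. Applying predicted taps is an FIR convolution, sketched below (the TDECQ computation itself, defined in IEEE 802.3, is omitted):

```python
import numpy as np

def apply_ffe(waveform, taps):
    """Equalize a waveform with predicted feedforward-equalizer taps,
    i.e. filter it with the taps as FIR coefficients; the TDECQ value
    would then be measured on the equalized output."""
    return np.convolve(waveform, taps, mode="same")
```

With a single unit tap the equalizer is an identity, which is a convenient sanity check for the tap-prediction pipeline.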
CN202310037184.8A 2022-01-14 2023-01-10 Machine learning model training and model prediction with noise correction using de-noised data Pending CN116451738A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/299878 2022-01-14
US18/094,947 US20230228803A1 (en) 2022-01-14 2023-01-09 Machine learning model training using de-noised data and model prediction with noise correction
US18/094947 2023-01-09

Publications (1)

Publication Number Publication Date
CN116451738A true CN116451738A (en) 2023-07-18

Family

ID=87122628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310037184.8A Pending CN116451738A (en) 2022-01-14 2023-01-10 Machine learning model training and model prediction with noise correction using de-noised data

Country Status (1)

Country Link
CN (1) CN116451738A (en)

Similar Documents

Publication Publication Date Title
US20230228803A1 (en) Machine learning model training using de-noised data and model prediction with noise correction
WO2006091810A2 (en) Measuring components of jitter
Martí et al. An approach to stopping criteria for multi-objective optimization evolutionary algorithms: The MGBM criterion
CN108535635B (en) EEMD and HMM based analog circuit intermittent fault diagnosis method
US11907090B2 (en) Machine learning for taps to accelerate TDECQ and other measurements
US11923895B2 (en) Optical transmitter tuning using machine learning and reference parameters
US11940889B2 (en) Combined TDECQ measurement and transmitter tuning using machine learning
CN109815855B (en) Electronic equipment automatic test method and system based on machine learning
WO2020112930A1 (en) Categorization of acquired data based on explicit and implicit means
CN115133984A (en) Optical transceiver tuning using machine learning
US10585130B2 (en) Noise spectrum analysis for electronic device
CN114062886B (en) Quantum chip testing method, device and system
CN116451738A (en) Machine learning model training and model prediction with noise correction using de-noised data
US9673862B1 (en) System and method of analyzing crosstalk without measuring aggressor signal
US11624781B2 (en) Noise-compensated jitter measurement instrument and methods
Moschitta et al. Measurements of transient phenomena with digital oscilloscopes
Carbone et al. Asymptotic properties of a one-bit estimator of parametric signals
JP2023183409A (en) Test and measurement device and performance measurement method of device under test
US20230086626A1 (en) System and method for detection of anomalies in test and measurement results of a device under test (dut)
TW202413957A (en) Separating noise to increase machine learning prediction accuracy in a test and measurement system
CN117235467A (en) Separating noise to improve machine learning prediction accuracy in test and measurement systems
CN117273164A (en) Machine learning for measurements using linear responses extracted from waveforms
JP2024054857A (en) Test and measurement apparatus and method for generating noise measurements
TW202334851A (en) Systems and methods for machine learning model training and deployment
CN117436541A (en) Runtime data collection and monitoring in machine learning systems

Legal Events

Date Code Title Description
PB01 Publication