CN111858221A - Efficient instruction test sequence generation method and device based on neural network

Efficient instruction test sequence generation method and device based on neural network

Info

Publication number
CN111858221A
CN111858221A
Authority
CN
China
Prior art keywords
neural network
instruction
probability vector
probability
bit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010740929.3A
Other languages
Chinese (zh)
Inventor
王培鑫
梁利平
王志君
管武
洪钦智
刘光宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN202010740929.3A
Publication of CN111858221A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F 11/26 Functional testing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3676 Test management for coverage analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

A method for generating an instruction test sequence based on a neural network comprises the following steps: randomly generating a probability vector; generating instructions of M types through an instruction generator according to the probability vector; sending the instructions into the processor hardware code to obtain the coverage of each processor module; training a neural network with a plurality of such probability vectors as inputs and the processor module coverages as outputs; after the neural network is trained, obtaining a fixed-parameter neural network, inputting a randomly generated probability vector into it, and judging whether its outputs exceed a threshold value; if the outputs corresponding to the processor modules of interest all exceed the threshold, selecting the corresponding probability vector to generate the test sequence. The invention also provides a device for generating an instruction test sequence based on a neural network.

Description

Efficient instruction test sequence generation method and device based on neural network
Technical Field
The invention relates to the technical field of processor test verification, in particular to a method and a device for generating an efficient instruction test sequence based on a neural network.
Background
Currently, processor verification work is increasingly complex and has become a bottleneck of the design cycle: about 70% of design time is spent on verification. Coverage is an important indicator of the completeness of simulation verification, ranging from 0 to 100%. Line coverage (line) indicates the proportion of hardware code lines executed; condition coverage (condition) represents the proportion of condition statements whose true and false outcomes are covered; branch coverage represents the proportion of branch combinations, such as if-else, that are exercised; and state machine coverage represents how many states of the state machine are reached. The total coverage can be a weighted combination of these, with higher coverage representing more complete verification. Raising the coverage level has become a key factor in verification work, but the higher the coverage, the slower the convergence, and the longer the period needed to reach higher coverage. How to quickly improve verification coverage has therefore become a key issue.
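For instance, a weighted total coverage (the equal weights below are illustrative; the text does not fix them) takes the form

$$C_{\text{total}} = \sum_k w_k C_k, \qquad \sum_k w_k = 1,$$

so that with equal weights over line, condition, branch and state machine coverages of 90%, 80%, 70% and 60%, the total coverage is (90 + 80 + 70 + 60) / 4 = 75%.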
Disclosure of Invention
In view of the above, the present invention provides a method for generating a test sequence of instructions based on a neural network, so as to solve at least one of the above technical problems.
In order to achieve the above object, as an aspect of the present invention, there is provided a neural network-based instruction test sequence generation method, including the steps of:
randomly generating a probability vector;
generating instructions of M types through an instruction generator according to the probability vector;
sending the instructions into the processor hardware code to obtain the coverage of each processor module;
training a neural network with a plurality of such probability vectors as inputs and the processor module coverages as outputs;
after the neural network is trained, obtaining a fixed-parameter neural network, inputting a randomly generated probability vector into it, and judging whether its outputs exceed a threshold value;
if the outputs corresponding to the processor modules of interest all exceed the threshold, selecting the corresponding probability vector to generate the test sequence.
The specific process of generating the instruction by the instruction generator comprises the following steps:
each probability in the upper ceil(log₂M) bits of the probability vector generates a 1 for the corresponding binary bit, and the resulting ceil(log₂M)-bit binary number represents a particular instruction, wherein M represents the number of instruction types and ceil(A) represents the smallest integer greater than or equal to A;
reading the corresponding actual MIPS operation code and function code out of a C++ container of integer elements;
each probability value in the lower 26 bits of the probability vector generates a binary 1 for the corresponding instruction bit.
Wherein the process of training the neural network comprises:
a loss function
$$L=\sum_{i=1}^{N}\left(\hat{y}_i-y_i\right)^2$$
is used, wherein $\hat{y}_i$ is the desired coverage value, $y_i$ is the neural network output value, and $N$ is the number of modules;
and the neural network is trained and the network parameters updated by using a gradient descent method.
Wherein the probability vector is defined such that each bit represents the probability of generating a 1, which may take the values 0, 0.1, 0.2, ..., 1.0.
As another aspect of the present invention, there is provided an instruction test sequence generation apparatus based on a neural network, the apparatus including a neural network training module and an instruction generation module, wherein,
the neural network training module comprises a random probability vector generating unit, an instruction generator and a neural network to be trained, and is used for training the neural network;
the instruction generation module comprises a random probability vector generation unit, a fixed parameter neural network, a decision device and an instruction generator and is used for generating a test sequence.
As still another aspect of the present invention, there is provided an electronic apparatus including:
one or more processors;
a memory to store one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Based on the technical scheme, compared with the prior art, the instruction test sequence generation method based on the neural network has at least one or part of the following beneficial effects:
the invention utilizes the neural network to establish the relation between the probability parameter of the instruction generator and the coverage rate of each module, and the neural network has higher accuracy rate for the problems of classification, regression and the like. For different modules or a plurality of modules, the neural network can judge the targeted probability parameters, and the judgment process does not need to generate instructions and input RTL for simulation, thereby saving a large amount of time. The targeted probability parameters generate the required instructions, and the higher coverage rate level can be achieved more quickly.
Drawings
FIG. 1 is a schematic diagram of a neuron structure in the prior art;
FIG. 2 is a schematic diagram of a neural network in the prior art;
FIG. 3 is a flow chart of a neural network based efficient test instruction generation method in an embodiment of the invention;
FIG. 4 shows the three MIPS instruction types, R (register type), I (immediate type) and J (jump type), and the 33-bit probability vector for generating 95 instructions in an embodiment of the present invention;
FIG. 5 is an ADD instruction of the MIPS in an embodiment of the present invention;
FIG. 6 is a flow diagram of a neural network training process in an embodiment of the present invention;
FIG. 7 is a diagram of the neural network training module system structure (95 instructions as an example) in an embodiment of the present invention;
FIG. 8 is a flow diagram of an efficient instruction generation process in an embodiment of the invention;
FIG. 9 is a schematic diagram of the efficient instruction generation module system structure (95 instructions as an example) in an embodiment of the present invention.
Detailed Description
The processor is divided into an instruction fetch and predecode module (fetch & preDec), an instruction dispatch module (dispatch), a jump and exception control module (BrExcp), an arithmetic operation module (ALU), a multiplication module (MUL), a load/store control module (LDST), an address translation module (TLB), an instruction cache module (ICACHE) and other modules; each module has its own coverage data, which can be viewed through tools such as DVE.
Neural networks perform well in regression and classification. As shown in FIG. 1, a neuron has linear parameters w1, w2, ..., wn, b and a nonlinear function f, so a neural network can solve linearly inseparable problems. As shown in FIG. 2, a neural network is divided into an input layer, an output layer and a plurality of hidden layers, and each layer can learn different features.
The probability parameters of the instruction generator affect the proportions of the particular instructions generated, which in turn affect the coverage of the various processor modules. For example, if the proportion of arithmetic instructions is high, the coverage of the arithmetic operation module will be higher. The method uses a neural network to learn the mapping between the probability parameters of the instruction generator and the coverage of each processor module, thereby guiding which instruction generator probability parameters should be adopted to rapidly improve the coverage of the modules of interest and accelerate their verification. The neural network is a regression network, i.e., its outputs are specific numerical values fitting specific coverage values.
The invention provides an instruction generation method based on a neural network, aimed at the problem of improving processor verification coverage. Simulation verification applies stimuli, such as instructions, to a processor and compares the results with expected results. Verification is critical to ensuring that the processor functions correctly, and coverage is an important index for measuring its completeness. The invention establishes the relation between the probability parameters of the instruction generator and coverage through a neural network, so that targeted instructions can be generated for different processor modules and simulation verification can reach higher coverage more quickly.
Specifically, the invention discloses a test instruction sequence generation method based on a neural network, as shown in FIG. 3, which comprises the following steps:
randomly generating a probability vector;
generating instructions of M types through an instruction generator according to the probability vector;
sending the instructions into the processor hardware code to obtain the coverage of each processor module;
training a neural network with a plurality of such probability vectors as inputs and the processor module coverages as outputs;
after the neural network is trained, obtaining a fixed-parameter neural network, inputting a randomly generated probability vector into it, and judging whether its outputs exceed a threshold value;
if the outputs corresponding to the processor modules of interest all exceed the threshold, selecting the corresponding probability vector to generate the test sequence.
In order that the objects, technical solutions and advantages of the present invention will become more apparent, the present invention will be further described in detail with reference to the accompanying drawings in conjunction with the following specific embodiments.
As shown in FIG. 4, a MIPS instruction is composed of 32 bits and is classified as an R (register type), I (immediate type) or J (jump type) instruction. The upper 6 bits are the operation code; the operation code and function code together identify the instruction type, and the lower 26 bits hold the register numbers, address offset, immediate, function code, etc. In this embodiment there are 95 types of MIPS instructions in total, but any instruction set with M instruction types can be used.
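For concreteness, the bit layout of an R-type instruction can be assembled as in the following sketch (the helper name and the standalone usage line are illustrative assumptions; the field positions and the ADD codes, opcode 000000 and function code 100000, follow the standard MIPS format shown in FIG. 5):

```cpp
#include <cstdint>

// Assemble a 32-bit MIPS R-type instruction from its fields.
// Bits 31-26: opcode, 25-21: rs, 20-16: rt, 15-11: rd,
// 10-6: shift amount, 5-0: function code.
uint32_t encode_r_type(uint32_t opcode, uint32_t rs, uint32_t rt,
                       uint32_t rd, uint32_t shamt, uint32_t funct) {
    return (opcode & 0x3F) << 26 | (rs & 0x1F) << 21 | (rt & 0x1F) << 16 |
           (rd & 0x1F) << 11 | (shamt & 0x1F) << 6 | (funct & 0x3F);
}

// Example: ADD rd, rs, rt has opcode 000000 and function code 100000.
uint32_t add_r3_r1_r2 = encode_r_type(0x00, 1, 2, 3, 0, 0x20);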
As shown in FIG. 4, a 33-bit probability vector is generated. Each of the upper 7 bits can take the values 0.1, 0.2, ..., 0.9, 1.0 and represents the probability that the corresponding binary bit is 1; 7 bits suffice to encode the 95 instructions, and for other instruction counts M, ceil(log₂M) bits may be used, where ceil(A) denotes the smallest integer greater than or equal to A. Each of the lower 26 bits can take the values 0, 0.1, 0.2, ..., 0.9, 1.0 and represents the probability that the corresponding bit of the instruction is 1. A batch of instructions can be generated from the 33-bit probability vector: each bit is set to 1 if RAND() < (RAND_MAX × probability) and to 0 otherwise, where RAND() is a random function and RAND_MAX is its maximum value. The instructions are then input into the hardware code (RTL) for functional verification.
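A minimal sketch of this per-bit sampling, assuming only the RAND() < RAND_MAX × probability test described above (the helper names and vector layout are illustrative):

```cpp
#include <cstdlib>   // std::rand(), RAND_MAX
#include <vector>

// Return 1 with the given probability, 0 otherwise, using the
// rand() < RAND_MAX * probability test described in the text.
int sample_bit(double probability) {
    return std::rand() < RAND_MAX * probability ? 1 : 0;
}

// Expand a 33-entry probability vector (entries in {0, 0.1, ..., 1.0})
// into one realization of binary bits: the upper 7 bits select the
// instruction kind, the lower 26 bits fill the instruction fields.
std::vector<int> sample_bits(const std::vector<double>& prob_vec) {
    std::vector<int> bits;
    bits.reserve(prob_vec.size());
    for (double p : prob_vec) bits.push_back(sample_bit(p));
    return bits;
}
```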
The specific generation process of the instruction generator is as follows: the 95 required instruction codes (defined constants for distinguishing the different instructions) are read into a C++ vector (a container storing integer elements); the upper 7 bits of the probability vector then generate a 7-bit binary number in the range 0 to 94, which is used to look up the corresponding instruction code prnum in the vector. According to prnum, the different parts of the corresponding instruction, such as the operation code, function code, register numbers and immediate, are generated. As shown by the ADD instruction in FIG. 5, the probability values of bits 25-21, 20-16 and 15-11 generate the 5-bit binary register numbers rs, rt and rd respectively, while the remaining fixed bits, the operation code and bits 10-0 holding the function code, are generated according to the instruction format; the register numbers and fixed bits together form the complete instruction.
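Putting the pieces together, the kind-selection step might be sketched as follows (the table name and the resampling policy for indices above 94 are assumptions; the application only states that a 7-bit binary number in the range 0 to 94 indexes the C++ vector of instruction codes):

```cpp
#include <cstdlib>
#include <vector>

// Assumed table of the 95 instruction codes (prnum constants) read in
// from the required instruction definitions.
std::vector<int> instruction_codes;  // filled elsewhere, size 95

// Sample the upper 7 probability bits into a 7-bit binary number and
// use it to look up the corresponding instruction code prnum.
int pick_prnum(const std::vector<double>& prob_vec) {
    for (;;) {
        int index = 0;
        for (int b = 0; b < 7; ++b)
            index = (index << 1) |
                    (std::rand() < RAND_MAX * prob_vec[b] ? 1 : 0);
        if (index < static_cast<int>(instruction_codes.size()))
            return instruction_codes[index];
        // index >= 95: resample (one plausible policy; not specified)
    }
}
```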
As shown in FIG. 6, a ceil(log₂M)+26 bit probability vector is randomly generated; a batch of instructions is generated from the probability vector by the instruction generator and sent into the RTL (processor hardware code) to obtain the coverage values $\hat{y}_1,\ldots,\hat{y}_N$ of the N modules.
When the number of randomly generated probability vectors meets the requirement, the network is trained with the probability vectors as inputs and the N module coverage values as outputs, using the loss function
$$L=\sum_{i=1}^{N}\left(\hat{y}_i-y_i\right)^2$$
where $\hat{y}_i$ is the desired coverage value and $y_i$ is the neural network output value; the neural network is trained and the network parameters updated by using a gradient descent method.
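Written out, one gradient-descent step under this loss takes the standard form (the learning rate $\eta$ is an assumption; the application does not fix it):

$$\theta \leftarrow \theta - \eta\,\frac{\partial L}{\partial \theta}, \qquad \frac{\partial L}{\partial y_i} = -2\,(\hat{y}_i - y_i),$$

with the derivative propagated back through the network layers by the chain rule.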
As shown in FIG. 7, the probability generation module randomly generates a 33-bit probability vector float[32:0] as the parameter of the instruction generator, and a batch of instructions is generated. The instructions are sent into the RTL (processor hardware code) for verification to obtain the coverage of the N modules. The randomly generated probability vectors float[32:0] serve as the inputs and the N module coverage values float[N:0] as the outputs for training the neural network.
After the neural network is trained, as shown in FIG. 8, a randomly generated ceil(log₂M)+26 bit probability vector is input into the fixed-parameter neural network. It is then determined whether each network output exceeds a certain value, such as 60% (adjustable as needed); an output that does is recorded as 1, otherwise as 0. If the outputs corresponding to one or more modules of interest are all 1, the probability vector meets the requirement. This process requires neither generating instructions nor simulating the RTL, so a large amount of simulation time is saved. The instruction generator then outputs a batch of instructions according to the qualifying probability vector, as in the embodiments above, and the batch is verified on the processor. Qualifying probability vectors thus generate targeted instructions that reach higher coverage more quickly.
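A minimal sketch of this decision step (the 60% threshold comes from the text; the function shape and names are illustrative assumptions):

```cpp
#include <vector>

// Decision device: given the fixed-parameter network's predicted
// coverage for each module and the indices of the modules of interest,
// accept the probability vector only if every module of interest is
// predicted above the threshold (e.g. 0.6 = 60%).
bool accept_probability_vector(const std::vector<float>& predicted,
                               const std::vector<int>& modules_of_interest,
                               float threshold = 0.6f) {
    for (int m : modules_of_interest)
        if (predicted[m] <= threshold) return false;  // decision output 0
    return true;  // all decision outputs 1
}
```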
As shown in FIG. 9, the probability generation module randomly generates a 33-bit probability vector float[32:0] and inputs it into the neural network. The decision device determines whether each output is greater than a certain value, such as 60%, outputting 1 if so and 0 otherwise. If the outputs of the modules of interest are all 1, the probability vector float[32:0] is used as the parameter of the instruction generator, and the generated instructions are sent into the RTL for verification.
Traditional simulation verification is based on random generation; analysis shows that in the ideal case coverage approaches 100% at the rate of a negative exponential of the natural constant e, so coverage improves more and more slowly. Random generation lacks targeted tuning of the instruction generator parameters, so long periods are required to reach high coverage. The invention uses a neural network to establish the relation between the probability parameters of the instruction generator and the coverage of each module, and neural networks have high accuracy on classification, regression and similar problems. For one or more target modules, the neural network can identify targeted probability parameters, and this judgment requires neither generating instructions nor running RTL simulation, thereby saving a large amount of time. The targeted probability parameters then generate the required instructions, so a higher coverage level can be reached more quickly.
The invention also discloses an instruction test sequence generation device based on a neural network. The device comprises a neural network training module based on instruction coverage and an efficient instruction generation module, wherein the neural network training module comprises a random probability vector generation module, an instruction generator and a neural network to be trained. The probability vector generation module randomly generates a ceil(log₂M)+26 bit probability vector, each bit representing the probability of generating a 1 and taking the values 0, 0.1, 0.2, ..., 1.0; ceil(A) denotes the smallest integer greater than or equal to A, and M represents the number of instruction types. The instruction generator works as follows: each probability in the upper ceil(log₂M) bits of the probability vector generates a 1 for the corresponding binary bit; the resulting ceil(log₂M)-bit binary number represents a particular instruction, whose instruction code prnum is then read from the C++ vector; and the probability values of the lower 26 bits generate a binary 1 for the corresponding instruction bits. During training, each randomly generated probability vector is input to the neural network to be trained, whose outputs are the coverages of the processor modules.
The efficient instruction generation module comprises a random probability vector generation module, a fixed-parameter neural network, a decision device and an instruction generator. The random probability vector generation module and the instruction generator are implemented in the same way as those of the training module. The fixed-parameter neural network is the result obtained by training the neural network to be trained. The decision device outputs 1 when the neural network output corresponding to a processor module is greater than a certain value, such as 60%, and 0 otherwise. If the outputs of the modules of interest are all 1, the probability vector input to the neural network is provided to the instruction generator.
According to a further embodiment of the present invention, the probability vector is defined such that each bit represents the probability of generating a 1 and may take the values 0, 0.1, 0.2, ..., 1.0. Instructions of M types are generated from the probability vector as follows: each probability in the upper ceil(log₂M) bits generates a 1 for the corresponding binary bit; the resulting ceil(log₂M)-bit binary number represents a particular kind of instruction, whose instruction code prnum is then read from the C++ vector; and the probability values of the lower 26 bits generate a binary 1 for the corresponding instruction bits, which together with prnum constitute the complete instruction.
According to a further embodiment of the present invention, the neural network comprises an input layer, hidden layers and an output layer. The network outputs are the coverage rates of the processor modules; a network for N modules has N outputs, each recorded as 1 when the coverage exceeds a certain value and 0 otherwise. Each input is a ceil(log₂M)+26 bit probability vector, where ceil(A) denotes the smallest integer greater than or equal to A. The neural network is a multiple-output regression network, trained using the N module coverages and the probability vectors with the loss function
$$L=\sum_{i=1}^{N}\left(\hat{y}_i-y_i\right)^2$$
where $\hat{y}_i$ is the desired coverage value and $y_i$ is the neural network output value; differentiation and gradient-descent operations train the neural network and update the network parameters. A randomly generated ceil(log₂M)+26 bit probability vector is input into the trained neural network; if the outputs of the modules of interest are all 1, the required instructions are generated from the probability vector in the manner described above and input into the processor RTL code for verification.
An electronic device or server comprising: one or more processors; memory to store one or more instructions, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement a method as described above.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for generating an instruction test sequence based on a neural network is characterized by comprising the following steps:
randomly generating a probability vector;
generating instructions of M types through an instruction generator according to the probability vector;
sending the instructions into the processor hardware code to obtain the coverage of each processor module;
training a neural network with a plurality of such probability vectors as inputs and the processor module coverages as outputs;
after the neural network is trained, obtaining a fixed-parameter neural network, inputting a randomly generated probability vector into it, and judging whether its outputs exceed a threshold value;
if the outputs corresponding to the processor modules of interest all exceed the threshold, selecting the corresponding probability vector to generate the test sequence.
2. The method for generating the instruction test sequence according to claim 1, wherein the specific process of generating the instruction by the instruction generator comprises:
each probability in the upper ceil(log₂M) bits of the probability vector generates a 1 for the corresponding binary bit, and the resulting ceil(log₂M)-bit binary number represents a particular instruction, wherein M represents the number of instruction types and ceil(A) represents the smallest integer greater than or equal to A;
reading the corresponding actual MIPS operation code and function code out of a C++ container of integer elements;
each probability value in the lower 26 bits of the probability vector generates a binary 1 for the corresponding instruction bit.
3. The method of claim 1, wherein the training of the neural network comprises:
a loss function
$$L=\sum_{i=1}^{N}\left(\hat{y}_i-y_i\right)^2$$
is used, wherein $\hat{y}_i$ is the desired coverage value, $y_i$ is the neural network output value, and $N$ is the number of modules;
and the neural network is trained and the network parameters updated by using a gradient descent method.
4. The method of claim 1, wherein the probability vector is defined such that each bit represents the probability of generating a 1, taking the values 0, 0.1, 0.2, ..., 1.0.
5. An instruction test sequence generation device based on a neural network is characterized by comprising a neural network training module and an instruction generation module, wherein,
the neural network training module comprises a random probability vector generating unit, an instruction generator and a neural network to be trained, and is used for training the neural network;
the instruction generation module comprises a random probability vector generation unit, a fixed parameter neural network, a decision device and an instruction generator and is used for generating a test sequence.
6. The apparatus according to claim 5, wherein the specific process of generating the instruction by the instruction generator comprises:
each probability in the upper ceil(log₂M) bits of the probability vector generates a 1 for the corresponding binary bit, and the resulting ceil(log₂M)-bit binary number represents a particular instruction, wherein M represents the number of instruction types and ceil(A) represents the smallest integer greater than or equal to A;
reading the corresponding actual MIPS operation code and function code out of a C++ container of integer elements;
each probability value in the lower 26 bits of the probability vector generates a binary 1 for the corresponding instruction bit.
7. The apparatus of claim 5, wherein the process of training the neural network comprises:
a loss function
$$L=\sum_{i=1}^{N}\left(\hat{y}_i-y_i\right)^2$$
is used, wherein $\hat{y}_i$ is the desired coverage value, $y_i$ is the neural network output value, and $N$ is the number of modules;
and the neural network is trained and the network parameters updated by using a gradient descent method.
8. The apparatus as claimed in claim 6, wherein the probability vector is defined such that each bit represents the probability of generating a 1, taking the values 0, 0.1, 0.2, ..., 1.0.
9. An electronic device, comprising:
one or more processors;
a memory to store one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
CN202010740929.3A 2020-07-28 2020-07-28 Efficient instruction test sequence generation method and device based on neural network Pending CN111858221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010740929.3A CN111858221A (en) 2020-07-28 2020-07-28 Efficient instruction test sequence generation method and device based on neural network


Publications (1)

Publication Number Publication Date
CN111858221A true CN111858221A (en) 2020-10-30

Family

ID=72948155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010740929.3A Pending CN111858221A (en) 2020-07-28 2020-07-28 Efficient instruction test sequence generation method and device based on neural network

Country Status (1)

Country Link
CN (1) CN111858221A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076200A1 (en) * 2015-09-15 2017-03-16 Kabushiki Kaisha Toshiba Training device, speech detection device, training method, and computer program product
CN108664632A (en) * 2018-05-15 2018-10-16 华南理工大学 A kind of text emotion sorting algorithm based on convolutional neural networks and attention mechanism
US20200160181A1 (en) * 2018-05-31 2020-05-21 Neuralmagic Inc. Systems and methods for generation of sparse code for convolutional neural networks
CN110363282A (en) * 2019-06-06 2019-10-22 中国科学院信息工程研究所 A kind of network node label Active Learning Method and system based on figure convolutional network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANGLIN YANG et al.: "On Complete Target Coverage in Wireless Sensor Networks With Random Recharging Rates", IEEE Wireless Communications Letters, vol. 4, no. 1, 28 February 2015, pages 50-53 *
付光杰 et al.: "Application of a Bayesian-prediction bee colony algorithm to wireless sensor network optimization", Journal of Chongqing University (重庆大学学报), vol. 41, no. 5, 15 May 2018, pages 15-22 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365917A (en) * 2020-12-04 2021-02-12 深圳市芯天下技术有限公司 Nonvolatile memory instruction combination verification method and device, storage medium and terminal
CN112365917B (en) * 2020-12-04 2021-11-05 芯天下技术股份有限公司 Nonvolatile memory instruction combination verification method and device, storage medium and terminal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination