CN114911692A - Generator, generation method and system for generating test case by operator - Google Patents


Info

Publication number
CN114911692A
CN114911692A (application CN202110182217.9A)
Authority
CN
China
Prior art keywords
test
generator
data
operator
generating
Prior art date
Legal status
Pending
Application number
CN202110182217.9A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee
Anhui Cambricon Information Technology Co Ltd
Original Assignee
Anhui Cambricon Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Cambricon Information Technology Co Ltd
Priority to CN202110182217.9A
Publication of CN114911692A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06F 11/3672: Test management
    • G06F 11/3684: Test management for test design, e.g. generating new test cases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Abstract

The invention relates to a generator, a generation method, and an automatic test system for generating test cases for operators in deep learning. The system is integrated in an integrated circuit device that comprises a universal interconnect interface and other processing devices. The system interacts with the other processing devices to jointly complete computing operations specified by the user. The integrated circuit device may further include a storage device, connected to both the system and the other processing devices, for storing their data.

Description

Generator, generation method and system for generating test case by operator
Technical Field
The present invention relates generally to the field of deep learning. More particularly, the invention relates to a generator, a generation method, and an automatic test system for generating test cases for operators in deep learning.
Background
The high accuracy of operator operations is the basis for the high accuracy of the whole deep learning model, so testing deep learning operators is critical and indispensable. At present, operator tests are generally written by hand: a dedicated piece of code must be written for every scenario to be tested. This approach is costly and inefficient, and the test scenarios are often insufficiently covered.
With the continuous development of testing technology and the growing richness of application scenarios, tools such as Pairwise Independent Combinatorial Testing (PICT) and ALLPAIRS have appeared. PICT is a software test case generation tool based on the orthogonal method, which requires that every combination of levels of any two factors (input conditions) be covered at least once. ALLPAIRS is a test case design tool for Windows that has been ported to a variety of platforms with minor changes to its script file. Both tools design experiments automatically and can select a small amount of data from a massive space of data combinations to generate test cases, which can then be assembled into a variety of preferred combinations. However, these tools only satisfy the most basic construction of test scenario combinations: they can neither generate targeted test cases for operators in deep learning nor generate comparison data.
It follows that none of the current solutions is ideal. To solve these problems, the invention provides a scheme for generating test cases for deep learning operators.
Disclosure of Invention
In order to at least partially solve the technical problems mentioned in the background art, the invention provides a generator, a method and a system for generating a test case for an operator in deep learning.
In one aspect, the present invention discloses a generator for generating test cases for operators in deep learning, comprising: the configuration file unit is used for generating a configuration file required by the test case according to the data information of the operator; the scene generation unit is used for randomly combining the data information in the configuration file according to a preset combination rule to generate a plurality of test scenes; and the case generating unit is used for generating a plurality of test cases according to the plurality of test scenes.
In another aspect, the present invention discloses a method for generating test cases for operators in deep learning, the method comprising: generating a configuration file required by the test case according to the data information of the operator; randomly combining the data information in the configuration file according to a preset combination rule to generate a plurality of test scenes; and generating a plurality of test cases according to the plurality of test scenes.
In another aspect, the present invention discloses an automatic test system, comprising: a processor configured to execute an operator in deep learning based on the test case; and a generator according to the above.
With this scheme, test cases are generated from test scenarios obtained by randomly combining the data information the scenarios require, so the generated test cases can cover a large number of test scenarios.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. In the accompanying drawings, several embodiments of the present invention are illustrated by way of example and not by way of limitation, and like reference numerals designate like or corresponding parts throughout the several views, in which:
FIG. 1 is a schematic diagram of a convolutional neural network according to an embodiment of the present invention;
FIG. 2 is an apparatus diagram illustrating a generator of test cases of an embodiment of the present invention;
FIG. 3 is a block diagram of an integrated circuit device according to an embodiment of the present invention;
FIG. 4 is a block diagram of the board card according to the embodiment of the present invention;
FIG. 5 is a flow chart illustrating a test case generation method of an embodiment of the present invention; and
FIG. 6 is a flow chart illustrating a further method for test case generation in accordance with an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
It should be understood that the terms "first", "second", "third" and "fourth", etc. in the claims, the description and the drawings of the present invention are used for distinguishing different objects and are not used for describing a particular order. The terms "comprises" and "comprising," when used in the description and claims of the present invention, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification and claims of this application, the singular form of "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this specification refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "once", "in response to determining", or "in response to detecting".
The following detailed description of embodiments of the invention refers to the accompanying drawings.
Deep learning is a branch of machine learning, and machine learning is a necessary path to realizing artificial intelligence. The concept of deep learning originates from research on artificial neural networks: a multi-layer perceptron containing several hidden layers is a deep learning structure. Deep learning combines low-level features into more abstract high-level representations of attribute categories or features, so as to discover a distributed feature representation of the data. The motivation for studying deep learning is to build neural networks that simulate the human brain for analytical learning; such networks mimic the mechanisms by which the human brain interprets data such as images, sounds, and text. Deep learning uses a multi-layer network structure, similar to the cognitive structure of the human brain, capable of computation and learning.
Convolutional neural networks are typical deep learning models and generally consist of four layer types: an input layer, a convolutional layer, a pooling layer, and a fully connected layer. Fig. 1 shows the four-layer structure of a convolutional neural network 100.
The input layer 102 intercepts a part of information from the input image, and converts the part of information into a feature matrix for presentation, wherein the feature matrix carries features corresponding to the part of information.
Convolutional layer 104 is configured to receive the feature matrix from input layer 102 and perform feature extraction on the input image through convolution operations. Although Fig. 1 shows only one convolutional layer 104, multiple convolutional layers may be stacked in practice: the earlier layers capture local, detailed information of the image, meaning each pixel of the output image senses the result of computing over only a small window of input values, while the receptive field of later layers grows layer by layer to capture more complex and abstract information. Running multiple convolutional layers finally yields abstract representations of the image at different scales, completing feature extraction of the input image.
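The convolution operation described above can be sketched in plain NumPy. This is a minimal valid-mode 2-D convolution for illustration only (in the cross-correlation form commonly used in deep learning, without kernel flipping), not the patent's implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: each output pixel senses only a
    small window of the input, as described above."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum over one local window of the input.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out
```

Stacking several such layers, each consuming the previous layer's output, is what grows the receptive field layer by layer.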
Pooling layer 106 is configured to replace a region of the image with a single value, typically the maximum or the average of all values in the region: max pooling in the former case, mean pooling in the latter. Pooling reduces the size of the model and increases calculation speed without losing too much information.
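Max and mean pooling as described above can be sketched as follows; this is an illustrative non-overlapping pooling, not the patent's implementation:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Replace each non-overlapping size x size region of x with its
    maximum (max pooling) or its average (mean pooling)."""
    h, w = x.shape
    out = np.empty((h // size, w // size))
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            region = x[i:i+size, j:j+size]
            out[i // size, j // size] = (
                region.max() if mode == "max" else region.mean())
    return out
```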
The fully connected layer 108 acts as a classifier for the whole convolutional neural network 100: it is equivalent to a feature space transformation that extracts and integrates all the useful information above. Moreover, multiple fully connected layers can in theory simulate any nonlinear transformation, enabling information comparison across different classes and thereby determining whether the input image is similar to the compared object.
At least one layer in the deep learning model obtains input data, wherein the input data can be images, sounds, texts and the like, the input data are processed, and deep learning operators such as convolution operators, pooling operators, activation operators and the like are operated in the processing process. The high accuracy of the operator is the basis of the high accuracy of the whole deep learning model.
A test case is a set of test inputs, execution conditions, and expected results tailored to a particular target, used to verify that a specific software requirement is met. In operator testing, different test inputs and execution conditions correspond to test cases for different test scenarios. Test scenarios include function tests, performance tests, stress tests, bandwidth tests, I/O tests, and the like.
FIG. 2 illustrates an automatic test system 20 according to an embodiment of the present invention, where the automatic test system 20 is configured to test operators in deep learning, and includes a generator 200 for generating test cases and a tester 205. The generator 200 is configured to generate a test case for an operator in a neural network, and includes a configuration file unit 201, a scenario generation unit 202, a case generation unit 203, and a data management unit 204. The use case generation unit 203 includes a calculation unit 213 and a storage unit 223. The tester 205 is configured to test operators in the neural network based on the test case generated by the generator 200 to determine the accuracy of the operators in the neural network, and includes an analysis unit 215, an execution unit 225, and a comparison unit 235.
The configuration file unit 201 is configured to generate a configuration file required by the test case according to the data information of the operator, send the configuration file to the scenario generation unit 202, and generate a plurality of test scenarios by the scenario generation unit 202.
In detail, the data information of the operator includes at least one of an input/output number, an input/output dimension, an input/output data type, a data distribution, and parameter information of the operator. The input/output number or the input/output dimension is used for determining the total number of data needed by an operator; the data distribution is used for determining the generation rule of the input data, and comprises normal distribution, binomial distribution, uniform distribution and the like; the input/output data type is used for determining the storage type of operator input/output data so as to facilitate the input/output data to participate in subsequent operation and memory access. In this embodiment, the configuration file uses a Json file as a carrier, and a user sets necessary data information such as the number of inputs/outputs, the input/output dimension, the input/output data type, the data distribution, the number of parameters, and the parameter type of an operator to be tested. The Json file is a simple data exchange format and can exchange data among servers. The Json file stores and represents data in a text format that is completely independent of the programming language. The compact and clear hierarchy makes Json an ideal data exchange language.
The scene generation unit 202 is configured to randomly combine the data information in the configuration file according to a preset combination rule to generate a plurality of test scenarios. Based on the configuration file generated by the configuration file unit 201, the scene generation unit 202 parses the data information of the configuration file and combines it according to the preset combination rule. The preset combination rule involves at least one of the input/output number, the input/output dimension, the input/output data type, the data distribution, and the parameter information. The combination rule makes it possible to traverse a large number of test scenarios, edge test scenarios, and even all test cases, so the test scenarios of a deep learning operator can be covered comprehensively and the accuracy of operator testing improved.
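The combination performed by the scene generation unit 202 can be sketched as a Cartesian combination of the configured factors. The configuration field names below are the same illustrative assumptions as above, not the patent's schema:

```python
from itertools import product

def generate_scenarios(cfg):
    """Combine every configured factor with every other; each resulting
    dict is one test scenario."""
    factors = [cfg["input_dims"], cfg["data_types"],
               cfg["data_distributions"],
               cfg["params"]["kernel_size"], cfg["params"]["stride"]]
    keys = ["dims", "dtype", "distribution", "kernel_size", "stride"]
    return [dict(zip(keys, combo)) for combo in product(*factors)]
```

A full Cartesian product traverses all combinations; a pairwise subset (in the spirit of PICT/ALLPAIRS) could be substituted when the product is too large.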
The use case generating unit 203 is configured to generate a plurality of test cases according to the plurality of test scenarios generated by the scenario generating unit 202.
The calculation unit 213 is configured to run the deep learning operator on the plurality of test scenarios to obtain a plurality of calculation results. Each calculation result comprises the true value generated for the corresponding test scenario, the true value being the result actually obtained when the test case executes the deep learning operator. Each test case is thus the set of a test scenario, its true value, and a standard output value. Besides running the deep learning operator, the calculation unit 213 can also perform other simple tasks, such as processing intermediate results obtained while executing the operator, including accumulation, quantization, transposition, data type conversion, and precision improvement.
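One way the calculation unit 213 could obtain a scenario's true value is sketched below; the scenario fields and the sample operator are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def make_input(distribution, shape, rng):
    """Generate operator input data following the configured
    data-distribution rule (normal, uniform, ...)."""
    if distribution == "normal":
        return rng.standard_normal(shape)
    if distribution == "uniform":
        return rng.uniform(-1.0, 1.0, shape)
    raise ValueError(f"unknown distribution: {distribution}")

def true_value(operator, scenario, seed=0):
    """Run the deep learning operator once for one scenario; the result
    is the true value stored in the corresponding test case."""
    rng = np.random.default_rng(seed)
    x = make_input(scenario["distribution"], scenario["dims"], rng)
    return operator(x)
```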
The storage unit 223 is configured to cooperate with the calculation unit 213 to hold the plurality of calculation results, and is further configured to store the test cases according to a constraint rule. The storage unit 223 may be any of various storage structures such as a buffer, a dedicated memory, or a general-purpose memory. The constraint rule comprises a naming specification of the test case and a structure specification of the test case, the structure specification including the standard names of the data information. Optionally, ProtoBuf, Xml, or Json files are used to set the constraint rules of the test cases; these are tool libraries with efficient protocol data exchange formats. Xml and Json files use field names directly to maintain the mapping between fields and data in the serialized instance, generally stored as character strings in the serialized byte stream; in these two formats the message and its definition are relatively independent, giving better readability. ProtoBuf differs from the Xml and Json serialization modes in that it adopts binary byte serialization and derives the mapping between fields algorithmically from field indexes and field types, achieving higher time and space efficiency, which makes it particularly suitable for occasions sensitive to data size and transmission rate.
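A possible naming specification and Json-based storage of one test case might look like the sketch below; the name layout and field names are hypothetical illustrations, not the patent's actual constraint rule:

```python
import json

def case_name(operator, scenario, version="v1"):
    """A hypothetical naming specification: operator, dimensions,
    data type, and version, joined by underscores."""
    dims = "x".join(str(d) for d in scenario["dims"])
    return f"{operator}_{dims}_{scenario['dtype']}_{version}"

def save_case(path, operator, scenario, true_value, standard_output):
    """Persist one test case (scenario + true value + standard output)
    as Json, following the structure specification."""
    case = {"name": case_name(operator, scenario),
            "scenario": scenario,
            "true_value": true_value,
            "standard_output": standard_output}
    with open(path, "w") as f:
        json.dump(case, f, indent=2)
```

A ProtoBuf schema would serialize the same structure in binary, trading readability for size and speed.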
The data management unit 204 is configured to classify the plurality of test cases according to a preset management rule. It receives the test cases stored in the storage unit 223 and classifies them according to different standards so that, once generated, the test cases can be managed and called uniformly, supporting functions such as test case uploading, test case query, and version management. Further, the preset management rule comprises version information of the test cases: during research and development, the stored test cases are continuously updated as required, and classifying them by version lets a user quickly query the test cases corresponding to each version. The preset management rule further comprises the hardware information on which the deep learning model runs. The tested hardware information includes board card model, operating system, host system memory, and CPU or GPU type; test cases executed in different hardware environments may differ. Distinguishing hardware information serves to generate standard benchmark data on a reliable hardware platform for comparisons of accuracy and performance.
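The classification by version and hardware information could be sketched as a simple index; the key fields are illustrative assumptions, not the patent's management rule:

```python
from collections import defaultdict

def classify_cases(cases):
    """Index test cases by (version, hardware) so that each platform's
    and version's cases can be queried and managed independently."""
    index = defaultdict(list)
    for case in cases:
        index[(case["version"], case["hardware"])].append(case["name"])
    return dict(index)
```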
The analysis unit 215 is configured to parse the test case generated by the generator 200 to obtain the corresponding test scenario, true value, and standard output value, and to send the parsed result to the execution unit 225, which runs the test on the code under test accordingly to obtain the test result of the test case. The comparison unit 235 compares the test result with the standard output value obtained by the analysis unit 215 to produce the final verdict: the test passes when the difference between the test result and the standard output value is within the allowable error range, and fails when it exceeds the allowable error threshold.
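The tolerance comparison performed by the comparison unit 235 can be sketched with NumPy's element-wise closeness check; the tolerance values are illustrative assumptions, not thresholds from the patent:

```python
import numpy as np

def compare(test_result, standard_output, rtol=1e-3, atol=1e-5):
    """Pass when every element of the test result is within the
    allowable error range of the standard output value."""
    return bool(np.allclose(test_result, standard_output,
                            rtol=rtol, atol=atol))
```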
Fig. 3 is a block diagram illustrating an integrated circuit device 300 according to an embodiment of the invention. As shown, integrated circuit device 300 includes system 302, and system 302 may be system 20 of FIG. 2. In addition, integrated circuit device 300 also includes a general interconnect interface 304 and other processing devices 306.
In this embodiment, the other processing devices 306 can be one or more types of general-purpose and/or special-purpose processors, such as a central processing unit, a graphics processor, or an artificial intelligence processor; their number is not limited but determined by actual needs. In certain cases, the other processing devices 306 can serve as an interface between the system 302 and external data and control, performing basic controls including, but not limited to, data handling and turning the machine learning computing device on and off.
According to the technical solution of this embodiment, the universal interconnect interface 304 may be used for transmitting data and control instructions between the system 302 and the other processing devices 306. For example, the system 302 may obtain required input data from the other processing devices 306 via the universal interconnect interface 304 and write it to a storage device on the system 302 chip. Further, the system 302 may obtain control instructions from the other processing devices 306 via the universal interconnect interface 304 and write them into a control cache on the system 302 chip. Alternatively, the universal interconnect interface 304 may also read data from a storage module of the system 302 and transmit it to the other processing devices 306.
Optionally, integrated circuit device 300 may also include storage device 308, which may be connected to system 302 and other processing device 306, respectively. In one or more embodiments, storage device 308 may be used to store data for system 302 and other processing devices 306, and is particularly suitable for data that does not require computation to be stored in its entirety within the internal storage of system 302 or other processing devices 306.
According to different application scenarios, the integrated circuit device 300 of the present invention can serve as a system on chip (SoC) for devices such as mobile phones, robots, unmanned aerial vehicles, and video capture devices, effectively reducing the core area, increasing the processing speed, and reducing the overall power consumption. In this case, the universal interconnect interface 304 of the integrated circuit device 300 is connected to certain components of the apparatus, such as a camera, a display, a mouse, a keyboard, a network card, or a wifi interface.
In some embodiments, the present invention also discloses a chip or integrated circuit chip, which includes an integrated circuit device 300. In other embodiments, the invention further discloses a chip packaging structure, which includes the chip.
In some embodiments, the invention also discloses a board card which comprises the chip packaging structure. Referring to fig. 4, which provides the aforementioned exemplary board 400, the board 400 may include other accessories in addition to the chip 402, which may include but are not limited to: a memory device 404, an interface device 406, and a control device 408.
The memory device 404 is coupled to the chip 402 within the chip package structure via a bus for storing data. The memory device 404 may include multiple banks of memory 410. Each group of memories 410 is connected to chip 402 via a bus. Each bank of memory 410 may be a DDR SDRAM ("Double Data Rate SDRAM").
In one embodiment, the memory device 404 may include 4 banks of memory 410, each of which may include multiple DDR4 chips. In one embodiment, the chip 402 may include four 72-bit DDR4 controllers, in which 64 bits are used for data transmission and 8 bits for ECC checking.
In one embodiment, each bank of memory 410 may include a plurality of double rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice in one clock cycle. A controller for controlling DDR is provided in the chip 402 for controlling data transfer and data storage of each memory 410.
Interface device 406 is electrically connected to chip 402 within the chip package structure and is used to enable data transfer between chip 402 and an external device 412, such as a server or computer. In one embodiment, the interface device 406 may be a standard PCIE interface: for example, the data to be processed is transmitted from the server to the chip 402 through the standard PCIE interface, implementing the data transfer. In another embodiment, the interface device 406 may be another interface; the present invention does not limit its specific form, as long as it can implement the transfer function. In addition, the calculation results of the chip 402 are transmitted back to the external device 412 by the interface device 406.
Control device 408 is electrically connected to chip 402 to monitor its status; specifically, chip 402 and control device 408 may be electrically connected through an SPI interface. The control device 408 may include a single-chip microcontroller ("MCU"). Chip 402 may include multiple processing chips, processing cores, or processing circuits and may drive multiple loads, so it can be in different operating states such as heavy load and light load. The control device 408 can be used to regulate the operating states of the multiple processing chips, processing cores, and/or processing circuits within chip 402.
In some embodiments, the present invention further discloses an electronic device or apparatus, which includes the board card 400. According to different application scenarios, the electronic device or apparatus may include a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet computer, a smart terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device. The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance instrument, a B ultrasonic instrument and/or an electrocardiograph.
Fig. 5 shows another embodiment of the present invention: a method for generating test cases for operators in deep learning, applicable in the test system 20, the integrated circuit device 300, or the board 400. Fig. 5 shows the flow chart of this method.
Step 510: and generating a configuration file required by the test case according to the data information of the operator. In detail, the data information of the operator includes at least one of an input/output number, an input/output dimension, an input/output data type, a data distribution, and parameter information of the operator. The input/output number or the input/output dimension is used for determining the total number of data needed by an operator; the data distribution is used for determining the generation rule of the input data, and comprises normal distribution, binomial distribution, uniform distribution and the like; the input/output data type is used for determining the storage type of the input/output data of the operator so as to facilitate the input/output data to participate in subsequent operation and memory access. In this embodiment, the configuration file uses a Json file as a carrier, and a user sets necessary data information such as the number of inputs/outputs, the input/output dimension, the input/output data type, the data distribution, the number of parameters, and the parameter type of an operator to be tested.
Step 520: randomly combine the data information in the configuration file according to a preset combination rule to generate a plurality of test scenarios. This step parses the data information of the configuration file and combines it according to the preset combination rule, which involves at least one of the input/output number, the input/output dimension, the input/output data type, the data distribution, and the parameter information. The combination rule makes it possible to traverse a large number of test scenarios and even all test cases, so the test scenarios of a deep learning operator can be covered comprehensively and the accuracy of operator testing improved.
Step 530: and generating a plurality of test cases according to the plurality of test scenes. In detail, this step can be expanded into a flowchart as shown in fig. 6.
Step 610: run the deep learning operator on the plurality of test scenes to obtain a plurality of calculation results. Each calculation result includes the true value produced by the corresponding test scene, where the true value is the result actually obtained when the test case executes the deep learning operator. Each test case is thus the set of a corresponding test scene, its true value, and a standard output value.
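The step can be sketched with a toy operator: the implementation under test produces the true value, an independent reference supplies the standard output, and the two are bundled with the scene into one test case. Both implementations and the scene fields here are illustrative assumptions:

```python
import random

def relu_under_test(xs):
    # Stand-in for the deep learning operator being tested.
    return [x if x > 0 else 0.0 for x in xs]

def relu_reference(xs):
    # Independent "standard" implementation supplying the expected output.
    return [max(x, 0.0) for x in xs]

def build_test_case(scene, rng):
    # Generate input data per the scene's distribution, run the operator,
    # and bundle scene + true value + standard output into one test case.
    n = scene["size"]
    if scene["dist"] == "normal":
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    else:  # uniform
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return {
        "scene": scene,
        "true_value": relu_under_test(xs),
        "standard_output": relu_reference(xs),
    }

rng = random.Random(0)
case = build_test_case({"size": 8, "dist": "normal"}, rng)
```

A later comparison stage would check `true_value` against `standard_output` per scene; here the two agree because the toy operator is correct.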
Step 620: save the plurality of calculation results. The test cases are stored according to a constraint rule, which comprises a naming specification for the test cases and a structure specification for the test cases; the structure specification includes the standard names of the data information. Optionally, ProtoBuf, Xml, or Json files are used to set the constraint rules of the test cases; all three are efficient protocol-data-exchange formats with supporting tool libraries. Xml and Json use field names directly to maintain the mapping between fields and data in a serialized instance, and generally store the data in the serialized byte stream as character strings; in both formats, a message and its definition are relatively independent, which gives better readability. ProtoBuf, unlike the Xml and Json serialization modes, adopts binary byte serialization and computes the mapping between fields from the field index and field type, achieving higher time and space efficiency, which makes it particularly suitable for scenarios sensitive to data size and transmission rate.
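A sketch of storing a test case under a naming and structure specification, using Json as the carrier; the naming pattern and standard field names are illustrative assumptions:

```python
import json
import os
import tempfile

def case_filename(operator, dtype, index):
    # Naming specification (illustrative): <operator>_<dtype>_<index>.json
    return f"{operator}_{dtype}_{index:04d}.json"

def save_case(directory, operator, case, index):
    # Structure specification: standard field names every stored test case
    # must use, so downstream tools can parse cases uniformly.
    record = {
        "scene": case["scene"],
        "true_value": case["true_value"],
        "standard_output": case["standard_output"],
    }
    path = os.path.join(
        directory, case_filename(operator, case["scene"]["dtype"], index))
    with open(path, "w") as f:
        json.dump(record, f)
    return path

tmpdir = tempfile.mkdtemp()
case = {"scene": {"dtype": "float32", "dims": [2, 3]},
        "true_value": [1.0, 0.0], "standard_output": [1.0, 0.0]}
saved_path = save_case(tmpdir, "relu", case, 0)
```

A ProtoBuf variant would replace the `json.dump` call with serialization of a compiled message type, trading readability for the time and space efficiency described above.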
Optionally, the method of this embodiment further includes classifying the plurality of test cases according to a preset management rule. In detail, the test cases are classified by different criteria so that, once generated, they can be managed and called uniformly, with support for uploading test cases, querying test cases, version management, and similar functions. Further, the preset management rule includes version information of the test cases: during development the stored test cases are continuously updated as requirements change, and classifying them by version lets a user quickly query the test cases that correspond to each version of the requirements. The preset management rule also includes hardware information for running the deep learning model. The hardware information under test includes the board model, operating system, host memory, and CPU or GPU model; test cases executed in different hardware environments may differ. Distinguishing hardware information makes it possible to generate standard benchmark data on a reliable hardware platform for accuracy and performance comparison.
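The management rule can be sketched as an index keyed by version and hardware information, supporting the query function described above; the key names and case records are illustrative assumptions:

```python
from collections import defaultdict

# Stored test-case records tagged with version and hardware information
# (illustrative data).
cases = [
    {"name": "relu_0000", "version": "v1.0", "hardware": "boardA"},
    {"name": "relu_0001", "version": "v1.0", "hardware": "boardB"},
    {"name": "conv_0000", "version": "v1.1", "hardware": "boardA"},
]

# Classify by (version, hardware) so cases can be managed and queried
# per release and per board model.
index = defaultdict(list)
for c in cases:
    index[(c["version"], c["hardware"])].append(c["name"])

def query(version, hardware):
    # Return the names of test cases matching a version/hardware pair.
    return index.get((version, hardware), [])
```

Uploading a new case is then an append to the matching bucket, and version management reduces to choosing the right key.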
As described in the embodiments above, test cases are generated from test scenes by randomly combining the data information required to build those scenes, so the generated test cases can cover a large number of test scenes.
The foregoing may be better understood in light of the following clauses:
Clause A1. A generator for generating test cases for operators in a neural network, comprising: a configuration file unit configured to generate a configuration file required by the test cases from the data information of the operator; a scene generation unit configured to randomly combine the data information in the configuration file according to a preset combination rule to generate a plurality of test scenes; and a case generation unit configured to generate a plurality of test cases from the plurality of test scenes.
Clause A2. The generator of clause A1, wherein the data information comprises at least one of the number of inputs/outputs, the input/output dimensions, the input/output data type, the data distribution, and parameter information of the operator.
Clause A3. The generator of clause A2, wherein the preset combination rule comprises selecting at least one of the number of inputs/outputs, the input/output dimensions, the input/output data type, the data distribution, and the parameter information.
Clause A4. The generator of clause A1, wherein the case generation unit comprises: a calculation unit configured to run the operator on the plurality of test scenes to obtain a plurality of calculation results; and a storage unit configured to store the plurality of calculation results.
Clause A5. The generator of clause A4, wherein each calculation result includes a true value generated by the corresponding test scene, and each test case is the set of the corresponding test scene, its true value, and a standard output value.
Clause A6. The generator of clause A5, wherein the storage unit is further configured to store the test cases according to a constraint rule.
Clause A7. The generator of clause A6, wherein the constraint rule comprises a naming specification of the test cases and a structure specification of the test cases, the structure specification comprising the standard names of the data information.
Clause A8. The generator of clause A1, further comprising a data management unit configured to classify the plurality of test cases according to a preset management rule.
Clause A9. The generator of clause A8, wherein the preset management rule comprises version information of the test cases.
Clause A10. The generator of clause A8, wherein the preset management rule comprises hardware information for running the neural network.
Clause A11. A method of generating test cases for operators in deep learning, the method comprising: generating a configuration file required by the test cases from the data information of the operator; randomly combining the data information in the configuration file according to a preset combination rule to generate a plurality of test scenes; and generating a plurality of test cases from the plurality of test scenes.
Clause A12. An automatic test system, comprising: a processor configured to execute an operator in deep learning based on the test cases; and the generator of any one of clauses A1 to A10.
The embodiments of the present invention have been described in detail above; specific examples were used to explain the principle and implementation of the invention, and the description of the embodiments is intended only to aid understanding of the method of the invention and its core idea. A person skilled in the art may, following the idea of the invention, vary the specific embodiments and their range of application; in summary, the content of this specification should not be construed as limiting the invention.

Claims (12)

1. A generator for generating test cases for operators in deep learning, comprising:
a configuration file unit configured to generate a configuration file required by the test cases from the data information of the operator;
a scene generation unit configured to randomly combine the data information in the configuration file according to a preset combination rule to generate a plurality of test scenes; and
a case generation unit configured to generate a plurality of test cases from the plurality of test scenes.
2. The generator according to claim 1, wherein the data information comprises at least one of the number of inputs/outputs, the input/output dimensions, the input/output data type, the data distribution, and parameter information of the operator.
3. The generator according to claim 2, wherein the preset combination rule comprises selecting at least one of the number of inputs/outputs, the input/output dimensions, the input/output data type, the data distribution, and the parameter information.
4. The generator according to claim 1, wherein the case generation unit comprises:
a calculation unit configured to run the operator on the plurality of test scenes to obtain a plurality of calculation results; and
a storage unit configured to store the plurality of calculation results.
5. The generator according to claim 4, wherein each calculation result comprises a true value generated by the corresponding test scene, and wherein each test case is the set of the corresponding test scene, the corresponding true value, and a standard output value.
6. The generator according to claim 5, wherein the storage unit is further configured to store the test cases according to a constraint rule.
7. The generator according to claim 6, wherein the constraint rule comprises a naming specification of the test cases and a structure specification of the test cases, wherein the structure specification comprises the standard names of the data information.
8. The generator according to claim 1, further comprising a data management unit configured to classify the plurality of test cases according to a preset management rule.
9. The generator according to claim 8, wherein the preset management rule comprises version information of the test cases.
10. The generator according to claim 8, wherein the preset management rule comprises hardware information for running the operators.
11. A method of generating test cases for operators in deep learning, the method comprising:
generating a configuration file required by the test cases from the data information of the operator;
randomly combining the data information in the configuration file according to a preset combination rule to generate a plurality of test scenes; and
generating a plurality of test cases from the plurality of test scenes.
12. An automatic test system, comprising:
a processor configured to execute a deep learning operator based on the test cases; and
the generator according to any one of claims 1 to 10.
CN202110182217.9A 2021-02-09 2021-02-09 Generator, generation method and system for generating test case by operator Pending CN114911692A (en)


Publications (1)

Publication Number Publication Date
CN114911692A 2022-08-16

Family

ID=82761473



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination