CN109634843B - Distributed automatic software testing method and platform for AI chip platform


Info

Publication number
CN109634843B
CN109634843B (application CN201811285022.1A)
Authority
CN
China
Prior art keywords
client
test
server
task
chip
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811285022.1A
Other languages
Chinese (zh)
Other versions
CN109634843A (en)
Inventor
于佳耕
侯朋朋
卢欣晔
汲如意
苏航
Current Assignee
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Software of CAS
Priority to CN201811285022.1A
Publication of CN109634843A
Application granted
Publication of CN109634843B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G06F11/3692 — Software testing: test management for test results analysis
    • G06F11/3664 — Software testing: environments for testing or debugging software
    • G06F11/3688 — Software testing: test management for test execution, e.g. scheduling of test suites
    • G06F9/546 — Interprogram communication: message passing systems or structures, e.g. queues
    • G06F2209/548 — Indexing scheme relating to G06F9/54: Queue

Abstract

The invention discloses a distributed automatic software testing method and platform for AI chip platforms. Through its server, the platform automatically distributes AI software test tasks to client computers equipped with different AI chip platforms, and after the tests finish it gathers the test results on the server for unified management. It then automatically issues an environment-update task as required and launches the same test task a second time on X86 clients, where the task runs as pure software; the second round of test results is likewise collected and managed centrally. After both test runs are finished, the two sets of test results are compared and analyzed automatically, and the analysis result is output. Moreover, the changes between the two offline models are traced, and the offline-model nodes, or the inputs, that caused the errors are located, helping software engineers and hardware engineers troubleshoot problems.

Description

Distributed automatic software testing method and platform for AI chip platform
Technical Field
The invention belongs to the technical field of computer software and provides a distributed automatic software testing method and platform for AI chip platforms, addressing the situation in which AI chip platform software tests are run on many computers or development boards at once.
Background
Software testing (or software inspection) is the process used to verify the correctness, completeness, security, and quality of software. It audits a program by running it under specified conditions and comparing actual outputs with expected outputs, in order to discover bugs, measure software quality, and evaluate whether the software meets its design requirements. Software testing emerged together with software itself; in the early days it was the responsibility of the software developers, whereas today it is carried out by dedicated testers.
As chips with AI compute acceleration and the algorithms they support become increasingly diverse, AI software grows more and more complex, and the corresponding testing work becomes heavier. In practice, a given piece of software often requires many different tests. For example, to cover application scenarios such as ResNet classification or SSD object detection, the accuracy and performance of neural-network forward inference on the AI chip platform must be tested across many combinations of parameters and image inputs; dozens of modules are involved, each module has tens or even hundreds of test points, and in the end hundreds or thousands of test cases are required.
The mainstream approach to testing AI software today is to test manually or to run simple scripts, recording the results by hand. Typically, after a test case is designed and passes review, the tester executes it step by step according to the procedure it describes, obtains the actual result on the AI chip platform, and compares it with the expected result from the CPU. The invention introduces distributed automated testing to the AI chip platform, using machines at scale to automate AI software testing.
Software test automation is the process of turning human-driven test behavior into machine execution. Research in this field focuses mainly on automating the management of the software testing process and automating dynamic tests (e.g., unit, functional, and performance testing). In both areas, the advantages over manual testing are evident. First, automated testing improves test efficiency, letting testers concentrate on building and developing new test modules and thereby improving test coverage. Second, it makes the digital management of test assets easier, so that those assets can be reused across the whole test life cycle; this matters especially for functional and regression testing. In addition, automated management of the test process makes an organization's testing activities more systematic. In practice there is often a two-run comparison scenario for verifying an algorithm against an AI chip platform: given, say, a CPU-based implementation of a neural-network algorithm, checking the validity of the AI chip platform requires running the test again on a machine that has the AI chip platform and then comparing the data and performance of the two runs. Conventional automated software testing does not support this scenario.
Disclosure of Invention
In view of the technical problems in the prior art, the invention aims to provide a distributed automatic software testing method and platform for AI chip platforms. Through its server, the platform automatically distributes AI software test tasks to client computers equipped with different AI chip platforms, gathers the test results on the server for unified management after the tests finish, then automatically issues an environment-update task as required and launches the same test task a second time on X86 clients, where the task runs as pure software; the second round of results is likewise collected and managed centrally. After both test runs are finished, the two sets of results are compared and analyzed automatically, and the analysis result is output. Moreover, the changes between the two offline models are traced, and the offline-model nodes, or the inputs, that caused the errors are located, helping software engineers and hardware engineers troubleshoot problems.
The invention discloses a distributed automatic software testing platform for an AI chip platform.
The technical scheme mainly comprises the development of a server side, an x86 client side, an AI chip client side, and a message queue for communication among them. The automated test platform is built jointly from a server (Master) and a number of attached client (Slave) machines. The Master controls the whole software-testing process: it assigns test tasks to clients according to each client's current task-execution status, receives the test results returned by the clients, and analyzes them. Each client runs the test tasks it receives according to the server's instructions and feeds the results back to the server. Server and clients communicate through the message queue.
1) As shown in FIG. 1, the server side mainly comprises a system initialization module, a task distribution module, a result acquisition module, and a result comparison and analysis module. The initialization module loads the software test task list (test tasks are initialized and managed uniformly by the Master node) and the client configuration file (whose information includes each client Slave machine's IP address, whether it is an X86 or AI chip client, its CPU model, and the test-task list bound to that machine), and initiates heartbeat requests to the clients to confirm that each is available. The task distribution module is responsible for finding unexecuted test tasks and idle clients and distributing the tasks to the idle clients. The result acquisition module receives the test results fed back by the clients and stores them in a database. The result comparison and analysis module reads the results of the two test runs from the database, compares them, and analyzes and outputs the result items that differ. Finally, the data of the exported binary offline models are diffed to trace the changes between the two offline models, including changes in model data such as the computation graph and weights as well as changes in memory layout, locating the offline-model nodes, or the inputs, that caused the errors and helping software and hardware engineers troubleshoot problems.
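For illustration only, one entry of such a client configuration file might look as follows; the key names are assumptions, since the text above only specifies which information is carried (IP address, X86/AI chip client type, CPU model, and bound test-task list):

```python
# Hypothetical client configuration entries (key names are illustrative only;
# the patent specifies the carried information, not a concrete schema).
CLIENT_CONFIG = [
    {"client_id": "AI_client1",  "ip": "192.168.1.11", "type": "AI",
     "cpu": "example-cpu-model", "bound_tasks": ["case1"]},
    {"client_id": "x86_client1", "ip": "192.168.1.21", "type": "X86",
     "cpu": "example-cpu-model", "bound_tasks": ["case1"]},
]
```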
2) As shown in FIG. 2, the client side mainly comprises three parts: environment initialization, environment updating, and neural-network computation execution. Environment initialization performs general-purpose checks and preparatory work such as deployment: for an X86 client it initializes the components used for neural-network computation, and for an AI chip client it deploys the runtime environment. Environment updating installs and deploys the pieces specific to a given test task, including deploying updated offline models. Neural-network computation execution runs the specific test tasks according to the server's task distribution list and, once a run finishes, pushes the detection results and related information into the message queue for the server to fetch and analyze.
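A minimal sketch of this client-side flow follows; the helper functions are hypothetical placeholders standing in for the three parts described above:

```python
# Sketch of the client main loop; the helpers are hypothetical placeholders.
import json

def init_environment(client_id):        # general checks and one-time deployment
    pass

def update_environment(task):           # task-specific setup, e.g. offline model
    pass

def run_neural_network(task):           # execute the assigned test task
    return {"passed": True}

def client_main(client_id, queue):
    init_environment(client_id)
    for body in queue.consume(client_id):   # block on task-distribution messages
        task = json.loads(body)
        update_environment(task)
        result = run_neural_network(task)
        queue.publish("results", json.dumps({
            "task_id": task["task_id"], "client_id": client_id,
            "status": "finished", "result": result,
        }))
```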
3) The message queue serves as the bridge between the server side and the clients and carries the messages between them. The server submits test tasks to the message queue; each client registers on the queue, receives the tasks addressed to it, and executes them. After executing a task, the client likewise sends the test result to the message queue, where it waits for the server to fetch it.
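The patent does not name a concrete queue implementation; the following is a minimal sketch assuming RabbitMQ via the pika library, with one task queue per registered client and a shared results queue:

```python
# Minimal message-queue bridge sketch (RabbitMQ/pika is an assumption).
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="tasks.AI_client1")  # per-client task queue
ch.queue_declare(queue="results")           # shared result queue

# Server side: submit a test task addressed to one client.
task = {"task_id": 1, "case": "case1", "client_id": "AI_client1"}
ch.basic_publish(exchange="", routing_key="tasks.AI_client1",
                 body=json.dumps(task))

# Client side: register on its own queue, execute, and report back.
def on_task(channel, method, properties, body):
    msg = json.loads(body)
    result = {"task_id": msg["task_id"], "client_id": msg["client_id"],
              "status": "finished"}           # actual test execution elided
    channel.basic_publish(exchange="", routing_key="results",
                          body=json.dumps(result))
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="tasks.AI_client1", on_message_callback=on_task)
ch.start_consuming()
```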
The method comprises the following specific steps:
1) Server system initialization: the server first loads the test-case table and the client configuration table, automatically generates large-scale test data, and uses the AI chip tool chain to compile offline models from this information. The offline models are used by the AI chip clients, while the X86 clients use the network-stored test data directly. The server then sends heartbeat messages to every client to check whether each is in a normal state.
2) After receiving the heartbeat message, each client checks its basic environment and, if everything is normal, replies to the server with a ready response.
3) The server updates its client state table according to the responses from the X86 and AI chip clients, then sends an initialization command to each client so that it prepares for work.
4) Each client updates its local execution environment according to the declaration in the JSON file and feeds a Ready state back to the server.
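As an illustration, such a JSON declaration might look like the following; the key names and paths are assumptions, while the declared contents (offline model, test-data storage path, hardware-matched library version, environment variables) follow the claims:

```python
# Hypothetical JSON initialization command (keys and paths are illustrative).
init_cmd = {
    "offline_model": "/share/models/case1_offline.model",
    "test_data_path": "/share/data/case1/",
    "lib_version": "1.3.0",                        # hardware-matched library version
    "env": {"LD_LIBRARY_PATH": "/opt/ai_sdk/lib"}, # environment variables
}
```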
5) Server task distribution: from the client Slave metadata it maintains (including each client's hardware type, Ready state, and Process state) and the test-task metadata (including each task's state: distributed, completed, or unexecuted), the server selects unexecuted tasks and idle clients and sends those tasks to the idle clients for execution. It also polls the client states at regular intervals to keep the Slave metadata current.
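A minimal sketch of this matching step follows; the data structures are assumptions that hold only the metadata listed above:

```python
# Sketch of matching unexecuted tasks to idle clients of the right hardware
# type (structures are illustrative; the patent describes only the metadata).
def distribute(tasks, clients, send):
    """tasks:   task_id -> {"state": "unexecuted"|"distributed"|"completed",
                            "hw": "AI"|"X86"}
       clients: client_id -> {"hw": "AI"|"X86", "ready": bool, "busy": bool}
       send(client_id, task_id): pushes the task onto the message queue."""
    for tid, task in tasks.items():
        if task["state"] != "unexecuted":
            continue
        for cid, c in clients.items():
            if c["ready"] and not c["busy"] and c["hw"] == task["hw"]:
                send(cid, tid)
                task["state"] = "distributed"
                c["busy"] = True
                break  # this task is assigned; move on to the next one
```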
6) The client acquires the task: it monitors the message queue and, upon receiving a task-distribution message from the server, jumps to step 7).
7) The client parses the task: it extracts the task number, the test list, the Client_id, and the offline-model or test-data storage path, and checks whether the storage path exists.
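A sketch of this parsing-and-checking step, with assumed field names:

```python
# Client-side task parsing and storage-path check (field names assumed).
import json
import os

def parse_task(body: bytes) -> dict:
    task = json.loads(body)
    path = task.get("offline_model") or task["test_data_path"]
    if not os.path.exists(path):               # verify the storage path exists
        raise FileNotFoundError(f"task {task['task_id']}: missing {path}")
    return task
```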
8) The client executes the test: it runs the specified test task and feeds Process state information back to the server.
9) The client feeds back the test result: after the test result is obtained, the task number, Client_id, state, final result, and related information of the current test task are sent to the server through the message queue.
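Such a result message might be serialized as follows; the exact schema and the values are illustrative assumptions:

```python
# Hypothetical result message (fields follow step 9; values are placeholders).
result_msg = {
    "task_id": 1,
    "client_id": "AI_client1",
    "status": "finished",
    "result": {"top1_accuracy": 0.752, "latency_ms": 3.4},
}
```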
10) Server result acquisition: the server receives the test results fed back by the clients and stores them in the database.
11) CPU task execution: to determine whether the AI chip platform executed the offline model correctly, a CPU task built from the same network and model data is required. The server updates each client's system configuration as needed and initiates the same processing for the CPU test task following steps 5) to 10): the task is sent to the X86 clients in the cluster and executed on the same test data and offline model, and the computation results and performance information are stored in a JSON file and returned to the server.
12) Server result comparison and analysis: the two test results are compared automatically, and the parts that differ are output automatically.
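The patent does not fix a comparison metric; a tolerance-based sketch of the numeric comparison might look like this:

```python
# Sketch of comparing AI chip outputs against the CPU reference with a
# tolerance (the comparison metric itself is an assumption).
import numpy as np

def compare(ai_out: np.ndarray, cpu_out: np.ndarray, rtol=1e-3, atol=1e-5):
    mismatch = ~np.isclose(ai_out, cpu_out, rtol=rtol, atol=atol)
    if mismatch.any():
        first = np.argwhere(mismatch)[0].tolist()
        return {"pass": False, "first_diff": first,
                "num_diff": int(mismatch.sum())}
    return {"pass": True}
```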
13) Server error checking: the server takes the comparison of the CPU test against the AI chip platform test and, once an error is confirmed, analyzes the modifications made to the offline model between the last offline model that tested correct and the offline model used in the current test. Specifically, the data of the two exported offline models are diffed to trace the changes between them, including changes in model data such as the computation graph and weights as well as changes in memory layout. This assists in locating the offline-model nodes, or the inputs, that caused the error and in establishing the correlation between the offline-model modifications and the error, helping software and hardware engineers troubleshoot problems.
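Offline-model formats are vendor-specific, so the following sketch abstracts a model as a computation graph plus named weight tensors; the representation and field names are assumptions:

```python
# Sketch of the offline-model differential analysis over an abstracted model
# representation (graph + weights; loaders and fields are hypothetical).
import numpy as np

def diff_models(good, bad):
    """good/bad: {"graph": {node: [successors]}, "weights": {name: ndarray}}"""
    g0, g1 = good["graph"], bad["graph"]
    return {
        "added_nodes":   sorted(set(g1) - set(g0)),
        "removed_nodes": sorted(set(g0) - set(g1)),
        "edge_changes":  [n for n in set(g0) & set(g1) if g0[n] != g1[n]],
        "weight_changes": [
            name for name in set(good["weights"]) & set(bad["weights"])
            if not np.array_equal(good["weights"][name], bad["weights"][name])
        ],
    }
```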
Compared with the prior art, the invention has the following positive effects:
(1) Traditional automated test software is mostly single-machine; the invention is distributed and adopts a Master-Slave architecture, making it more flexible and efficient;
(2) Traditional AI automated test software cannot run a test twice and automatically analyze the two results; for AI chip platforms, the invention provides comparative testing together with automatic analysis of the two test results, making it easier for developers to analyze and solve the corresponding problems.
(3) An erroneous result on an AI chip platform may be caused by hardware or by software, and since offline-model node and edge analysis is complex, defect repair is difficult. The invention tracks offline-model versions and analyzes the changes in the exported offline model (model data such as the computation graph and weights, the memory layout, and so on) together with the causes of the erroneous results, helping software and hardware engineers troubleshoot problems efficiently.
Drawings
FIG. 1 is a server-side architecture diagram of the distributed automatic software testing platform for an AI chip platform;
FIG. 2 is a client-side architecture diagram of the distributed automatic software testing platform for an AI chip platform.
Detailed Description
The invention is further illustrated by the following examples, without in any way limiting its scope.
The following test scenario is set up: one server; eight clients, namely AI_client1, AI_client2, AI_client3, AI_client4, x86_client1, x86_client2, x86_client3, and x86_client4; and a test-case table with five test items, case1 through case5, whose contents are given in Table 1:
TABLE 1. Description of the 5 cases

Case name    Description
Case1        Test ResNet based on TensorFlow
Case2        Test SSD based on TensorFlow
Case3        Test VGG based on TensorFlow
Case4        Test AlexNet based on TensorFlow
Case5        Test Conv based on TensorFlow
The implementation steps are as follows:
1) Server system initialization: the server loads configuration files such as the test tables for cases 1-5 and the client configuration table, completes its own initialization (including test-data generation, offline models, and so on), and then sends an initialization command to clients AI_client1-AI_client4 and x86_client1-x86_client4; the command contains a script that downloads and installs the corresponding dependency package, TensorFlow;
2) client environment deployment: and after receiving the environment initialization command from the server side, running the command and installing the environment related to the TensorFlow platform.
3) Client ready feedback: after finishing initialization, each client sends a ready signal to the server.
4) Server task distribution: according to the client feedback, there are currently 8 idle clients and 5 test tasks. The server pairs clients with test tasks, distributing case1 to AI_client1 and x86_client1, case2 to AI_client2 and x86_client2, case3 to AI_client3 and x86_client3, and case4 to AI_client4 and x86_client4 to test the target networks. At this point all 8 clients are busy, no suitable client is available to execute test task case5, and case5 therefore continues to wait.
5) Clients AI_client1-AI_client4 and x86_client1-x86_client4 acquire their tasks: each monitors the message queue and, upon receiving a task-distribution message from the server, jumps to step 6).
6) The client parses the task: it extracts the task number, test list, Client_id, and related information; for example, AI_client1 and x86_client1 receive the case1 task, i.e., testing ResNet.
7) The client executes the test: the corresponding clients run the specified test task. For example, AI_client1 and x86_client1 start executing the TensorFlow-based ResNet test, where AI_client1 loads the offline model for testing and x86_client1 parses the network and test data for testing.
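For illustration, the x86 pure-software reference run could resemble the following, using tf.keras.applications.ResNet50 as a stand-in for the case1 network (the patent does not specify the exact network definition, weights, or data files):

```python
# Minimal sketch of an x86 reference run of ResNet forward inference
# (ResNet50 and the random input are stand-ins for the shared test data).
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)   # weights would come from test data
x = np.random.rand(1, 224, 224, 3).astype("float32")   # placeholder test input

t0 = time.time()
y = model.predict(x)                                   # pure-software forward inference
latency_ms = (time.time() - t0) * 1000.0

# The output tensor and latency are what would be packed into the result
# message and later compared against the AI chip platform's run.
print(y.shape, latency_ms)
```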
8) Client test-result feedback: after obtaining their test results, AI_client1-AI_client4 and x86_client1-x86_client4 send the task number, Client_id, status, final result, and related information of the current test task to the server through the message queue. Supposing AI_client3 and x86_client3 are the first to finish and report, the server meanwhile jumps back to step 4) and distributes test task case5 to AI_client3 and x86_client3 for execution.
9) Server result acquisition: and receiving a test result fed back from the client and storing the result in a database.
10) Server result comparison and analysis: after all test tasks are completed, the server reads the paired AI-chip-platform and x86-platform test results for all cases from the database and analyzes them. If the analysis finds a detection item with an incorrect result or degraded performance, say case2, an analysis result is output stating that the two test results for case2 disagree. If there is no error, the verified neural-network offline model of this run is saved for subsequent comparative analysis.
11) Server error checking: based on the analysis of the two test runs in step 10), a neural-network directed-graph cutting method is first used to compare the most recently verified-correct offline model of the same network against the current erroneous offline model. This yields the changes in the erroneous model — model data such as the computation graph and weights, as well as the memory layout — and assists the programmer in troubleshooting the error.
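The patent does not detail the directed-graph cutting method; one plausible reading is sketched below, cutting the network after a prefix of nodes and bisecting to find the first node whose output diverges between the two platforms (run_prefix is a hypothetical helper that executes the first k nodes on a given platform):

```python
# Sketch of localizing the first divergent node by cutting the directed graph
# after a prefix of topologically ordered nodes and bisecting on its length.
import numpy as np

def locate_divergence(nodes, run_prefix, x, rtol=1e-3):
    """nodes: topologically ordered node names.
       run_prefix(platform, nodes, x): hypothetical sub-graph executor."""
    if np.allclose(run_prefix("ai_chip", nodes, x),
                   run_prefix("cpu", nodes, x), rtol=rtol):
        return None                    # full outputs agree: nothing to localize
    lo, hi = 0, len(nodes)             # invariant: prefix lo agrees, prefix hi diverges
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if np.allclose(run_prefix("ai_chip", nodes[:mid], x),
                       run_prefix("cpu", nodes[:mid], x), rtol=rtol):
            lo = mid
        else:
            hi = mid
    return nodes[hi - 1]               # first node whose output disagrees
```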
The above embodiments are intended only to illustrate the technical solution of the invention, not to limit it; a person skilled in the art may modify the technical solution or substitute equivalents without departing from the spirit and scope of the invention, and the protection scope of the invention should be determined by the claims.

Claims (8)

1. A distributed automatic software testing method for an AI chip platform, comprising the following steps:
1) the server generates test data and offline models according to the loaded test-case table and client configuration table, wherein an AI chip tool chain is used to compile the offline models; the clients comprise x86 clients and AI chip clients; the offline models are used by the AI chip clients;
2) each client updates its local execution environment according to the server's initialization command; the initialization command comprises an offline model, a test-data storage path, a hardware-matched library version, and environment variables;
3) the server sends unexecuted tasks to idle AI chip clients for execution; the AI chip client parses the received task to obtain the task number, test list, client number Client_id, and offline-model information or test-data storage path, then executes the specified test task and feeds the test result back to the server; the server receives the test result fed back by the AI chip client and stores it in the database;
4) the server sends unexecuted CPU test tasks to idle x86 clients for execution; the x86 client parses the received task to obtain the task number, test list, client number Client_id, and offline-model information or test-data storage path, then executes the specified test task and feeds the test result back to the server; the server receives the test result fed back by the x86 client and stores it in the database;
5) the server automatically compares the two test results and outputs the parts that differ; the server exports the offline-model data from the two test results and performs differential analysis to obtain the change information of the offline model, including computation-graph changes, weight changes, and memory-layout changes; the server compares each test result with a preset test result to check whether the offline model is correct, and then analyzes the verified-correct offline model against the erroneous offline model using a neural-network directed-graph cutting method to obtain the change information of the offline model.
2. The method according to claim 1, wherein the server sends heartbeat information to each client to check whether each client is in a normal state; after receiving the heartbeat message, the client checks its basic environment and, if everything is normal, replies to the server with a ready response; the server then updates the client state table according to the clients' responses and sends an initialization command to each client.
3. The method of claim 1, wherein the initialization command is a JSON file.
4. The method of claim 1, wherein the server side maintains metadata of the client side and metadata of the test task; the metadata of the client comprises a client hardware type, a Ready state and Process state information, and the metadata of the test tasks comprises the state of each task.
5. A distributed automatic software testing platform for an AI chip platform, characterized by comprising a server side and clients, the clients comprising x86 clients and AI chip clients; wherein:
the server side generates test data and offline models according to the loaded test-case table and client configuration table, using an AI chip tool chain to compile the offline models for the AI chip clients; it sends unexecuted tasks to idle AI chip clients for execution and unexecuted CPU test tasks to idle x86 clients for execution; according to the received test results, it performs automatic comparison and outputs the parts that differ; the server side exports the offline-model data from the two test results and performs differential analysis to obtain the change information of the offline model, including computation-graph changes, weight changes, and memory-layout changes; the server compares each test result with a preset test result to check whether the offline model is correct, then analyzes the verified-correct offline model against the erroneous offline model using a neural-network directed-graph cutting method to obtain the change information of the offline model;
the AI chip client is used to parse the received task to obtain the task number, test list, client number Client_id, and offline-model information or test-data storage path, then execute the specified test task and feed the test result back to the server; the server receives the test result fed back by the AI chip client and stores it in the database;
the x86 client is used to parse the received task to obtain the task number, test list, client number Client_id, and offline-model information or test-data storage path, then execute the specified test task and feed the test result back to the server; the server receives the test result fed back by the x86 client and stores it in the database.
6. The platform of claim 5, wherein the information in the client configuration table includes the client's IP address, its type (X86 or AI chip client), its CPU model, and the list of test tasks bound to the client.
7. The platform of claim 5, wherein the server side communicates with the x86 client and the AI chip client through message queues.
8. The platform of claim 7, wherein the server submits the test tasks to a message queue, and each client registers on the message queue and then receives and executes the tasks sent to itself in the message queue; after executing a task, the client sends the test result to the message queue to wait for the server to obtain it.
CN201811285022.1A 2018-10-31 2018-10-31 Distributed automatic software testing method and platform for AI chip platform Expired - Fee Related CN109634843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811285022.1A CN109634843B (en) 2018-10-31 2018-10-31 Distributed automatic software testing method and platform for AI chip platform

Publications (2)

Publication Number Publication Date
CN109634843A CN109634843A (en) 2019-04-16
CN109634843B (en) 2021-09-21

Family

ID=66066994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811285022.1A Expired - Fee Related CN109634843B (en) 2018-10-31 2018-10-31 Distributed automatic software testing method and platform for AI chip platform

Country Status (1)

Country Link
CN (1) CN109634843B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070176A (en) * 2019-04-18 2019-07-30 北京中科寒武纪科技有限公司 The processing method of off-line model, the processing unit of off-line model and Related product
US11983535B2 (en) 2019-03-22 2024-05-14 Cambricon Technologies Corporation Limited Artificial intelligence computing device and related product
CN110910945B (en) * 2019-11-19 2021-08-17 深圳忆联信息系统有限公司 Method and device for testing robustness of SSD (solid State disk) signal, computer equipment and storage medium
CN111158967B (en) 2019-12-31 2021-06-08 北京百度网讯科技有限公司 Artificial intelligence chip testing method, device, equipment and storage medium
CN111506508A (en) * 2020-04-17 2020-08-07 北京百度网讯科技有限公司 Edge calculation test method, device, equipment and readable storage medium
CN113742202A (en) * 2020-05-29 2021-12-03 上海商汤智能科技有限公司 AI chip verification system, method, device and storage medium
CN111880433A (en) * 2020-07-02 2020-11-03 上海机电工程研究所 System and method for automatically realizing remote heterogeneous semi-physical simulation test task
CN111949329B (en) * 2020-08-07 2022-08-02 苏州浪潮智能科技有限公司 AI chip task processing method and device based on x86 architecture
CN112527676A (en) * 2020-12-23 2021-03-19 中移(杭州)信息技术有限公司 Model automation test method, device and storage medium
CN115827337A (en) * 2021-09-10 2023-03-21 华为技术有限公司 Model testing method and device
CN116069603B (en) * 2021-09-28 2023-12-08 华为技术有限公司 Performance test method of application, method and device for establishing performance test model
CN113779913B (en) * 2021-11-12 2022-03-22 浙江大学 Verification platform structure and test method for AI multi-chip system
CN117407299B (en) * 2023-10-18 2024-05-07 北京大学 Model test method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101615159A (en) * 2009-07-31 2009-12-30 中兴通讯股份有限公司 Off-line test system and local data management method thereof and corresponding device thereof
CN105786667A (en) * 2016-02-29 2016-07-20 惠州Tcl移动通信有限公司 Distributed automated testing method and system
CN106970880A (en) * 2017-04-28 2017-07-21 中国科学院软件研究所 A kind of distributed automatization method for testing software and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030131088A1 (en) * 2002-01-10 2003-07-10 Ibm Corporation Method and system for automatic selection of a test system in a network environment

Also Published As

Publication number Publication date
CN109634843A (en) 2019-04-16

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20210921)