CN113360408A - Automated testing method and device based on a drag-and-drop machine learning platform - Google Patents

Automated testing method and device based on a drag-and-drop machine learning platform

Info

Publication number
CN113360408A
Authority
CN
China
Prior art keywords
test
execution
case
test case
script
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110799046.4A
Other languages
Chinese (zh)
Inventor
郑小燕
李钦
刘翰林
王江宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110799046.4A
Publication of CN113360408A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3664 Environments for testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3692 Test management for test results analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Abstract

The application provides an automated testing method and device based on a drag-and-drop machine learning platform, wherein the method comprises the following steps: generating a test case for a test object according to the request body and response body of a modeling DAG task flow; setting the running environment, dependency relationships and serial/parallel mode for the test case; executing the test script on a schedule to obtain the execution result of the test case; comparing the execution result of the test case with the expected result; and automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic. This scheme overcomes the low efficiency and low accuracy of existing automated testing of task flows built on drag-and-drop machine learning platforms, achieving efficient and accurate automated testing of such task flows.

Description

Automated testing method and device based on a drag-and-drop machine learning platform
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an automated testing method and device based on a drag-and-drop machine learning platform.
Background
With the continuous development of artificial intelligence technology, the concept of a machine learning platform has been proposed to lower the barrier to entry for machine learning, and drag-and-drop modeling is the dominant mode for such platforms. A drag-and-drop machine learning platform integrates a rich set of machine learning components (operators) and algorithms and provides a friendly visual interface, so that a modeling user can construct a complex modeling task flow simply by dragging components. A modeling task flow constructed by dragging multiple components can be represented as a directed acyclic graph (DAG), and a complete DAG task flow comprises: data set introduction, data set processing (possibly multiple steps), data set splitting, feature extraction, model training and parameter tuning, model evaluation, model effect evaluation, model release and model inference.
Testing a task flow on a machine learning platform requires logic tests covering each individual component, logic tests of combinations of multiple components, single-branch and multi-branch tests of the task flow, and parallel-execution tests of multiple task flows. Beyond the task flow itself, testing also includes combined verification of algorithm tuning parameters, evaluation of the algorithm model's effectiveness, verification of the model release process, and verification of the model inference service. With so much to cover, testers relying on manual testing cannot keep pace with the platform's development. On the other hand, a drag-and-drop machine learning platform has a visual interface, and traditional WEB automated testing mainly relies on page-element positioning to faithfully simulate user operations. However, the drag-and-drop interface is variable and element positions are unstable, making simulated mouse clicks extremely difficult, especially for the large number of components in a modeling DAG; both locating elements and developing the automated test code are time-consuming and labor-intensive.
No effective solution has yet been proposed for automated testing of task flows built on a drag-and-drop machine learning platform.
Disclosure of Invention
The application aims to provide an automated testing method and device based on a drag-and-drop machine learning platform, so that task flows built on such a platform can be tested automatically.
The automated testing method and device based on a drag-and-drop machine learning platform provided by the application are realized as follows:
An automated testing method based on a drag-and-drop machine learning platform, the method comprising:
generating a test case for a test object according to the request body and response body of a modeling DAG task flow;
setting the running environment, dependency relationships and serial/parallel mode for the test case;
executing the test script on a schedule to obtain the execution result of the test case;
comparing the execution result of the test case with the expected result;
and automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic.
In one embodiment, the test object includes the execution of component tasks of at least one of: data introduction, data processing, data splitting, feature extraction, model training, batch prediction, model release and model inference.
In one embodiment, generating a test case for a test object from the request body and response body of a modeling DAG task flow comprises:
automatically intercepting the request body and response body of a DAG task flow initiated by a user during modeling on the machine learning platform;
naming the request body and response body by splicing the request URL with a timestamp;
and filling the URL, request method, request parameters and response body from the request, as variable values used in the test script, into the columns of an EXCEL sheet row by row to serve as a test case.
In one embodiment, setting the running environment, dependency relationships and serial/parallel mode for the test case includes:
dividing the parameters of the test case into fixed values and variable values;
parameterizing the variable values, wherein the parameterized variable values are acquired in real time through interaction with a database;
setting the switch value for automatic-correction judgment in the test case;
taking the value of the core indicator in the response body as the expected value of the case;
and setting tasks with dependency relationships in the DAG task flow to execute serially, and tasks at the same level in the DAG task flow to execute in parallel.
In one embodiment, executing the test script on a schedule to obtain the execution result of the test case includes:
determining whether a preset trigger condition is met;
triggering execution of the test script when the preset trigger condition is met;
judging whether a dependent script exists and, if so, waiting until the dependent script has finished executing successfully before starting execution;
during script execution, reading a single case when the test cases execute serially, and reading multiple cases in a batch when the test cases execute in parallel, the batch size being determined by a preset degree of parallelism.
In one embodiment, the script execution process further comprises:
calling the request class in the test code according to the request method of the test object in the test case;
judging, according to the called request class, when execution of the test object has ended;
in the case of synchronous execution, taking the obtained response body as the end identifier;
in the case of asynchronous execution, polling the component execution state through interaction with the database to determine whether the task has finished executing.
In one embodiment, automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic comprises:
acquiring the automatic-correction switch value of the test case;
replacing the expected value of the test case with the obtained test result when the automatic-correction switch value is on, the task executed successfully, and the execution result is inconsistent with the expected result;
and leaving the expected value of the test case unchanged when the automatic-correction switch value is off or the task execution failed.
An automated testing device based on a drag-and-drop machine learning platform, comprising:
a generating module, configured to generate a test case for a test object according to the request body and response body of a modeling DAG task flow;
a setting module, configured to set the running environment, dependency relationships and serial/parallel mode for the test case;
an execution module, configured to execute the test script on a schedule to obtain the execution result of the test case;
a comparison module, configured to compare the execution result of the test case with the expected result;
and a correction module, configured to automatically correct the expected value of the case according to the comparison result and the automatic-correction judgment logic.
An electronic device comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the following method:
generating a test case for a test object according to the request body and response body of a modeling DAG task flow;
setting the running environment, dependency relationships and serial/parallel mode for the test case;
executing the test script on a schedule to obtain the execution result of the test case;
comparing the execution result of the test case with the expected result;
and automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic.
A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of a method comprising:
generating a test case for a test object according to the request body and response body of a modeling DAG task flow;
setting the running environment, dependency relationships and serial/parallel mode for the test case;
executing the test script on a schedule to obtain the execution result of the test case;
comparing the execution result of the test case with the expected result;
and automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic.
According to the automated testing method based on a drag-and-drop machine learning platform, test cases that conform to script execution are generated by capturing the request body and response body, so no manual packet capture or recording is needed; the running environment, dependency relationships and serial/parallel mode of the test cases are established; the test script is triggered on a schedule to execute the test cases; and the expected values of the cases are corrected automatically based on the execution results, so that script execution errors are reported and adjusted, improving test accuracy. This scheme overcomes the low efficiency and low accuracy of existing automated testing of task flows built on drag-and-drop machine learning platforms, achieving efficient and accurate automated testing of such task flows.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of the automated testing method based on a drag-and-drop machine learning platform provided herein;
FIG. 2 is a flowchart of the continuous testing method for machine learning modeling based on a DAG graph provided by the present application;
FIG. 3 is a schematic diagram of a generate test object module provided herein;
FIG. 4 is a schematic diagram of test case management provided herein;
FIG. 5 is an example diagram of the optimized combination of test case execution for a machine-learning-modeling DAG task flow provided herein;
FIG. 6 is a schematic diagram of test script management provided herein;
FIG. 7 is a test script execution diagram provided herein;
FIG. 8 is a schematic diagram illustrating comparison of execution results provided herein;
FIG. 9 is a schematic diagram of the automatic correction decision provided by the present application;
FIG. 10 is a block diagram of the hardware structure of an electronic device for the automated testing method based on a drag-and-drop machine learning platform provided in the present application;
FIG. 11 is a schematic block diagram of an embodiment of the automated testing apparatus based on a drag-and-drop machine learning platform provided in the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of an automated testing method based on a drag-and-drop machine learning platform according to the present application. Although the present application provides method operational steps or apparatus configurations as illustrated in the following examples or figures, more or fewer operational steps or modular units may be included in the methods or apparatus based on conventional or non-inventive efforts. In the case of steps or structures which do not logically have the necessary cause and effect relationship, the execution sequence of the steps or the module structure of the apparatus is not limited to the execution sequence or the module structure described in the embodiments and shown in the drawings of the present application. When the described method or module structure is applied in an actual device or end product, the method or module structure according to the embodiments or shown in the drawings can be executed sequentially or executed in parallel (for example, in a parallel processor or multi-thread processing environment, or even in a distributed processing environment).
Specifically, as shown in fig. 1, the automated testing method based on a drag-and-drop machine learning platform may include the following steps:
Step 101: generating a test case for a test object according to the request body and response body of a modeling DAG task flow.
The test object may include, but is not limited to, the execution of component tasks of at least one of: data introduction, data processing, data splitting, feature extraction, model training, batch prediction, model release and model inference.
Generating the test case can include the following steps:
Step 1: automatically intercepting the request body and response body of a DAG task flow initiated by a user during modeling on the machine learning platform.
That is, user operations are intercepted automatically, so no manual packet capture and recording is needed.
Step 2: naming the request body and response body by splicing the request URL with a timestamp.
Step 3: filling the URL, request method, request parameters and response body from the request, as variable values used in the test script, into the columns of an EXCEL sheet row by row to serve as a test case.
Step 102: setting the running environment, dependency relationships and serial/parallel mode for the test case.
Specifically, the parameters of the test case can be divided into fixed values and variable values; the variable values are parameterized and acquired in real time through interaction with a database; the switch value for automatic-correction judgment is set in the test case; the value of the core indicator in the response body is taken as the expected value of the case; tasks with dependency relationships in the DAG task flow are set to execute serially, and tasks at the same level in the DAG task flow are set to execute in parallel.
That is, serial and parallel execution are configured, and the automatic-correction judgment switch and the expected case value are set, so that assertion-based adjustment of the expected value can be triggered automatically once a test result is available.
Step 103: executing the test script on a schedule to obtain the execution result of the test case.
Step 104: comparing the execution result of the test case with the expected result.
Step 105: automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic.
In the above example, the automated testing method based on a drag-and-drop machine learning platform generates test cases that conform to script execution by capturing the request body and response body, without manual packet capture and recording; establishes the running environment, dependency relationships and serial/parallel mode of the test cases; triggers the test script on a schedule to execute the test cases; and automatically corrects the expected values of the cases based on the execution results, so that script execution errors are reported and adjusted, improving test accuracy. In this way, the low efficiency and low accuracy of existing automated testing of task flows built on drag-and-drop machine learning platforms are overcome, and efficient, accurate automated testing of such task flows is achieved.
In a specific implementation, executing the test script on a schedule to obtain the execution result of the test case may include: determining whether a preset trigger condition is met; triggering execution of the test script when the preset trigger condition is met; judging whether a dependent script exists and, if so, waiting until the dependent script has finished executing successfully before starting execution; and, during script execution, reading a single case when the test cases execute serially and reading multiple cases in a batch when they execute in parallel, the batch size being determined by a preset degree of parallelism. That is, a trigger condition (for example, whether a scheduled time has arrived) can be set to trigger automatic execution of the test script. When the test script executes, serial or parallel execution can be chosen according to the actual state of the system, and a different number of cases is fetched for each mode, so the test cases are executed in a targeted manner.
During test script execution a termination judgment is needed; otherwise the script would run indefinitely, which is obviously unreasonable. To this end, during script execution the request class in the test code can be called according to the request method of the test object in the test case, and the end of execution of the test object is judged according to the called request class: in the case of synchronous execution, the obtained response body serves as the end identifier; in the case of asynchronous execution, the component execution state is polled through interaction with the database to determine whether the task has finished executing.
When the expected value of the case is corrected automatically according to the comparison result and the automatic-correction judgment logic, the automatic-correction switch value of the test case is first acquired; the expected value of the test case is replaced with the obtained test result when the switch value is on, the task executed successfully, and the execution result is inconsistent with the expected result; and the expected value is left unchanged when the switch value is off or the task execution failed. That is, whether to update the expected value of a test case is determined by the automatic-correction switch value, whether execution succeeded, and whether the result agrees with the expected result.
The above method is described below with reference to a specific example; however, it should be noted that this specific example is intended only to better describe the present application and is not to be construed as limiting it.
In order to solve the difficult element positioning and complicated script development of automated testing for the machine-learning-modeling DAG task-flow mode, this embodiment provides a continuous testing device based on DAG machine learning modeling. Starting from the front-end/back-end interaction interface, it stores the user Cookie via the login transaction (sketched below) and faithfully simulates the user's online process of building and executing a DAG task flow. The device can automatically collect the test data set, automatically execute the DAG task flow, automatically compare test results with expected results, and automatically correct test cases according to the automatic-correction switch. The test scripts are deployed on Jenkins, so platform quality can be continuously maintained and monitored, improving the testing efficiency of drag-and-drop task flows; the device is also suitable for modelers monitoring the effect of model iterations.
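By way of example only, the login transaction that stores the user Cookie can be sketched as follows; the JSON login endpoint and the session Cookie returned via Set-Cookie headers are illustrative assumptions, since the platform's actual login protocol is not specified here.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.Map;

    public class LoginTool {

        // Simulates the login transaction and returns the session Cookie
        // that subsequent test requests will carry.
        public static String login(String loginUrl, String credentialsJson) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(loginUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(credentialsJson.getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode(); // force the request; headers become available afterwards

            // Collect all Set-Cookie headers, keeping only the name=value part of each.
            StringBuilder cookie = new StringBuilder();
            Map<String, List<String>> headers = conn.getHeaderFields();
            for (String setCookie : headers.getOrDefault("Set-Cookie", List.of())) {
                if (cookie.length() > 0) cookie.append("; ");
                cookie.append(setCookie.split(";", 2)[0]);
            }
            return cookie.toString();
        }
    }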
Specifically, this example provides an automated verification method based on DAG machine learning modeling, which includes the following steps:
Step 1: automatically acquire the request body and response body of the modeling DAG task flow and generate the test objects. A test object may be the execution and result checking of component tasks such as data introduction, data processing, data splitting, feature extraction, model training, batch prediction, model release and model inference.
Step 2: refine the test cases of the test objects, parameterize the non-fixed parameter values, simplify the expected values, and configure whether automatic correction of the expected values is enabled.
Step 3: configure the test running environment, the dependency relationships of the test objects in the DAG task-flow test script, and the serial/parallel execution mode.
Step 4: deploy the test scripts on Jenkins with timed trigger execution. Parallel testing of multiple DAG task flows can be supported by configuring testng.xml.
Step 5: the subscriber views the report on the test execution results.
Specifically, the test method may be as shown in fig. 2 and includes:
S1: generating the test objects:
automatically generating basic test cases of the test objects according to the request body and response body of the modeling DAG task flow.
S2: managing the test cases:
refining the basic test cases generated in S1, including: setting the login user, parameterizing variable parameter values, condensing the response body into the expected value, setting the automatic-correction switch value, and so on.
S3: managing the test scripts:
configuring the running environment, the test-script dependency relationships, the test data, and the serial/parallel execution mode of the test scripts.
S4: executing the test scripts, which may include: the test cases driving the test-script execution, and confirming that execution has ended.
S5: comparing execution results:
on the basis of S4, comparing the test case execution results with the expected results.
S6: judging whether to correct automatically:
performing the automatic-correction judgment on the expected case results according to the execution results of S5 combined with the automatic-correction judgment logic.
S7: performing automatic correction:
if an execution result is inconsistent with the expected result and auto-correction is enabled for the case, updating the expected value of the case and re-executing the case until it is consistent with the expected result.
S8: generating a test report:
after execution of the test cases is completed, automatically generating a test report and pushing it to the subscriber.
With respect to the above-mentioned S1 to S8, the following is specifically described:
As shown in fig. 3, the schematic diagram of generating a test object may specifically include:
S101: capturing the DAG task flow interface requests;
After the matching rules for the request body and response body of each task-initiating and result-checking component in the task flow, together with the file name and storage path of the landed file, are set in the parameter file of the packet-capturing tool or script, the request body and response body of a DAG task flow initiated by a user during machine learning platform modeling are captured automatically using the packet-capturing tool/script.
S102: landing the request bodies and response bodies of the task-flow subtask interfaces in sequence;
Each subtask in the task flow intercepted in S101 is named by splicing the request URL with a timestamp, and the request content and response content are written to a local file.
S103: generating the basic test cases of the DAG task flow.
The URL, request method, request parameters and response body from the request are used as variable values in the test script and filled into the columns of an EXCEL sheet row by row, serving as the basic test cases for test case management.
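By way of example only, the following minimal sketch illustrates S102-S103: landing a captured request/response pair under a URL-plus-timestamp name and appending it as one row of a basic test case to an EXCEL sheet. It assumes the Apache POI library; the sheet name, column layout and sample values are illustrative assumptions, not part of the scheme itself.

    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;

    import java.io.FileOutputStream;
    import java.io.IOException;

    public class CaseGenerator {

        // Names a landed file by splicing the request URL with a timestamp (S102).
        public static String landedFileName(String url) {
            String safe = url.replaceAll("[^A-Za-z0-9]", "_"); // make the URL file-system safe
            return safe + "_" + System.currentTimeMillis() + ".json";
        }

        // Appends one captured subtask interface as a basic test case row (S103).
        // Columns (illustrative): URL | request method | request parameters | response body.
        public static void appendCase(Workbook workbook, String url, String method,
                                      String requestParams, String responseBody) {
            Sheet sheet = workbook.getSheet("cases") != null
                    ? workbook.getSheet("cases") : workbook.createSheet("cases");
            Row row = sheet.createRow(sheet.getLastRowNum() + 1);
            row.createCell(0).setCellValue(url);
            row.createCell(1).setCellValue(method);
            row.createCell(2).setCellValue(requestParams);
            row.createCell(3).setCellValue(responseBody);
        }

        public static void main(String[] args) throws IOException {
            try (Workbook wb = new XSSFWorkbook();
                 FileOutputStream out = new FileOutputStream("basic_cases.xlsx")) {
                appendCase(wb, "/dag/task/dataProcess", "POST",
                        "{\"datasetId\":\"${datasetId}\"}", "{\"status\":\"SUCCESS\"}");
                wb.write(out);
            }
        }
    }

In a real run, appendCase would be called once per captured subtask, in the order the subtasks were intercepted, so the row order preserves the task-flow order.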
As shown in fig. 4, the schematic diagram of test case management specifically includes:
S201: parameterizing the test data of the test object;
Based on the generated basic test cases, the parameters are subdivided into fixed values and variable values so that the cases can be executed repeatedly. Parameter values that change on each run are parameterized, and their values are acquired in real time through interaction with the database. The switch value for the automatic-correction judgment is set in the test case to decide whether the case is updated automatically.
S202: simplified or custom assertions;
The assertion takes the core element from the response body landed in S1 as the assertion point, or a custom assertion is defined. For example, in a model evaluation report the value of the core indicator is taken as the expected value: if the actual value is greater than the expected value, the model effect is considered acceptable; if it is less than the expected value, the model effect is considered to have degraded.
S203: optimized combined configuration of case execution.
The subtasks of the DAG task flow are divided into serial and parallel execution: tasks with dependency relationships in the DAG task flow are set to execute serially, and tasks at the same level are set to execute in parallel.
As shown in FIG. 5, an example of the optimized execution combination for the test cases of a machine-learning-modeling DAG task flow: the horizontal direction is parallel execution of components, for example: introduction of data source 1 to data source 4; execution of data processing 1, data processing 2 and data processing 3; execution of the feature extraction and feature extraction (batch) components; checking of the model training-set, validation-set and test-set evaluations. The vertical direction is serial execution of components, for example: data processing 4 must execute after data processing 1-3 have executed successfully; data splitting depends on data processing 4 and executes after it succeeds; model inference depends on the model having been released first; and so on.
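By way of example only, the serial/parallel combination of FIG. 5 can be encoded in a TestNG case class as sketched below: dependsOnMethods expresses the vertical (serial) edges, while same-level methods carry no dependency edge and may run in parallel when the suite is configured accordingly (for example, parallel="methods" in testng.xml). The method names and the reduced flow are illustrative.

    import org.testng.annotations.Test;

    public class DagFlowTest {

        // Same-level components: no dependency edges, so they may run in parallel
        // when testng.xml sets parallel="methods".
        @Test public void dataProcessing1() { /* run the component and verify its result */ }
        @Test public void dataProcessing2() { /* ... */ }
        @Test public void dataProcessing3() { /* ... */ }

        // Vertical (serial) edge: data processing 4 runs only after 1-3 succeed.
        @Test(dependsOnMethods = {"dataProcessing1", "dataProcessing2", "dataProcessing3"})
        public void dataProcessing4() { /* ... */ }

        // Data splitting depends on data processing 4 having succeeded.
        @Test(dependsOnMethods = "dataProcessing4")
        public void dataSplitting() { /* ... */ }

        // Model inference depends on the model having been released first.
        @Test(dependsOnMethods = "dataSplitting")
        public void modelRelease() { /* ... */ }

        @Test(dependsOnMethods = "modelRelease")
        public void modelInference() { /* ... */ }
    }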
As shown in fig. 6, the schematic diagram of test script management specifically includes:
1) Through configuration management, the script running environment is set, facilitating cross-environment testing.
2) Excel data driving separates the test data from the test scripts. Execution of a test script is driven by the test data in the Excel sheet maintained under test case management.
3) The test code in a test script can be divided into: a login tool class, a request tool class, a database interaction class, an asynchronous queue polling class, and a test case class. The login tool class simulates the login transaction to obtain a Cookie, on which the subsequent test cases base their online modeling tests. The request tool class is built on an http connection method; branching mainly on POST, GET, PUT and DELETE according to the request method, it packages and transmits the Cookie stored at login, so that it can simulate a user sending a task-flow request from a browser to the modeling platform server and return the response result. The database interaction class encapsulates the insert, delete, update and query operations on the database tables. The asynchronous queue polling class handles asynchronously executed tasks in the DAG task flow by polling the execution state of components through interaction with the database, so as to obtain the final execution state of the task. The test case class is driven by the test cases under test case management: it calls the corresponding method of the request tool class according to the request method in the case, obtains the returned result, and compares and verifies it against the assertion set under test case management. The dependency relationships and serial/parallel execution mode of each test object are also set in the case class, consistent with the optimized combination under test case management.
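By way of example only, the request tool class described above can be sketched with nothing more than java.net.HttpURLConnection: the Cookie obtained by the login tool class is attached to every request, and the four request methods share one code path. Error handling is reduced for brevity.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class RequestTool {

        // Sends a POST/GET/PUT/DELETE request carrying the login Cookie
        // and returns the response body as a string.
        public static String send(String urlString, String method,
                                  String body, String cookie) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
            conn.setRequestMethod(method);                 // "POST", "GET", "PUT" or "DELETE"
            conn.setRequestProperty("Cookie", cookie);     // simulate the logged-in user
            conn.setRequestProperty("Content-Type", "application/json");
            if (body != null && !"GET".equals(method)) {
                conn.setDoOutput(true);
                try (OutputStream os = conn.getOutputStream()) {
                    os.write(body.getBytes(StandardCharsets.UTF_8));
                }
            }
            try (InputStream is = conn.getInputStream();
                 ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
                byte[] chunk = new byte[4096];
                int n;
                while ((n = is.read(chunk)) > 0) {
                    buf.write(chunk, 0, n);
                }
                return buf.toString(StandardCharsets.UTF_8.name());
            }
        }
    }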
4) The test report is based on the TestNG framework, and the test case classes to be executed are maintained in testng.xml.
As shown in fig. 7, the schematic diagram of test script execution may include:
1) When a test script executes, it first judges whether a dependent script exists; if one exists, execution starts only after the dependent script has finished executing successfully.
2) When the script executes, the number of cases read from test case management is determined by whether the cases execute serially or in parallel: serial mode reads a single case, while parallel mode fetches multiple cases in a batch, the exact number being determined by the concurrency set in the test case class under test script management. The request class in the test code is called according to the request method of the test object in the case, and the end of execution is judged according to whether the test object executes asynchronously or synchronously: in synchronous mode, the obtained response body serves as the end identifier; in asynchronous mode, whether the task has finished is determined by polling the component execution state through interaction with the database.
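For the asynchronous branch, a minimal polling sketch is shown below: it queries a component-state table until the task reaches a terminal state or a poll limit is hit. The table name, column names and state values are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AsyncPoller {

        // Polls the component execution state in the database until the task ends.
        // Returns the terminal state ("SUCCESS"/"FAILED"), or "TIMEOUT" if the limit is hit.
        public static String waitForEnd(Connection conn, String taskId,
                                        long intervalMillis, int maxPolls)
                throws SQLException, InterruptedException {
            String sql = "SELECT exec_state FROM dag_component_state WHERE task_id = ?";
            for (int i = 0; i < maxPolls; i++) {
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, taskId);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            String state = rs.getString(1);
                            if ("SUCCESS".equals(state) || "FAILED".equals(state)) {
                                return state; // the asynchronous task has finished executing
                            }
                        }
                    }
                }
                Thread.sleep(intervalMillis); // wait before the next poll
            }
            return "TIMEOUT";
        }
    }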
As shown in fig. 8, the schematic diagram of execution-result comparison may include: judging whether the task executed successfully and, after successful execution, comparing the case execution result with the expected result maintained in Excel under test case management. When the task fails to execute, an exception is thrown directly.
As shown in fig. 9, the schematic diagram of the automatic-correction judgment may include: according to the automatic-correction switch value of the case obtained from test case management, when the switch is on, the current task executed successfully, and the execution result is inconsistent with the expected result, the latest test result automatically replaces the expected value of the current case from S2. If the current task failed to execute, the case is not updated automatically; nor is it updated when the automatic-correction switch is off.
Automatic-correction execution updates the expected value of the test case to the current execution result and re-executes the case until it passes.
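By way of example only, the judgment of figs. 8 and 9 can be reduced to the following sketch, reusing the illustrative Excel row layout from the earlier case-generation sketch with the expected value in column 3; only the decision and the write-back are shown.

    import org.apache.poi.ss.usermodel.Row;

    public class AutoCorrector {

        // Applies the automatic-correction judgment to one executed case.
        // Returns true when the expected value was replaced and the case should re-run.
        public static boolean correct(Row caseRow, boolean switchOn,
                                      boolean taskSucceeded, String actualResult) {
            // Switch off, or the task failed: the case is never updated automatically.
            if (!switchOn || !taskSucceeded) {
                return false;
            }
            String expected = caseRow.getCell(3).getStringCellValue(); // expected-value column
            if (!expected.equals(actualResult)) {
                // Switch on + success + mismatch: replace the expected value.
                caseRow.getCell(3).setCellValue(actualResult);
                return true; // the caller re-executes the case until it passes
            }
            return false;    // result already consistent with the expectation
        }
    }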
With this method, test cases can be collected automatically: by setting matching rules for the request body and response body of each task-initiating and result-checking component in the task flow, invalid interfaces are filtered out effectively, the request and response bodies are captured, and test cases that conform to script execution are generated automatically, without manual single-point packet capture and recording. The test scripts have a high reuse rate: they encapsulate the four common HTTP request methods and cover DAG task flows combining data processing, feature extraction, model training, batch prediction, model evaluation, model upload, model inference and so on, and are thus suitable for testing modeling with all algorithms the platform covers. The execution order of the test scripts is consistent with the dependencies of the DAG task flow; unlike traditional single-interface testing, the method in this embodiment truly verifies multi-component combinations from the user's modeling perspective. Furthermore, parallel verification of multiple algorithms or multiple models can be achieved: different DAG task flows and parallel execution strategies can be configured, shortening test time and improving test efficiency. On a single model-evaluation case, an expected model evaluation indicator can serve as the assertion threshold; the effect of the automatically trained model is compared with this threshold, and if it falls below the threshold the script reports an error, so that after the mail is pushed a degradation in model effect is known in time.
The method embodiments provided in the above embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking execution on an electronic device as an example, fig. 10 is a block diagram of the hardware structure of an electronic device for the automated testing method based on a drag-and-drop machine learning platform provided by the present application. As shown in fig. 10, the electronic device 10 may comprise one or more processors 02 (only one is shown in the figure; the processor 02 may include, but is not limited to, a processing means such as a microprocessor MCU or a programmable logic device FPGA), a memory 04 for storing data, and a transmission module 06 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 10 is merely illustrative and does not limit the structure of the electronic device. For example, the electronic device 10 may also include more or fewer components than shown in fig. 10, or have a different configuration than shown in fig. 10.
The memory 04 may be configured to store software programs and modules of application software, such as the program instructions/modules corresponding to the automated testing method based on a drag-and-drop machine learning platform in the embodiments of the present application; by running the software programs and modules stored in the memory 04, the processor 02 executes various functional applications and data processing, that is, implements the automated testing method based on a drag-and-drop machine learning platform of the application program. The memory 04 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 04 may further include memory located remotely from the processor 02, which may be connected to the electronic device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 06 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the electronic device 10. In one example, the transmission module 06 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission module 06 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In terms of software, the automated testing apparatus based on a drag-and-drop machine learning platform may be as shown in fig. 11 and includes:
a generating module 1101, configured to generate a test case for a test object according to the request body and response body of a modeling DAG task flow;
a setting module 1102, configured to set the running environment, dependency relationships and serial/parallel mode for the test case;
an execution module 1103, configured to execute the test script on a schedule to obtain the execution result of the test case;
a comparison module 1104, configured to compare the execution result of the test case with the expected result;
and a correction module 1105, configured to automatically correct the expected value of the case according to the comparison result and the automatic-correction judgment logic.
In one embodiment, the test object may include, but is not limited to, the execution of component tasks of at least one of: data introduction, data processing, data splitting, feature extraction, model training, batch prediction, model release and model inference.
In an embodiment, the generating module 1101 may be specifically configured to automatically intercept the request body and response body of a DAG task flow initiated by a user during machine learning platform modeling; name the request body and response body by splicing the request URL with a timestamp; and fill the URL, request method, request parameters and response body from the request, as variable values used in the test script, into the columns of an EXCEL sheet row by row to serve as a test case.
In one embodiment, the setting module 1102 may be specifically configured to divide the parameters of the test case into fixed values and variable values; parameterize the variable values, the parameterized variable values being acquired in real time through interaction with a database; set the switch value for automatic-correction judgment in the test case; take the value of the core indicator in the response body as the expected value of the case; and set tasks with dependency relationships in the DAG task flow to execute serially and tasks at the same level in the DAG task flow to execute in parallel.
In an embodiment, the execution module 1103 may be specifically configured to determine whether a preset trigger condition is met; trigger execution of the test script when the preset trigger condition is met; judge whether a dependent script exists and, if so, wait until the dependent script has finished executing successfully before starting execution; and, during script execution, read a single case when the test cases execute serially and read multiple cases in a batch when they execute in parallel, the batch size being determined by a preset degree of parallelism.
In one embodiment, during script execution, the request class in the test code can be called according to the request method of the test object in the test case; the end of execution of the test object is judged according to the called request class: in the case of synchronous execution, the obtained response body serves as the end identifier; in the case of asynchronous execution, the component execution state is polled through interaction with the database to determine whether the task has finished executing.
In one embodiment, the correction module 1105 may be specifically configured to acquire the automatic-correction switch value of the test case; replace the expected value of the test case with the obtained test result when the automatic-correction switch value is on, the task executed successfully, and the execution result is inconsistent with the expected result; and leave the expected value of the test case unchanged when the automatic-correction switch value is off or the task execution failed.
An embodiment of the present application further provides a specific implementation of an electronic device capable of implementing all the steps of the automated testing method based on a drag-and-drop machine learning platform in the above embodiments. The electronic device specifically includes: a processor, a memory, a communication interface (Communications Interface), and a bus; the processor, the memory and the communication interface communicate with one another through the bus; the processor is configured to call a computer program in the memory, and when executing the computer program the processor implements all the steps of the automated testing method based on a drag-and-drop machine learning platform in the above embodiments, for example the following steps:
Step 1: generating a test case for a test object according to the request body and response body of a modeling DAG task flow;
Step 2: setting the running environment, dependency relationships and serial/parallel mode for the test case;
Step 3: executing the test script on a schedule to obtain the execution result of the test case;
Step 4: comparing the execution result of the test case with the expected result;
Step 5: automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic.
As can be seen from the above description, in the embodiments of the present application, test cases that conform to script execution are generated by capturing the request body and response body, without manual packet capture and recording; the running environment, dependency relationships and serial/parallel mode of the test cases are established; execution of the test cases is triggered on a schedule; and the expected values of the cases are corrected automatically based on the execution results, so that script execution errors are reported and adjusted, improving test accuracy. In this way, the low efficiency and low accuracy of existing automated testing of task flows built on drag-and-drop machine learning platforms are overcome, and efficient, accurate automated testing of such task flows is achieved.
An embodiment of the present application further provides a computer-readable storage medium capable of implementing all the steps of the automated testing method based on a drag-and-drop machine learning platform in the above embodiments. The computer-readable storage medium stores a computer program which, when executed by a processor, implements all the steps of that method, for example the following steps:
Step 1: generating a test case for a test object according to the request body and response body of a modeling DAG task flow;
Step 2: setting the running environment, dependency relationships and serial/parallel mode for the test case;
Step 3: executing the test script on a schedule to obtain the execution result of the test case;
Step 4: comparing the execution result of the test case with the expected result;
Step 5: automatically correcting the expected value of the case according to the comparison result and the automatic-correction judgment logic.
As can be seen from the above description, in the embodiments of the present application, test cases that conform to script execution are generated by capturing the request body and response body, without manual packet capture and recording; the running environment, dependency relationships and serial/parallel mode of the test cases are established; execution of the test cases is triggered on a schedule; and the expected values of the cases are corrected automatically based on the execution results, so that script execution errors are reported and adjusted, improving test accuracy. In this way, the low efficiency and low accuracy of existing automated testing of task flows built on drag-and-drop machine learning platforms are overcome, and efficient, accurate automated testing of such task flows is achieved.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Although the present application provides method steps as described in an embodiment or flowchart, additional or fewer steps may be included based on conventional or non-inventive efforts. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or client product executes, it may execute sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or methods shown in the figures.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although embodiments of the present description provide method steps as described in embodiments or flowcharts, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or end product executes, it may execute sequentially or in parallel (e.g., parallel processors or multi-threaded environments, or even distributed data processing environments) according to the method shown in the embodiment or the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the embodiments of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, and the like. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The embodiments of this specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The described embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described only briefly because it is substantially similar to the method embodiment; for relevant points, reference may be made to the corresponding description of the method embodiment. In this description, reference to "one embodiment," "some embodiments," "an example," "a specific example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this specification. The schematic uses of these terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine different embodiments or examples, and features thereof, described in this specification without contradiction.
The above description is only an example of the embodiments of the present disclosure, and is not intended to limit the embodiments of the present disclosure. Various modifications and variations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (10)

1. An automated testing method based on a drag-and-drop machine learning platform is characterized by comprising the following steps:
generating a test case of a test object according to a request body and a response body of a modeling DAG task flow;
setting an operation environment, a dependency relationship and a serial-parallel mode for the test case;
periodically executing the test script to obtain an execution result of the test case;
comparing the execution result of the test case with an expected result;
and automatically correcting the expected value of the case according to the comparison result and the automatic correction judgment logic.
2. The method of claim 1, wherein the test object includes the execution of at least one of the following component tasks: data import, data processing, data splitting, feature extraction, model training, batch prediction, model release, and model inference.
3. The method of claim 1, wherein generating the test case of the test object according to the request body and the response body of the modeling DAG task flow comprises:
automatically intercepting a request body and a response body of a DAG task flow initiated by a user during modeling on the machine learning platform;
naming the request body and the response body by splicing the request URL with a timestamp;
and filling the URL, the request method, and the request parameters from the request body, together with the response body, as variable values used in the test script, into successive columns of an EXCEL file row by row to form a test case.
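As a non-limiting illustration of this case-generation step, the following Python sketch writes one captured request/response pair into an EXCEL row; the hard-coded capture, sheet layout, and helper names (name_capture, write_case_row) are assumptions for illustration, not part of the claimed platform.

import time
from openpyxl import Workbook  # third-party EXCEL library, used here for illustration

def name_capture(request_url: str) -> str:
    # Name the captured request/response pair by splicing the request URL with a timestamp.
    return request_url.strip("/").replace("/", "_") + "_" + str(int(time.time() * 1000))

def write_case_row(sheet, request: dict, response: dict) -> None:
    # Fill the URL, request method, request parameters and response body
    # into successive columns as one test-case row.
    sheet.append([
        request["url"],
        request["method"],
        str(request["params"]),
        str(response["body"]),  # later used as the source of the case expected value
    ])

wb = Workbook()
ws = wb.active
ws.append(["url", "method", "params", "response_body"])  # header row
# One hard-coded capture standing in for real interception of the DAG task flow:
captured = [({"url": "/api/dag/run", "method": "POST", "params": {"flow": "demo"}},
             {"body": {"auc": 0.91}})]
for req, resp in captured:
    write_case_row(ws, req, resp)
wb.save(name_capture("api/dag/run") + ".xlsx")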
4. The method of claim 1, wherein setting a runtime environment, dependencies, and a serial-parallel approach for the test case comprises:
dividing the parameters of the test case into fixed values and variable values;
parameterizing the variable values, wherein the parameterized variable values are acquired in real time through interaction with a database;
setting a switch value for automatic correction judgment in the test case;
taking the value of the core indicator in the response body as the case expected value;
and setting the tasks with the dependency relationship in the DAG task flow as serial execution, and setting the tasks at the same level in the DAG task flow as parallel execution.
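To make the serial/parallel rule concrete, the sketch below groups DAG tasks into levels: tasks within one level have no mutual dependencies and may run in parallel, while successive levels run serially. The dictionary-based graph encoding and the task names are illustrative assumptions.

from collections import defaultdict, deque

def level_schedule(deps):
    # deps maps each task to the tasks it depends on; the result is a list of
    # levels: levels run serially, tasks within one level run in parallel.
    indegree = {task: len(parents) for task, parents in deps.items()}
    children = defaultdict(list)
    for task, parents in deps.items():
        for parent in parents:
            children[parent].append(task)
    ready = deque(task for task, d in indegree.items() if d == 0)
    levels = []
    while ready:
        level = list(ready)
        ready.clear()
        levels.append(level)
        for task in level:
            for child in children[task]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
    return levels

# Example with hypothetical component tasks: import -> split -> {train, predict}
deps = {"data_import": [], "data_split": ["data_import"],
        "model_train": ["data_split"], "batch_predict": ["data_split"]}
print(level_schedule(deps))
# [['data_import'], ['data_split'], ['model_train', 'batch_predict']]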
5. The method of claim 1, wherein the periodically executing the test script to obtain the execution result of the test case comprises:
determining whether a preset trigger condition is met;
under the condition that the preset trigger condition is determined to be met, triggering to execute the test script;
judging whether a dependent script exists, and when a dependent script exists, waiting until the dependent script has finished executing successfully before starting execution;
in the script execution process, reading a single case when the test cases are executed in series, and reading a plurality of cases in batch when the test cases are executed in parallel, wherein the number of cases read in parallel is determined by a preset parallelism count.
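A minimal sketch of this timed execution loop follows; the poll intervals, batch size, and callback names (trigger_due, dependency_done, run_case) are assumptions standing in for the platform's scheduler and case reader.

import time
from concurrent.futures import ThreadPoolExecutor

PARALLEL_BATCH = 4  # preset number of cases read and executed in parallel

def run_script(cases, trigger_due, dependency_done, run_case, parallel=False):
    while not trigger_due():        # wait for the preset trigger condition
        time.sleep(30)
    while not dependency_done():    # wait for any dependent script to finish successfully
        time.sleep(10)
    if not parallel:
        for case in cases:          # serial mode: read and run one case at a time
            run_case(case)
    else:                           # parallel mode: read and run cases in batches
        with ThreadPoolExecutor(max_workers=PARALLEL_BATCH) as pool:
            for i in range(0, len(cases), PARALLEL_BATCH):
                list(pool.map(run_case, cases[i:i + PARALLEL_BATCH]))

# e.g. run_script(cases, lambda: True, lambda: True, print, parallel=True)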
6. The method of claim 5, further comprising, during the script execution:
calling a request class in the test code according to the request method of the test object in the test case;
judging, according to the called request class, whether execution of the test object has ended;
under the condition of synchronous execution, taking the obtained response body as an end identifier;
in the case of asynchronous execution, the component execution state is polled by interacting with the database to determine if the task has finished executing.
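The sketch below illustrates this dispatch-and-wait logic: the request class is picked from the request method recorded in the case, a synchronous call ends with the response body, and an asynchronous call polls a component-state record. The state names and the query_state helper are hypothetical placeholders for the platform's database access.

import time
import requests  # third-party HTTP library, used here for illustration

def call_request_class(case: dict):
    # Pick the request class from the request method recorded in the case.
    send = {"GET": requests.get, "POST": requests.post}[case["method"].upper()]
    return send(case["url"], json=case.get("params"))

def wait_until_finished(case: dict, db, timeout: int = 600):
    response = call_request_class(case)
    if not case.get("async"):
        return response.json()           # synchronous: the response body marks the end
    deadline = time.time() + timeout
    while time.time() < deadline:        # asynchronous: poll the component execution state
        state = db.query_state(case["task_id"])  # hypothetical database helper
        if state in ("SUCCESS", "FAILED"):
            return state
        time.sleep(5)
    raise TimeoutError("task %s did not finish in time" % case["task_id"])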
7. The method of claim 1, wherein automatically correcting the case expected value according to the comparison result and the automatic correction judgment logic comprises:
acquiring the automatic correction switch value of the test case;
when the automatic correction switch value is on, the task executed successfully, and the execution result is inconsistent with the expected result, replacing the expected value of the test case with the obtained test result;
and when the automatic correction switch value is off, or the task execution failed, leaving the expected value of the test case unchanged.
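This correction rule reduces to a three-way condition, sketched below; the field names are assumptions for illustration.

def auto_correct(case: dict, executed_ok: bool, actual):
    # Adopt the actual result as the new expected value only when the switch
    # is on, the task succeeded, and the result differs from the expectation.
    if case["auto_correct"] and executed_ok and actual != case["expected"]:
        case["expected"] = actual
    # Switch off or task failed: the stored expected value stays unchanged.
    return case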
8. An automated testing device based on a drag-and-drop machine learning platform, characterized by comprising:
the generating module is used for generating a test case of a test object according to a request body and a response body of a modeling DAG task flow;
the setting module is used for setting an operation environment, a dependency relationship and a serial-parallel mode for the test case;
the execution module is used for periodically executing the test script to obtain an execution result of the test case;
the comparison module is used for comparing the execution result of the test case with an expected result;
and the correcting module is used for automatically correcting the expected value of the case according to the comparison result and the automatic correction judging logic.
9. An electronic device comprising a processor and a memory for storing processor-executable instructions, wherein the instructions, when executed by the processor, implement the steps of the following method:
generating a test case of a test object according to a request body and a response body of a modeling DAG task flow;
setting an operation environment, a dependency relationship and a serial-parallel mode for the test case;
periodically executing the test script to obtain an execution result of the test case;
comparing the execution result of the test case with an expected result;
and automatically correcting the expected value of the case according to the comparison result and the automatic correction judgment logic.
10. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of a method comprising:
generating a test case of a test object according to a request body and a response body of a modeling DAG task flow;
setting an operation environment, a dependency relationship and a serial-parallel mode for the test case;
periodically executing the test script to obtain an execution result of the test case;
comparing the execution result of the test case with an expected result;
and automatically correcting the expected value of the case according to the comparison result and the automatic correction judgment logic.
CN202110799046.4A 2021-07-15 2021-07-15 Automatic testing method and device based on dragging type machine learning platform Pending CN113360408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110799046.4A CN113360408A (en) 2021-07-15 2021-07-15 Automatic testing method and device based on dragging type machine learning platform

Publications (1)

Publication Number Publication Date
CN113360408A 2021-09-07

Family

ID=77539533

Country Status (1)

Country Link
CN (1) CN113360408A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391379A (en) * 2017-07-28 2017-11-24 武汉斗鱼网络科技有限公司 Interface automatic test approach and device
CN111694749A (en) * 2020-06-24 2020-09-22 深圳壹账通智能科技有限公司 Automatic interface testing method and device, computer equipment and readable storage medium
CN111949520A (en) * 2020-07-31 2020-11-17 上海中通吉网络技术有限公司 Automatic interface test method and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578498A (en) * 2023-07-12 2023-08-11 西南交通大学 Automatic generation method and system for unit test cases
CN116578498B (en) * 2023-07-12 2023-09-29 西南交通大学 Automatic generation method and system for unit test cases

Similar Documents

Publication Publication Date Title
CN107077385B (en) For reducing system, method and the storage medium of calculated examples starting time
CN112036577B (en) Method and device for applying machine learning based on data form and electronic equipment
CN111737073B (en) Automatic testing method, device, equipment and medium
CN110147327B (en) Multi-granularity-based web automatic test management method
CN108897587B (en) Pluggable machine learning algorithm operation method and device and readable storage medium
US20220284286A1 (en) Method and apparatus for providing recommendations for completion of an engineering project
CN110046088A (en) A kind of interface test method, device and equipment
CN110046100B (en) Packet testing method, electronic device and medium
CN112149838A (en) Method, device, electronic equipment and storage medium for realizing automatic model building
CN108228879A (en) A kind of data-updating method, storage medium and smart machine
CN115525724A (en) Modeling method and system applied to data warehouse and electronic equipment
CN113360408A (en) Automatic testing method and device based on dragging type machine learning platform
CN115391219A (en) Test case generation method and device, electronic equipment and storage medium
CN111159897A (en) Target optimization method and device based on system modeling application
US20240095529A1 (en) Neural Network Optimization Method and Apparatus
CN115129574A (en) Code testing method and device
US6466925B1 (en) Method and means for simulation of communication systems
CN110895460A (en) Jenkins-based robot system integration method and device and terminal equipment
CN111831380A (en) Task execution method and device, storage medium and electronic device
CN111767316A (en) Target task processing method and device and electronic equipment
CN110610060A (en) Product modeling method of open expandable information, computer device and computer readable storage medium
CN109271269A (en) A kind of processing method, device and equipment that application sudden strain of a muscle is moved back
CN117632905B (en) Database management method and system based on cloud use records
CN112861951B (en) Image neural network parameter determining method and electronic equipment
CN115941834B (en) Automatic operation method, device, equipment and storage medium of smart phone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination