CN113868035B - Automatic LTP performance test method and device - Google Patents


Info

Publication number
CN113868035B
CN113868035B · Application CN202110996846.5A
Authority
CN
China
Prior art keywords
test
items
module
item
test item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110996846.5A
Other languages
Chinese (zh)
Other versions
CN113868035A (en)
Inventor
窦志冲
董世江
刘波
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202110996846.5A priority Critical patent/CN113868035B/en
Publication of CN113868035A publication Critical patent/CN113868035A/en
Application granted
Publication of CN113868035B publication Critical patent/CN113868035B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/22: Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2273: Test methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/22: Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2205: Detection or location of defective computer hardware by testing during standby operation or during idle time, using arrangements specific to the hardware being tested
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/30: Monitoring
    • G06F11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466: Performance evaluation by tracing or monitoring
    • G06F11/3476: Data logging
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides an automatic LTP performance test method and device. The method comprises the following steps: acquiring all test items in the LTP test suite; splitting the test items into individual items and constructing a total test item list from the split items; when the number of test items in the total test item list is not zero, classifying the test items and, under the control of the classification, executing them item by item on the corresponding machines; summarizing the test results, classifying the test items according to their result states to generate corresponding state lists, and deleting the corresponding test items from the total test item list according to the generated state lists; and when the test is completed, outputting the generated state lists. After development of the operating system is completed, the developer no longer needs to test each category manually; the developer only has to handle the few test items that repeatedly fail before completing release of the version, which saves the developer's time and greatly improves working efficiency.

Description

Automatic LTP performance test method and device
Technical Field
The invention relates to the technical field of automated computer stress testing, and in particular to an automatic LTP performance test method and device.
Background
In recent years, with changes in the international information security landscape, information security has become a key concern, and the operating system is an important link in information security. From initial development to the final stable version, an operating system goes through a continuous iterative process: an LTP performance test must be performed before each version is released, and only an operating system that passes the LTP performance test can be considered stable and reliable. The general process from development to release of an operating system version is as follows: after integrating the operating system version, the developer first performs a self-test to ensure system stability, then hands the system to testers for testing, and finally releases the operating system once no problems remain.
The LTP suite is a set of system test suites developed by the Linux Test Project. It builds combinations of tests based on statistics of system resource utilization in order to put sufficient pressure on the system, and the stability and reliability of the system are judged through this stress testing. A stress test is a destructive test, i.e., it runs the system under abnormal, overloaded conditions. Evaluating how the system operates beyond its maximum load tests the system's ability to withstand a given load intensity under normal conditions.
After a developer finishes developing an operating system, the LTP test suite is first used to self-test system performance. The LTP test suite divides test items into more than ten categories, and the developer must test each category separately; a single category contains tens to hundreds of items. After a category has been tested, the developer has to read the test results, extract the failed test items, analyze the causes, and then test those items individually until they pass. This test mode consumes a great deal of the developer's time and effort and seriously affects working efficiency.
Disclosure of Invention
Aiming at the problems that the existing test mode consumes a great deal of developers' time and energy and seriously affects their working efficiency, the invention provides an automatic LTP performance test method and device.
The technical scheme of the invention is as follows:
in one aspect, the present invention provides an automated LTP performance testing method, including the following steps:
acquiring all test items in the LTP test suite;
splitting the test items into individual items, and constructing a total test item list by the split individual items;
when the number of test items in the total test item list is not zero, classifying the test items and, under the control of the classification, executing them item by item on the corresponding machines;
summarizing the test results, classifying the test items according to the test result states to generate corresponding state lists, and deleting the corresponding test items in the total test item list according to the generated state lists;
and when the test is completed, outputting the generated state list.
After development of the operating system is completed, the LTP performance test is carried out by this method. The developer no longer needs to test each category manually and only has to handle the few test items that repeatedly fail before completing release of the version, which saves the developer's time and greatly improves working efficiency.
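The overall method can be sketched in code. This is a hypothetical illustration of the loop described in the steps above, not an implementation from the patent; the callables `get_all_test_items` and `run_item` are placeholders for acquiring the split LTP items and executing one of them.

```python
# Hypothetical sketch of the automated LTP test loop described above.
# `get_all_test_items` and `run_item` are illustrative stand-ins, not
# part of the actual LTP suite.

def run_ltp_auto_test(get_all_test_items, run_item):
    total = list(get_all_test_items())   # total test item list of split items
    passed, failed = [], []              # state lists generated from results
    while total:                         # loop while the list is not empty
        for item in list(total):
            ok = run_item(item)          # execute the item on its machine
            (passed if ok else failed).append(item)
            total.remove(item)           # delete the item from the total list
    return passed, failed                # output the generated state lists
```

In practice a failed item would only move to the failed list after the retry threshold described later; this sketch collapses that into a single pass/fail outcome per item.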
Preferably, before the step of acquiring all test items in the LTP test suite, the method further comprises:
and (5) building a test system and configuring an LTP test environment.
Preferably, the step of classifying the test items and executing the test item by item on the corresponding machine according to the classification control includes:
dividing the test items into single-machine test items and dual-machine test items according to their attributes, and constructing a single-machine test item list and a dual-machine test item list in a database;
and controlling the test to proceed item by item on different machines according to the test item category and the corresponding test item list.
Some test items can be tested on a single machine; these are called single-machine test items. Others require two machines interacting with each other to complete the test; these are called dual-machine test items.
Preferably, the step of controlling the test to proceed item by item on different machines according to the corresponding test item lists comprises:
step a: selecting test items in the test item list for testing;
step b: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step c;
step c: re-testing the test items which fail to be tested;
step d: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step e;
step e: counting the test times of the test item;
step f: judging whether the test times reach a set threshold value, if so, failing to test the test item, and executing the step a to select the next test item; if not, executing the step c.
In order to improve test efficiency, a maximum number of test attempts is set for test items that fail; once an item has failed the set number of times, it is output, so that the tester only needs to further process the items that ultimately failed, saving test time.
Preferably, the step c comprises the following steps:
step c1: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step c2;
step c2: extracting keywords of the error report log;
step c3: searching whether related configuration exists in a knowledge base according to the keywords; if yes, executing the step c4; if not, executing the step c5;
step c4: acquiring relevant configuration and configuring a test environment;
step c5: retesting the test item; executing the step d;
in step f, it is judged whether the number of tests reaches the set threshold; if so, the test item fails and step a is executed to select the next test item; if not, step c5 is executed.
In order to eliminate failures caused by errors in the environment configuration process, a failed test item is tested again after the environment has been reconfigured with the re-acquired relevant configuration items. In this way configuration problems are ruled out automatically during the test, further saving test time for the testers.
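Steps c2–c5 can be sketched as a keyword lookup against the knowledge base. This is a hypothetical illustration under the assumption that the knowledge base maps error-log keywords to configuration actions; all names are illustrative.

```python
# Hypothetical sketch of steps c2-c5: mine the error log for keywords,
# look them up in the knowledge base, reconfigure the environment on a
# match, then retest. All function and parameter names are illustrative.

def retest_with_knowledge_base(item, run_item, read_error_log,
                               knowledge_base, apply_config):
    log = read_error_log(item)                      # step c2: get log text
    for keyword, config in knowledge_base.items():  # step c3: search KB
        if keyword in log:
            apply_config(config)                    # step c4: reconfigure
            break
    return run_item(item)                           # step c5: retest
```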
Preferably, after the step of splitting the test items into individual items and constructing a total test item list from the split items, the method further comprises:
storing the test items according to different categories respectively;
storing the error report log of the history test and a knowledge base; after each round of testing is completed, the knowledge base is updated.
Test experience shows that most test item failures are caused by problems with the environment configuration, so a knowledge base indexed by keywords is built from previous test records and used to configure the environment for these failed test items.
Preferably, the step of summarizing the test results, classifying the test items according to the status of the test results to generate a corresponding status list, and deleting the corresponding test items in the total test item list according to the generated status list includes:
summarizing the test results, forming a passed test item list from the test items that passed, and removing those items from the total test item list according to the passed test item list;
and forming a failed test item list from the test items that failed, and removing those items from the total test item list according to the failed test item list.
The test items in the single-machine test item list and the dual-machine test item list together constitute the test items in the total test item list. Cyclic testing is carried out according to the single-machine and dual-machine test item lists respectively until the number of test items in the total test item list is 0; the test then stops, and the passed test item list and the failed test item list are output.
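The summarization step above can be sketched as a pure list-pruning function. This is a hypothetical illustration: `results` is assumed to map each decided item to `True` (passed) or `False` (permanently failed), while undecided items stay in the total list for the next round.

```python
# Hypothetical sketch of the summarization step: split one round's results
# into pass/fail state lists and prune the total test item list.
# `results` maps item -> True (passed) or False (permanently failed);
# items absent from `results` are still pending.

def summarize_round(total, results):
    passed = [i for i in total if results.get(i) is True]
    failed = [i for i in total if results.get(i) is False]
    remaining = [i for i in total if i not in results]  # retried next round
    return passed, failed, remaining
```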
On the other hand, the technical scheme of the invention provides an automatic LTP performance testing device, which comprises a test item acquisition module, a test item processing module, a control execution module, a test result processing module and an output module;
the test item acquisition module is used for acquiring all the test items in the LTP test suite;
the test item processing module is used for splitting the test item into independent items and constructing a total test item list by the split independent items;
the control execution module is used for classifying the test items and controlling the corresponding machine to execute the test item by item according to the classification when the number of the test items in the total test item list is not zero;
the test result processing module is used for summarizing test results, classifying the test items according to the test result states to generate corresponding state lists, and deleting the corresponding test items in the total test item list according to the generated state lists;
and the output module is used for stopping the test when the number of the test items in the total test item list is zero and outputting the generated state list.
Preferably, the device comprises an environment configuration module for building a test system and configuring an LTP test environment.
Preferably, the control execution module comprises a classification processing unit and a control execution unit;
the classification processing unit is used for dividing the test items into single-machine test items and dual-machine test items according to their attributes, and constructing a single-machine test item list and a dual-machine test item list in the database;
and the control execution unit is used for controlling the test to proceed item by item on different machines according to the test item category and the corresponding test item list.
Preferably, the control execution unit comprises a test control sub-module, a test result judging sub-module, a test frequency counting sub-module and a test threshold judging sub-module;
the test control sub-module is used for selecting a test item from the test item list for testing and, when a test fails, controlling retesting of the failed test item;
the test result judging sub-module is used for judging whether the test passes or not;
the test frequency counting sub-module is used for counting the test frequency of the test item;
and the test threshold judging sub-module is used for judging whether the test times reach the set threshold.
Preferably, the control execution unit further comprises a keyword extraction sub-module, a search sub-module and a configuration information acquisition sub-module;
the keyword extraction sub-module is used for extracting keywords of the error report log;
the searching sub-module is used for searching relevant configuration in the knowledge base according to the keywords;
the configuration information acquisition sub-module is used for acquiring the searched relevant configuration and outputting the searched relevant configuration to the environment configuration module when the search module searches the relevant configuration;
and the environment configuration module is used for reconfiguring the test environment according to the received relevant configuration.
Preferably, the apparatus further comprises a database storage module and an update module;
the database storage module is used for respectively storing the test items according to different categories; storing the error report log of the history test and a knowledge base;
and the updating module is used for updating the knowledge base after each round of test is completed.
Preferably, the test result processing module is specifically configured to summarize test results, form a passing test item list of test passing test items, and reject test passing test items in the total test item list according to the passing test item list; and forming a failed test item list from the test items which are failed to be tested, and rejecting the test items which are failed to be tested in the total test item list according to the failed test item list.
From the above technical scheme, the invention has the following advantages: after completing the development task for an operating system version, the developer does not need to test manually according to the test item classification of the LTP test suite, but directly submits a test command, and the test device completes the test automatically, which greatly saves the developer's time and improves working efficiency.
In addition, the invention has a reliable design principle, a simple structure, and very broad application prospects.
It can be seen that, compared with the prior art, the present invention has outstanding substantive features and represents a significant advance, and its practical benefits are also evident.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a method of one embodiment of the invention.
FIG. 2 is a schematic flow chart of a test procedure item by item in a method of one embodiment of the invention.
Fig. 3 is a schematic flow chart of a method of another embodiment of the invention.
Fig. 4 is a schematic block diagram of an apparatus of one embodiment of the invention.
Fig. 5 is a schematic block diagram of an apparatus of another embodiment of the present invention.
In the figures: 11, environment configuration module; 22, test item acquisition module; 33, test item processing module; 44, control execution module; 55, test result processing module; 66, output module; 77, database storage module; 88, update module; 441, classification processing unit; 442, control execution unit; 401, test control sub-module; 402, test result judgment sub-module; 403, test number statistics sub-module; 404, test threshold judgment sub-module; 405, keyword extraction sub-module; 406, search sub-module; 407, configuration information acquisition sub-module.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an automatic LTP performance testing method, including the following steps:
step 11: acquiring all test items in the LTP test suite;
step 12: splitting the test items into individual items, and constructing a total test item list by the split individual items;
step 13: when the number of test items in the total test item list is not zero, classifying the test items and, under the control of the classification, executing them item by item on the corresponding machines;
step 14: summarizing the test results, classifying the test items according to the test result states to generate corresponding state lists, and deleting the corresponding test items in the total test item list according to the generated state lists;
step 15: and when the test is completed, outputting the generated state list.
After development of the operating system is completed, the LTP performance test is carried out by this method. The developer no longer needs to test each category manually and only has to handle the few test items that repeatedly fail before completing release of the version, which saves the developer's time and greatly improves working efficiency.
It should be noted that, before performing the step of acquiring all test items in the LTP test suite described in step 11, the method further comprises:
step 10: and (5) building a test system and configuring an LTP test environment. For example, a preparation machine for testing stand-alone test items, for testing duplex test items, respectively, and configuring a basic test environment by an installation package;
In some embodiments, in step 13, the step of classifying the test items and executing them item by item on the corresponding machines under the control of the classification comprises:
step 121: dividing the test items into single-machine test items and dual-machine test items according to their attributes, and constructing a single-machine test item list and a dual-machine test item list in a database;
step 122: controlling the test to proceed item by item on different machines according to the test item category and the corresponding test item list.
Some test items can be tested on a single machine; these are called single-machine test items. Others require two machines interacting with each other to complete the test; these are called dual-machine test items.
As shown in fig. 2, it should be further noted that, in step 122, the step of controlling the test to proceed item by item on different machines according to the corresponding test item lists comprises:
step a: selecting test items in the test item list for testing;
step b: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step c;
step c: re-testing the test items which fail to be tested;
step d: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step e;
step e: counting the test times of the test item;
step f: judging whether the test times reach a set threshold value, if so, failing to test the test item, and executing the step a to select the next test item; if not, executing the step c.
In order to improve test efficiency, a maximum number of test attempts is set for test items that fail; once an item has failed the set number of times, it is output, so that the tester only needs to further process the items that ultimately failed, saving test time.
In some embodiments, the step c comprises:
step c1: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step c2;
step c2: extracting keywords of the error report log;
step c3: searching whether related configuration exists in a knowledge base according to the keywords; if yes, executing the step c4; if not, executing the step c5;
step c4: acquiring relevant configuration and configuring a test environment;
step c5: retesting the test item; executing the step d;
in step f, it is judged whether the number of tests reaches the set threshold; if so, the test item fails and step a is executed to select the next test item; if not, step c5 is executed.
In order to eliminate failures caused by errors in the environment configuration process, a failed test item is tested again after the environment has been reconfigured with the re-acquired relevant configuration items. In this way configuration problems are ruled out automatically during the test, further saving test time for the testers.
It should be noted that, after the step of splitting the test item into separate items and constructing the total test item list from the split separate items, the method further includes:
storing the test items according to different categories respectively;
storing the error report log of the history test and a knowledge base; after each round of testing is completed, the knowledge base is updated.
Test experience shows that most test item failures are caused by problems with the environment configuration, so a knowledge base indexed by keywords is built from previous test records and used to configure the environment for these failed test items.
In some embodiments, in step 14, summarizing the test results, classifying the test items according to the status of the test results to generate a corresponding status list, and deleting the corresponding test items in the total test item list according to the generated status list includes:
step 141: summarizing the test results, forming a passed test item list from the test items that passed, and removing those items from the total test item list according to the passed test item list;
step 142: forming a failed test item list from the test items that failed, and removing those items from the total test item list according to the failed test item list.
The test items in the single-machine test item list and the dual-machine test item list together constitute the test items in the total test item list. Cyclic testing is carried out according to the single-machine and dual-machine test item lists respectively until the number of test items in the total test item list is 0; the test then stops, and the passed test item list and the failed test item list are output.
After the developer completes development of the operating system, the LTP test environment is configured; the test items are divided into single-machine tests and dual-machine tests; different tables are constructed in a database to store the total test item list, the single-machine test item list, the dual-machine test item list, the knowledge base, and the error log keywords. During dual-machine testing, the two machines are controlled to continuously exchange data packets, and whether a failed item is retested is controlled according to its failure count and test result. The whole system performs cyclic testing until no test item remains in the total test item list, and the test results are then output, divided into a passed test item list and a failed test item list. As shown in fig. 3, the implementation process is as follows:
(1) Three machines are first prepared: one for testing single-machine test items and the other two for testing dual-machine test items; the basic test environment is configured.
(2) Splitting all the test items in the LTP test suite into independent items, constructing a total test item list, judging the number of the test items in the total test item list, and executing the next step if the number of the test items is not zero.
(3) The test items are then separated into two categories, i.e., single-machine test items and dual-machine test items, and a single-machine test item list and a dual-machine test item list are built in the database.
(4) Testing is performed on different machines according to the test item types; during dual-machine testing, the two machines are controlled to continuously exchange data packets.
(5) The items are distinguished according to the test result: the test items that passed form a passed test item list, and those items are removed from the total test item list according to that list; for each failed test item, the next judgment is made according to the number of failures.
(6) If an item has failed more than 5 times, it is put into the failed test item list and the next operation is executed; if it has failed fewer than 5 times, keywords are extracted from its error log and searched in the knowledge base to see whether a relevant configuration exists; if so, the test environment is configured according to that experience knowledge and the item is retested.
(7) And deleting the test items in the failed test item list in the total test item list according to the failed test item list.
(8) After each round of testing, the experience knowledge gained is updated into the knowledge base, enriching it.
(9) Testing proceeds according to the total test item list until the number of test items in the total test item list is 0; the test then stops, and the passed test item list and the failed test item list are output.
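Steps (3) and (6) above can be sketched together. This is a hypothetical illustration: the predicate `is_dual` and the failure-count bookkeeping are assumptions for demonstration, with the limit of 5 taken from the "more than 5 times" rule in step (6).

```python
# Hypothetical sketch of steps (3) and (6). `is_dual` is an illustrative
# predicate deciding whether an item needs two machines; the limit of 5
# matches the "more than 5 times" rule in step (6).

FAIL_LIMIT = 5

def classify_items(items, is_dual):
    """Step (3): split items into single-machine and dual-machine lists."""
    single = [i for i in items if not is_dual(i)]
    dual = [i for i in items if is_dual(i)]
    return single, dual

def handle_failure(item, fail_counts, failed_list):
    """Step (6): after a failed run, either give up or schedule a retest."""
    fail_counts[item] = fail_counts.get(item, 0) + 1
    if fail_counts[item] > FAIL_LIMIT:
        failed_list.append(item)   # put into the failed test item list
        return "give_up"
    return "retest"                # extract log keywords and retest
```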
As shown in fig. 4, an embodiment of the present invention further provides an automatic LTP performance testing apparatus, which includes a test item acquisition module 22, a test item processing module 33, a control execution module 44, a test result processing module 55, and an output module 66;
the test item acquisition module 22 is configured to acquire all test items in the LTP test suite;
the test item processing module 33 is configured to split the test items into individual items and construct a total test item list from the split individual items;
the control execution module 44 is configured to classify the test items and, when the number of test items in the total test item list is not zero, control item-by-item execution of the tests on the corresponding machines according to the classification;
the test result processing module 55 is configured to summarize the test results, classify the test items by test result state to generate corresponding state lists, and delete the corresponding test items from the total test item list according to the generated state lists;
the output module 66 is configured to stop the test when the number of test items in the total test item list is zero and to output the generated state lists.
After development of the operating system is completed, the LTP performance test is carried out by this apparatus; developers no longer need to classify and test manually, and only have to handle the individual test items that failed repeatedly before completing the release of the version. This saves developers' time and greatly improves their working efficiency.
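The cooperation of the five modules above can be illustrated as a simple composition. The module numerals (22, 33, 44, 55, 66) follow the description; the class and method names are assumptions made for the sketch:

```python
# Illustrative composition of the apparatus in fig. 4.
# Numerals in comments match the description; names are assumptions.
class LtpTestApparatus:
    def __init__(self, acquire, process, execute, summarize, output):
        self.acquire = acquire      # test item acquisition module 22
        self.process = process      # test item processing module 33
        self.execute = execute      # control execution module 44
        self.summarize = summarize  # test result processing module 55
        self.output = output        # output module 66

    def run(self):
        items = self.process(self.acquire())  # split into the total list
        while items:                          # stop when the list is empty
            results = self.execute(items)
            self.summarize(items, results)    # prune the total list
        return self.output()
```

Each module is injected as a callable, so the loop structure (test until the total list is empty, then output) is visible independently of any concrete test runner.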
As shown in fig. 5, it should be noted that the apparatus further includes an environment configuration module 11, configured to build the test system and configure the LTP test environment. For example, machines are prepared for testing the single-machine test items and the double-machine test items respectively, and the basic test environment is configured via an installation package.
In some embodiments, the control execution module 44 includes a classification processing unit 441, a control execution unit 442;
the classification processing unit 441, configured to divide the test items into single-machine test items and double-machine test items according to their attributes, and to construct a single-machine test item list and a double-machine test item list in the database;
the control execution unit 442, configured to control item-by-item testing on different machines according to the corresponding test item list, by test item category.
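The attribute-based split performed by the classification processing unit can be sketched as below. The attribute values `"single"` and `"double"` and the dict item format are illustrative assumptions:

```python
# Sketch of the classification processing unit (441): split test items into
# single-machine and double-machine lists by an "attr" field (assumption).

def classify(test_items):
    single, double = [], []
    for item in test_items:
        (double if item.get("attr") == "double" else single).append(item)
    return single, double
```

The two resulting lists then drive the control execution unit, which runs each list on its corresponding machine(s).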
It should be further noted that the control execution unit 442 includes a test control sub-module 401, a test result judging sub-module 402, a test number counting sub-module 403, and a test threshold judging sub-module 404;
a test control sub-module 401, configured to select a test item in the test item list for testing, and, when a test fails, to control retesting of the failed test item;
a test result judging sub-module 402 for judging whether the test passes;
a test number statistics sub-module 403, configured to count the test number of the test item;
the test threshold judging sub-module 404 is configured to judge whether the number of tests reaches a set threshold.
In order to improve test efficiency, a maximum number of test attempts is set for failing test items; once a test item has failed the set number of times, it is output, so that testers only need to further process the failed items, saving test time.
In some embodiments, the control execution unit 442 further includes a keyword extraction submodule 405, a search submodule 406, and a configuration information acquisition submodule 407;
a keyword extraction submodule 405, configured to extract keywords of the error log;
a searching sub-module 406, configured to search the knowledge base for relevant configurations according to the keywords;
a configuration information obtaining sub-module 407, configured to obtain the found relevant configuration and output it to the environment configuration module 11 when the search sub-module 406 finds a relevant configuration;
an environment configuration module 11 for reconfiguring the test environment according to the received relevant configuration.
In order to eliminate failures caused by errors in the environment configuration process, failed test items are retested after the environment has been reconfigured with the re-acquired relevant configuration items. Configuration problems are thus ruled out automatically during testing, further saving testers' time.
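The keyword-driven recovery path (sub-modules 405–407 together with the environment configuration module 11) can be sketched as follows. Modeling the knowledge base as a dict from error-log keywords to configuration snippets, and the `error: <word>` log pattern, are assumptions for illustration:

```python
# Sketch of sub-modules 405-407 and the environment configuration module 11.
# The knowledge base is modeled as a plain dict from error-log keywords to
# configuration snippets (assumption).
import re

def extract_keyword(error_log):
    # assumption: an "error: <word>" token identifies the failure cause
    m = re.search(r"error:\s*(\w+)", error_log, re.IGNORECASE)
    return m.group(1).lower() if m else None

def recover(error_log, knowledge_base, configure_env):
    key = extract_keyword(error_log)      # keyword extraction sub-module 405
    config = knowledge_base.get(key)      # search sub-module 406
    if config is not None:
        configure_env(config)             # environment configuration module 11
        return True                       # environment reconfigured; retest
    return False                          # no known fix; retest as-is
```

When `recover` returns `True`, the failed test item is run again under the reconfigured environment, matching step c4 of the method.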
In some embodiments, the apparatus further comprises a database storage module 77 and an update module 88;
a database storage module 77 for storing test items according to different categories, respectively; storing the error report log of the history test and a knowledge base;
an update module 88 is provided for updating the knowledge base after each round of testing.
Test experience shows that most test item failures are caused by problems with the environment configuration; therefore, a knowledge base indexed by keywords is built from previous test records and used to configure the environment for these failing test items.
In some embodiments, the test result processing module 55 is specifically configured to aggregate the test results, form the test items that passed into a passing test item list and remove those items from the total test item list according to the passing test item list, and form the test items that failed into a failed test item list and remove those items from the total test item list according to the failed test item list.
The test items in the single-machine test item list and the double-machine test item list together constitute the test items in the total test item list. Cyclic testing is carried out according to the single-machine and double-machine test item lists respectively until the number of test items in the total test item list is 0; the test then stops, and the passing test item list and the failed test item list are output.
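The pruning performed by the test result processing module can be sketched as a partition of results followed by removal from the total list. The `(item, passed)` tuple format for results is an assumption:

```python
# Sketch of the test result processing module (55): partition results into
# passing/failed lists and remove both from the total list.
# Results as (item, passed) tuples is an assumption.

def process_results(total_items, results):
    passing = [item for item, ok in results if ok]
    failed = [item for item, ok in results if not ok]
    for item in passing + failed:
        if item in total_items:
            total_items.remove(item)
    return passing, failed
```

Items not yet tested in this round stay in the total list and are picked up in the next cycle, which is why the loop terminates exactly when the total list reaches length 0.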
Although the present invention has been described in detail by way of preferred embodiments with reference to the accompanying drawings, the present invention is not limited thereto. Various equivalent modifications and substitutions may be made in the embodiments of the present invention by those skilled in the art without departing from the spirit and scope of the present invention, and it is intended that all such modifications and substitutions fall within the scope of the present invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. An automatic LTP performance test method is characterized by comprising the following steps:
acquiring all test items in the LTP test suite;
splitting the test items into individual items, and constructing a total test item list by the split individual items;
when the number of the test items in the total test item list is not zero, classifying the test items and controlling item-by-item execution of the tests on the corresponding machines according to the classification;
summarizing the test results, classifying the test items according to the test result states to generate corresponding state lists, and deleting the corresponding test items in the total test item list according to the generated state lists;
when the test is completed, outputting the generated state list;
the step of classifying the test items and controlling item-by-item execution on the corresponding machines according to the classification includes: dividing the test items into single-machine test items and double-machine test items according to their attributes, and constructing a single-machine test item list and a double-machine test item list in a database; and, by test item category, controlling item-by-item testing on different machines according to the corresponding test item list;
the step of controlling item-by-item testing on different machines according to the corresponding test item list, by test item category, comprises the following steps:
step a: selecting test items in the test item list for testing;
step b: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step c;
step c: re-testing the test items which fail to be tested;
step c1: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step c2;
step c2: extracting keywords of the error report log;
step c3: searching whether related configuration exists in a knowledge base according to the keywords; if yes, executing the step c4; if not, executing the step c5;
step c4: acquiring relevant configuration and configuring a test environment;
step c5: retesting the test item; executing the step d;
step d: judging whether the test is passed or not; if yes, executing the step a to select the next test item; if not, executing the step e;
step e: counting the test times of the test item;
step f: judging whether the number of tests reaches a set threshold; if yes, the test item fails, and the step a is executed to select the next test item; if not, executing the step c;
wherein in the step f, it is judged whether the number of tests reaches the set threshold; if yes, the test item fails and the step a is executed to select the next test item; if not, the step c5 is executed.
2. The automated LTP performance testing method of claim 1, wherein the step of obtaining all test items in the LTP test suite is preceded by the step of:
and (5) building a test system and configuring an LTP test environment.
3. The automated LTP performance testing method of claim 2, wherein after the step of splitting the test items into individual items and constructing the total test item list from the split individual items, further comprises:
storing the test items according to different categories respectively;
storing the error report log of the history test and a knowledge base; after each round of testing is completed, the knowledge base is updated.
4. The automated LTP performance testing method of claim 3, wherein the step of summarizing the test results, classifying the test items according to the status of the test results to generate a corresponding status list, and deleting the corresponding test items in the total test item list according to the generated status list includes:
summarizing the test results, forming the test items that passed into a passing test item list, and removing those items from the total test item list according to the passing test item list;
and forming the test items that failed into a failed test item list, and removing those items from the total test item list according to the failed test item list.
5. The automatic LTP performance testing device is characterized by comprising a test item acquisition module, a test item processing module, a control execution module, a test result processing module, an output module and an environment configuration module;
the test item acquisition module is used for acquiring all the test items in the LTP test suite;
the test item processing module is used for splitting the test item into independent items and constructing a total test item list by the split independent items;
the control execution module is used for classifying the test items and controlling item-by-item execution of the tests on the corresponding machines according to the classification when the number of the test items in the total test item list is not zero;
the test result processing module is used for summarizing test results, classifying the test items according to the test result states to generate corresponding state lists, and deleting the corresponding test items in the total test item list according to the generated state lists;
the output module is used for stopping the test when the number of the test items in the total test item list is zero and outputting the generated state list;
the control execution module comprises a classification processing unit and a control execution unit;
the classification processing unit is used for dividing the test items into single-machine test items and double-machine test items according to the attributes, and constructing a single-machine test item list and a double-machine test item list in the database;
the control execution unit is used for controlling item-by-item testing on different machines according to the corresponding test item list, by test item category;
the control execution unit comprises a test control sub-module, a test result judging sub-module, a test frequency counting sub-module, a test threshold judging sub-module, a keyword extraction sub-module, a search sub-module and a configuration information acquisition sub-module;
the test control sub-module is used for selecting a test item in the test item list for testing, and, when a test fails, controlling retesting of the failed test item;
the test result judging sub-module is used for judging whether the test passes or not;
the test frequency counting sub-module is used for counting the test frequency of the test item;
the test threshold judging sub-module is used for judging whether the test times reach a set threshold value;
the keyword extraction sub-module is used for extracting keywords of the error report log;
the searching sub-module is used for searching relevant configuration in the knowledge base according to the keywords;
the configuration information acquisition sub-module is used for acquiring the found relevant configuration and outputting it to the environment configuration module when the search sub-module finds the relevant configuration;
and the environment configuration module is used for reconfiguring the test environment according to the received relevant configuration.
6. The automated LTP performance testing apparatus of claim 5, wherein the environment configuration module is configured to build a testing system and configure an LTP testing environment.
CN202110996846.5A 2021-08-27 2021-08-27 Automatic LTP performance test method and device Active CN113868035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110996846.5A CN113868035B (en) 2021-08-27 2021-08-27 Automatic LTP performance test method and device


Publications (2)

Publication Number Publication Date
CN113868035A CN113868035A (en) 2021-12-31
CN113868035B true CN113868035B (en) 2023-07-18

Family

ID=78988592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110996846.5A Active CN113868035B (en) 2021-08-27 2021-08-27 Automatic LTP performance test method and device

Country Status (1)

Country Link
CN (1) CN113868035B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114817033A (en) * 2022-04-26 2022-07-29 歌尔股份有限公司 Product testing method and device for automatic production line, terminal equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679390A (en) * 2013-12-26 2014-03-26 北京奇虎科技有限公司 Background testing method and device of configuration system
CN110221943A (en) * 2019-04-23 2019-09-10 努比亚技术有限公司 Configurable test method, test device and computer readable storage medium



Similar Documents

Publication Publication Date Title
Lou et al. Mining invariants from console logs for system problem detection
US7315973B1 (en) Method and apparatus for choosing tests for simulation and associated algorithms and hierarchical bipartite graph data structure
CN105068929A (en) Test script generation method, test script generation device, testing method, testing device and testing system
US20100262866A1 (en) Cross-concern code coverage assessment
CN109936479B (en) Control plane fault diagnosis system based on differential detection and implementation method thereof
CN109408385B (en) A kind of disfigurement discovery method based on mischief rule and classifying feedback
CN110297760A (en) Building method, device, equipment and the computer readable storage medium of test data
CN113868035B (en) Automatic LTP performance test method and device
CN111736865B (en) Database upgrading method and system
Zhou et al. Confmapper: Automated variable finding for configuration items in source code
CN112231163A (en) Multifunctional computer detection equipment and operation method thereof
CN105653455A (en) Program vulnerability detection method and detection system
Parsa et al. Software fault localization via mining execution graphs
CN109902012A (en) A kind of automation generates the method and device of server test report
CN116302984A (en) Root cause analysis method and device for test task and related equipment
Zhang et al. BuildSheriff: Change-aware test failure triage for continuous integration builds
CN113051582B (en) Computer software technology development and debugging system
Thilagaraj et al. Programming Routine Tasks Utilizing Scripting Automation in Generation of QSCAN Database
CN115098401A (en) HTML report verification method and device, electronic equipment and storage medium
CN113342632A (en) Simulation data automatic processing method and device, electronic equipment and storage medium
Li et al. Software misconfiguration troubleshooting based on state analysis
Huang et al. Software FMEA approach based on failure modes database
Coppola et al. Evolution and fragilities in scripted gui testing of android applications
CN115033415B (en) Chaotic engineering fault evaluation method based on FMEA
Schörgenhumer et al. Using crash frequency analysis to identify error-prone software technologies in multi-system monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant