WO2020137096A1 - Test device, and development support device - Google Patents


Info

Publication number
WO2020137096A1
Authority
WO
WIPO (PCT)
Prior art keywords
test
information
project
unit
error prediction
Application number
PCT/JP2019/040551
Other languages
French (fr)
Japanese (ja)
Inventor
Shuji Miyashita (修治 宮下)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to JP2020562382A (granted as JP7034334B2)
Publication of WO2020137096A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 — Error detection; Error correction; Monitoring
    • G06F 11/36 — Preventing errors by testing or debugging software

Definitions

  • The present invention relates to a test device for testing a target device, and to a development support device.
  • A conventional test apparatus evaluates the complexity of a program's source code and the required resources, and searches for the optimum combination of tests that improves the quality of the software as a whole, thereby making testing of the target apparatus more efficient (for example, Patent Document 1).
  • The conventional test device can execute tests based on source code, that is, white-box tests, but it does not perform tests based on specification information from upstream processes such as the required specifications, so it cannot make black-box tests more efficient. Furthermore, the conventional test apparatus cannot automatically present optimum countermeasures, based on a risk analysis ordered by priority, from the error information predicted to remain after the tests are executed.
  • The present invention has been made to solve the above problems, and provides a test apparatus that makes black-box tests, in addition to white-box tests, more efficient. Furthermore, the present invention provides a development support device that runs without interruption from automatic test execution through to project diagnosis after test completion, and automatically generates, as a report, an optimal countermeasure plan based on a risk analysis ordered by priority over the error information expected to remain.
  • the test apparatus includes a processing unit.
  • the processing unit generates a test scenario for performing a test in the project from the information about the project to be tested.
  • the processing unit includes a learning unit, an error prediction information acquisition unit, a risk evaluation unit, a test item generation unit, a test case information acquisition/generation unit, and a test scenario generation unit.
  • The learning unit trains, using a machine learning database that stores the various types of information used in machine learning, an error prediction model that outputs error prediction information including information indicating the likelihood that an input test case will detect an error.
  • the error prediction information acquisition unit inputs the information about the project to be tested into the error prediction model and acquires each error prediction information corresponding to each test case.
  • the risk evaluation unit calculates a risk evaluation value indicating a probability that an error is detected by a test case corresponding to the acquired error prediction information and a degree of influence of the error, and adds the risk evaluation value to the error prediction information.
  • the test item generation unit generates test item information indicating a test item to be executed based on the error prediction information with the risk evaluation value added.
  • The test case information acquisition/generation unit acquires the test case information corresponding to the test items included in the test item information from the test case database included in the machine learning database.
  • the test scenario generation unit generates the test scenario based on the acquired test case information.
  • the test apparatus of the present invention can improve the efficiency of the black box test in addition to the white box test. Furthermore, the development support device of the present invention can efficiently present the optimum countermeasure plan according to the risk priority of each project diagnosis after the completion of the automatic test execution.
  • Block diagram showing the configuration of the test apparatus according to the first embodiment
  • Diagram showing the information stored in the machine learning DB
  • Flowchart showing the error prediction model generation operation of the test apparatus
  • Flowchart showing the scenario generation and execution operation of the test apparatus
  • Diagram showing the components of the test apparatus and the input/output data for each component
  • Block diagram showing the configuration of the development support apparatus according to the third embodiment
  • Flowchart showing the project diagnostic model generation operation of the development support device
  • Block diagram showing the configuration of the development support apparatus according to the fourth embodiment
  • Flowchart showing the risk analysis operation using the project diagnosis model of the development support device
  • Each of the plurality of embodiments and the modifications thereof described below has a characteristic configuration.
  • the characteristic configuration or operation in one form can be applied to another form, and the present invention is not limited to the following exemplified forms.
  • Embodiment 1. FIG. 1 is a block diagram showing the configuration of the test apparatus 1 according to the first embodiment.
  • the test apparatus 1 executes a test regarding the operation based on the communication protocol between the test target apparatus 2A and the related apparatus 2B.
  • the test apparatus 1 generates a test scenario based on the information about the project to be tested, and executes the test according to the test scenario.
  • the project here refers to a project for product (device) development, system development, or the like that involves software design.
  • the test apparatus 1 includes a processor 10, a main storage unit 20, an auxiliary storage unit 30, a display unit 40, an operation unit 50, and a communication unit 60.
  • the test apparatus 1 is, for example, a personal computer.
  • the processor 10 is connected to other hardware via a signal line.
  • the processor 10 can be realized by a central processing unit (CPU), MPU, DSP, GPU, microcomputer, FPGA, ASIC and the like.
  • the processor 10 realizes various functions by reading an OS (Operating System), application programs, and various data stored in an auxiliary storage unit 30 described later and executing arithmetic processing.
  • the processor 10 includes the functional configuration described below.
  • the functional configuration may be implemented by firmware.
  • the processor 10 is an example of the processing unit 10.
  • Hardware integrating the processor 10 with the main storage unit 20 and the auxiliary storage unit 30 described later is also referred to as a "processing circuit".
  • the main storage unit 20 is a volatile storage unit.
  • the main storage unit 20 can be realized by a RAM (Random Access Memory) or the like.
  • the main storage unit 20 temporarily stores the data used, generated, input/output, or transmitted/received in the test apparatus 1.
  • the auxiliary storage unit 30 is a non-volatile storage unit.
  • the auxiliary storage unit 30 can be realized by a ROM (Read Only Memory), a HDD (Hard Disk Drive), a flash memory, or the like.
  • the auxiliary storage unit 30 stores the OS, application programs, and various data. At least a part of the OS is loaded into the main storage unit 20 and executed by the processor 10.
  • the auxiliary storage unit 30 further stores a machine learning database 31 and an error prediction model 32 generated based on the machine learning database 31.
  • the database will be referred to as "DB".
  • The error prediction model 32 is an instance of a machine learning model.
  • The machine learning DB 31 may be stored in, for example, an external server and accessed from the test apparatus 1 via an external network and an interface unit of the test apparatus 1, so that the test apparatus 1 and the machine learning DB 31 together are configured as a test system.
  • the error prediction model 32 outputs the error prediction information 312 that is information about an error that can occur in the test target project when the input data 316 including information about the test target project is input.
  • the error prediction model 32 is generated and updated by the information stored in the machine learning DB 31. Details of the machine learning DB 31 will be described later.
  • the display unit 40 displays character strings and images according to user operations.
  • the display unit 40 is composed of a liquid crystal display, an organic EL display, or the like.
  • the operation unit 50 is composed of a keyboard, a mouse, a numeric keypad, and the like.
  • the user operates the test apparatus 1 via the operation unit 50.
  • the operation unit 50 may include a touch panel that is arranged so as to be superimposed on the display unit 40 and that can accept a touch operation by the user.
  • the communication unit 60 transmits/receives various data to/from the device under test 2A.
  • the communication unit 60 includes a receiver and a transmitter.
  • the receiver receives various data from the device under test 2A.
  • the transmitter transmits various data from the processor 10 to the device under test 2A.
  • the communication unit 60 can be realized by a communication chip, a NIC (Network Interface Card), or the like.
  • FIG. 2 is a diagram showing information stored in the machine learning DB 31 of the auxiliary storage unit 30.
  • The machine learning DB 31 is a database that stores various information used in machine learning. Specifically, the machine learning DB 31 includes a test result DB 301, a post-shipment defect DB 302, a test technology DB 303, a domain knowledge DB 304, and a project DB 305.
  • the machine learning DB 31 further includes a test case DB 306, a test scenario DB 307, a document DB 308, a structured DB 309, a supervised DB 310, and an unsupervised DB 311.
  • the information stored in the machine learning DB 31 is an example of information about an existing project for which test data already exists.
  • The test result DB 301 stores test result information 301A regarding the test results of existing projects.
  • The test result information 301A is composed of information such as a test case number, a test process name, a model name, whether or not an error was detected, the content of the error, the location of the error, the cause of the error, a sequence diagram, and a test log. An error is a defect, a design error, or the like that may cause a failure after shipping.
  • the test result information 301A is automatically generated by the test result comparison unit 112 described later.
  • the model name is a product code under development, but is not limited to this.
  • the model name may be a product name.
  • the post-shipment defect DB 302 stores post-shipment defect information 302A relating to post-shipment defects of existing projects.
  • the post-shipment defect information 302A includes information such as a system name, a model name, a software version name, the date and time of occurrence of the defect, the content of the defect, the cause of the defect, and the defect detection density.
  • the post-shipment defect information 302A is manually input by a person in charge of the quality assurance department.
  • the test technology DB 303 stores test technology information 303A regarding the test technology.
  • the test technique information 303A includes information such as a test technique name, a test viewpoint, and a test condition.
  • the test technique names are, for example, “white box test”, “black box test”, “boundary value analysis”, “equivalence division”, “CFD”, “data flow”, “decision table” and the like.
  • the test technique information 303A is manually input by a test engineer.
  • the domain knowledge DB 304 stores domain knowledge information 304A regarding the product field of the existing project.
  • the domain knowledge information 304A is composed of information about the configuration of the product, the function of the product, the hardware dependency, and the like. For example, in an air conditioner project, the product configurations are "indoor unit” and "outdoor unit”.
  • the domain knowledge information 304A is manually input by an existing project participation engineer.
  • the project DB 305 stores project information 305A regarding existing projects.
  • The project information 305A is composed of information such as development scale, development period, development progress status, number of development personnel, development method, code reuse rate, productivity, model name, function name, device version, person in charge, process, error detection density, and risks.
  • the project information 305A is manually input by a project participating engineer.
  • the test case DB 306 stores the test case information 306A regarding the test executed for the existing project.
  • the test case information 306A is composed of information such as test case numbers, input values, pre-execution conditions, expected values, and post-execution conditions.
  • the test case information 306A is generated by the test case information acquisition/generation unit 109 described later.
  • the execution precondition information includes, for example, a test case number of a test case that needs to be executed before the test case is executed.
  • the expected value is a value expected as information indicated by the response signal when the device under test 2A operates correctly.
  • the information on the post-execution condition includes, for example, the test case number of the test case that needs to be executed after executing the test case.
  • the test scenario DB 307 stores a test scenario 307A in which the contents of the test performed on the existing project are described in time series.
  • the test scenario 307A is described by DSL (domain-specific language).
  • the test scenario 307A is, for example, XML format data.
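The publication gives XML as an example format for the test scenario 307A but does not disclose the DSL's actual schema, so the element and attribute names below (`scenario`, `step`, `request`, `expected`) are hypothetical. A minimal sketch of parsing such a scenario into an ordered list of steps:

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario document; the schema is an assumption, not from the publication.
SCENARIO_XML = """\
<scenario id="TS-001">
  <step case="TC-10">
    <request>POWER_ON</request>
    <expected>ACK</expected>
  </step>
  <step case="TC-11">
    <request>SET_TEMP 25</request>
    <expected>TEMP_SET</expected>
  </step>
</scenario>
"""

def parse_scenario(xml_text):
    """Parse a scenario into an ordered list of (case, request, expected) tuples."""
    root = ET.fromstring(xml_text)
    return [(step.get("case"), step.findtext("request"), step.findtext("expected"))
            for step in root.findall("step")]

steps = parse_scenario(SCENARIO_XML)
print(steps[0])  # ('TC-10', 'POWER_ON', 'ACK')
```

Because the steps come back in document order, the executor can simply iterate over the list to drive the device under test in the sequence the scenario describes.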
  • the document DB 308 stores document information 308A such as design documents, requirement specifications, test specifications, source codes, state transition diagrams, specification change information, and sequence diagrams.
  • document information 308A is created by an engineer involved in an existing project.
  • the sequence diagram can be generated by the sequence diagram generation unit 113 described later.
  • the structured DB 309 is a database designed based on a relational model (Relational Data Model).
  • The contents of the test result information 301A, the post-shipment defect information 302A, the test technology information 303A, the domain knowledge information 304A, and the project information 305A are structured and stored by the data structuring unit 101 described later.
  • The structured DB 309 includes information such as the model name, system name, function name, test case number, whether or not an error was detected in each test case, and whether or not the error leaked as a defect into a later process or after shipping.
  • the supervised DB 310 stores supervised data 310A used for machine learning of the error prediction model 32.
  • the supervised data 310A is generated by the supervised data generation unit 102 described below based on the information in the structured DB 309.
  • The supervised data 310A is, for example, data in which each test case number and the function name corresponding to the test case carry a correct-answer label indicating whether or not an error was detected in the test case and whether or not the error leaked as a defect into a subsequent process or after shipping.
  • the unsupervised DB 311 stores unsupervised data 311A used for machine learning of the error prediction model 32.
  • the unsupervised data 311A is generated by the unstructured data analysis unit 103 described later based on the document information 308A (for example, required specifications).
  • the unsupervised data 311A includes, for example, a test case number and a function name corresponding to the test case.
  • The unsupervised data 311A does not include correct-answer labels indicating whether or not an error was detected in each test case or whether or not the error leaked as a defect into a subsequent process or after shipping.
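The split between the supervised DB 310 and the unsupervised DB 311 can be illustrated with hypothetical record shapes; the field names below are assumptions for illustration, not from the publication:

```python
# Hypothetical shape of a supervised data 310A record: features plus correct-answer labels.
supervised_record = {
    "test_case_number": "TC-10",
    "function_name": "power_control",
    "label_error_detected": True,           # correct-answer label
    "label_leaked_after_shipping": False,   # correct-answer label
}

# Hypothetical shape of an unsupervised data 311A record: features only, no labels.
unsupervised_record = {
    "test_case_number": "TC-99",
    "function_name": "timer_control",
}

def is_supervised(record):
    """A record counts as supervised iff it carries both correct-answer labels."""
    return {"label_error_detected", "label_leaked_after_shipping"} <= record.keys()

print(is_supervised(supervised_record), is_supervised(unsupervised_record))  # True False
```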
  • FIGS. 3 to 5 are diagrams showing the functional configuration of the processor 10 and the input/output data for each functional configuration.
  • FIG. 3 shows the functional configuration of the processor 10 and the input/output data for each functional configuration until the error prediction model 32 is generated based on the information stored in the machine learning DB 31.
  • the processor 10 includes a data structuring unit 101, a supervised data generation unit 102, an unstructured data analysis unit 103, and a learning unit 104 as a functional configuration.
  • The data structuring unit 101 stores the test result information 301A, the post-shipment defect information 302A, the test technology information 303A, the domain knowledge information 304A, and the project information 305A in the structured DB 309.
  • the supervised data generation unit 102 generates supervised data 310A when the information in the structured DB 309 is input. Details of the operation will be described later.
  • the unstructured data analysis unit 103 performs text analysis on the document DB 308 and the supervised data 310A, updates the supervised data 310A, and generates unsupervised data 311A. Details of the operation will be described later.
  • The learning unit 104 trains the error prediction model 32 using the supervised data 310A and the unsupervised data 311A. Details of the operation will be described later.
  • the learning unit 104 trains the error prediction model 32 by, for example, the bootstrap method (BootStrap Method).
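As a hedged illustration of the bootstrap (self-training) idea: train on the labeled data, pseudo-label the unlabeled points the model is confident about, and retrain. The one-dimensional nearest-centroid classifier and the toy data below stand in for the error prediction model 32 and are purely illustrative:

```python
# Minimal self-training sketch: grow the labeled set with confident pseudo-labels.
def centroid(xs):
    return sum(xs) / len(xs)

def self_train(labeled, unlabeled, rounds=3, margin=0.5):
    labeled = list(labeled)   # (feature, label) pairs, label in {0, 1}
    pool = list(unlabeled)    # bare feature values, no labels
    for _ in range(rounds):
        c0 = centroid([x for x, y in labeled if y == 0])
        c1 = centroid([x for x, y in labeled if y == 1])
        still_unlabeled = []
        for x in pool:
            d0, d1 = abs(x - c0), abs(x - c1)
            if abs(d0 - d1) >= margin:   # confident -> attach a pseudo-label
                labeled.append((x, 0 if d0 < d1 else 1))
            else:                        # ambiguous -> leave unlabeled
                still_unlabeled.append(x)
        pool = still_unlabeled
    return labeled, pool

labeled = [(0.1, 0), (0.2, 0), (0.9, 1), (1.0, 1)]
grown, left = self_train(labeled, [0.15, 0.95, 0.55])
print(len(grown), left)  # 6 [0.55]
```

The confident points 0.15 and 0.95 get pseudo-labeled, while the ambiguous midpoint 0.55 stays unlabeled; this is the mechanism by which unsupervised data 311A can contribute to training.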
  • the processor 10 includes, as a functional configuration, an input data generation unit 105, an error prediction information acquisition unit 106, a risk evaluation unit 107, a test item generation unit 108, and a test case information acquisition/generation unit 109.
  • the processor 10 further includes, as a functional configuration, a test scenario generation unit 110, a test scenario execution unit 111, a test result comparison unit 112, and a sequence diagram generation unit 113.
  • The input data generation unit 105 generates the input data 316 when the project information about the project to be tested, stored in the structured DB 309, is input.
  • the error prediction information acquisition unit 106 inputs the input data 316 to the error prediction model 32 and acquires the error prediction information 312.
  • the error prediction information 312 is information indicating an error that may occur in the project to be tested.
  • The error prediction information 312 is information in which a test case number, the content of the test case, information about whether an error may occur in the test case, and information about whether the error may leak as a defect into a later process or after shipping are associated with one another.
  • the error prediction information 312 is stored in, for example, a storage area (not shown) of the auxiliary storage unit 30 for temporarily storing a large amount of data.
  • the risk evaluation unit 107 calculates and adds a risk evaluation value for each test case of the error prediction information 312.
  • The error prediction information to which the risk evaluation value is added is referred to as error prediction information 312A. The risk evaluation value will be described later.
  • When the error prediction information 312A is input, the test item generation unit 108 generates the test item information 313.
  • the test item information 313 is information in which items of tests to be executed are described.
  • the test item information 313 is information in which a test case number and information indicating the content of the test case are associated with each other. The test item number is attached to the test item information 313 for each test case.
  • the test case information acquisition/generation unit 109 acquires the test case information 306A corresponding to each test item of the test item information 313 from the test case DB 306.
  • the test case information acquisition/generation unit 109 generates the test case information 306A corresponding to the test item according to the user operation.
  • the test case information acquisition/generation unit 109 stores the generated test case information 306A in the test case DB 306.
  • When the test case information 306A is input, the test scenario generation unit 110 generates a test scenario 307A.
  • When the test scenario 307A is input, the test scenario execution unit 111 operates the test target device 2A according to the test scenario 307A and executes the test.
  • the test scenario execution unit 111 operates the test target device 2A by transmitting an operation request signal to the test target device 2A according to the test scenario 307A.
  • the test target device 2A transmits a response signal indicating the result of the operation corresponding to the request signal to the test scenario execution unit 111.
  • the test scenario execution unit 111 associates the information indicated by the request signal with the information indicated by the response signal to generate the test result information 314, and outputs the test result information 314 to the test result comparison unit 112.
  • When the test result information 314 is input, the test result comparison unit 112 generates test result comparison information 315.
  • the test result comparison information 315 is information in which the information indicated by the request signal, the information indicated by the response signal, and the expected value are associated with each other.
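The comparison described above can be sketched as follows; the field names and the PASS/FAIL verdict format are assumptions, since the publication only states that request, response, and expected value are associated:

```python
# Sketch of the comparison the test result comparison unit 112 performs:
# pair each request with its response and expected value, and flag mismatches.
def compare_results(test_results, expected_values):
    """test_results: list of (request, response); expected_values: request -> expected."""
    rows = []
    for request, response in test_results:
        expected = expected_values.get(request)
        rows.append({
            "request": request,
            "response": response,
            "expected": expected,
            "verdict": "PASS" if response == expected else "FAIL",
        })
    return rows

rows = compare_results(
    [("POWER_ON", "ACK"), ("SET_TEMP 25", "ERR")],
    {"POWER_ON": "ACK", "SET_TEMP 25": "TEMP_SET"},
)
print([r["verdict"] for r in rows])  # ['PASS', 'FAIL']
```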
  • When the test result information 314 is input, the sequence diagram generation unit 113 generates the sequence diagram 308A and stores it in the document DB 308.
  • the test apparatus 1 generates the error prediction model 32 based on the information stored in the machine learning DB 31.
  • The test apparatus 1 generates input data 316 from the information about the test target project stored in the structured DB 309, inputs the input data 316 into the error prediction model 32, and generates a test scenario 307A for testing the test target apparatus 2A.
  • the test apparatus 1 executes the test of the test target apparatus 2A according to the test scenario 307A.
  • FIG. 6 is a flowchart showing the operation of the test apparatus 1 for generating the error prediction model 32.
  • the operation of the test apparatus 1 will be described with reference to the flowchart of FIG.
  • the data structuring unit 101 stores the test record information 301A, post-shipment defect information 302A, test technology information 303A, domain knowledge information 304A, and project information 305A in the structured DB 309 (S101).
  • Specifically, the data structuring unit 101 associates the model name, function name, system name, test case number, test case content, error or defect content, a record of whether the error was detected, and information about whether the error leaked as a defect into a later process or after shipping, and stores them in the structured DB 309.
  • When the information in the structured DB 309 is input, the supervised data generation unit 102 generates supervised data 310A (S102).
  • Specifically, the supervised data generation unit 102 generates the supervised data 310A by adding, to each test case number included in the structured DB 309, a correct-answer label indicating whether or not there is a record of an error being detected and whether or not there is a record of the error leaking as a defect into a subsequent process or after shipping.
  • the unstructured data analysis unit 103 performs text analysis on the document DB 308 (S103). For example, the unstructured data analysis unit 103 text-analyzes the document DB 308 using the function name as a search keyword.
  • The unstructured data analysis unit 103 analyzes the similarity between the information in the document DB 308 and the information in the supervised data 310A based on the text analysis result, updates the supervised data 310A, and generates the unsupervised data 311A (S104).
  • the learning unit 104 trains the error prediction model 32 by the semi-supervised learning algorithm (S105).
  • As described above, the test apparatus 1 trains the error prediction model 32 based on information such as the source code and the required specifications. Therefore, the test apparatus 1 according to the present embodiment can build an error prediction model 32 that outputs, in addition to the test items of white-box tests, the test items of black-box tests, such as tests based on the requirements described in the requirement specifications.
  • FIG. 7 is a flowchart showing the operation of the test apparatus 1 for generating the test scenario 307A and executing the test of the test target apparatus 2A according to the test scenario 307A.
  • the operation of the test apparatus 1 will be described with reference to the flowchart of FIG.
  • The processor 10 sets a test strategy according to a user operation (S200). For example, the test strategy specifies whether to generate test item information 313 covering all test cases whose risk evaluation value, described later, is equal to or greater than a predetermined threshold, or to generate test item information 313 covering only the important test cases.
  • The predetermined threshold is set in advance by the user, for example by inputting a coverage degree indicating the ratio of the number of items to be executed to the total number of items in the test item information 313. Important test cases are also set in advance by the user.
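The two selection strategies above, a threshold on the risk evaluation value or a coverage ratio over the ranked test cases, might be sketched as follows; the record shape is an assumption:

```python
# Hypothetical test-case records carrying the risk evaluation value.
def select_by_threshold(cases, threshold):
    """Keep every case whose risk evaluation value clears the threshold."""
    return [c for c in cases if c["risk"] >= threshold]

def select_by_coverage(cases, coverage):
    """Keep the top `coverage` fraction of cases, ranked by risk evaluation value."""
    n = max(1, round(len(cases) * coverage))
    return sorted(cases, key=lambda c: c["risk"], reverse=True)[:n]

cases = [{"id": "TC-1", "risk": 0.9}, {"id": "TC-2", "risk": 0.4},
         {"id": "TC-3", "risk": 0.7}, {"id": "TC-4", "risk": 0.1}]
print([c["id"] for c in select_by_threshold(cases, 0.5)])  # ['TC-1', 'TC-3']
print([c["id"] for c in select_by_coverage(cases, 0.5)])   # ['TC-1', 'TC-3']
```

With a 50% coverage degree the two strategies happen to agree here; in general the coverage form guarantees how many items run, while the threshold form guarantees a minimum risk level.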
  • When the information about the project to be tested, stored in the structured DB 309, is input, the input data generation unit 105 generates the input data 316 (S201).
  • the information about the project to be tested is input according to a user operation via the operation unit 50, for example.
  • The error prediction information acquisition unit 106 inputs the input data 316 into the error prediction model 32 and acquires the error prediction information 312 (S202).
  • the project information 305A is stored in the project DB 305, for example.
  • The risk evaluation unit 107 calculates a risk evaluation value for each test case of the error prediction information 312 and generates the error prediction information 312A to which the risk evaluation value is added (S203).
  • the risk evaluation value is, for example, a value obtained by integrating the likelihood that an error will be detected and the influence degree of the error.
  • the degree of influence is calculated by analyzing the dependency relationships and relationships between the test cases.
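As a hedged illustration, one way to combine the two factors is a simple product, with the influence degree taken as one plus the number of test cases that depend on the case in question; the publication states only that the likelihood and the influence degree are combined, so the exact weighting here is an assumption:

```python
# Sketch of the risk evaluation in S203: risk = P(error) x influence degree,
# where influence is derived from the dependency relationships between test cases.
def influence_degree(case_id, dependencies):
    """dependencies: dict mapping each case to the list of cases it depends on."""
    return 1 + sum(case_id in deps for deps in dependencies.values())

def risk_value(p_error, case_id, dependencies):
    """Combine likelihood of detecting an error with the error's influence degree."""
    return p_error * influence_degree(case_id, dependencies)

deps = {"TC-2": ["TC-1"], "TC-3": ["TC-1"], "TC-4": ["TC-3"]}
print(risk_value(0.5, "TC-1", deps))  # 0.5 * (1 + 2 dependents) = 1.5
```

A case like TC-1 that other cases depend on scores higher than a leaf case with the same error likelihood, which matches the intent of prioritizing errors with wider impact.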
  • the test item generation unit 108 generates the test item information 313 based on the test strategy and the risk evaluation value (S204).
  • The test case information acquisition/generation unit 109 acquires the test case information 306A corresponding to each test item of the test item information 313 from the test case DB 306 (S205). At this time, when the test case information acquisition/generation unit 109 determines that the test item information 313 includes a test item that does not exist in the test case DB 306, it generates the test case information 306A corresponding to that test item. The test case information acquisition/generation unit 109 also sets the execution order of the test cases based on the pre-execution and post-execution conditions of each piece of test case information 306A, and may change the execution order according to a user operation.
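Deriving an execution order from pre-execution conditions is a topological-sort problem; a sketch using Kahn's algorithm follows, where the input data shape (a mapping from each case to the cases that must run before it) is an assumption:

```python
from collections import deque

def execution_order(preconditions):
    """preconditions: dict mapping each test case to the cases that must run first."""
    indegree = {c: len(pre) for c, pre in preconditions.items()}
    dependents = {c: [] for c in preconditions}
    for case, pre in preconditions.items():
        for p in pre:
            dependents[p].append(case)
    # Start from cases with no preconditions; sort for a deterministic order.
    queue = deque(sorted(c for c, d in indegree.items() if d == 0))
    order = []
    while queue:
        case = queue.popleft()
        order.append(case)
        for d in sorted(dependents[case]):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(preconditions):
        raise ValueError("cyclic pre-execution conditions")
    return order

print(execution_order(
    {"TC-1": [], "TC-2": ["TC-1"], "TC-3": ["TC-1"], "TC-4": ["TC-2", "TC-3"]}
))  # ['TC-1', 'TC-2', 'TC-3', 'TC-4']
```

The cycle check also gives the unit a natural place to report contradictory pre/post conditions instead of producing an unrunnable scenario.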
  • test scenario generation unit 110 generates a test scenario 307A based on the test case information 306A acquired/generated by the test case information acquisition/generation unit 109 (S206).
  • the test scenario execution unit 111 operates the test target device 2A according to the test scenario 307A to execute the test (S207).
  • the test scenario execution unit 111 operates the test target device 2A by transmitting an operation request signal to the test target device 2A.
  • the test target device 2A transmits response information indicating the result of the operation corresponding to the request signal to the test scenario execution unit 111.
  • the test scenario executing unit 111 associates the information indicated by the request signal with the information indicated by the response signal to generate the test result information 314, and outputs the test result information 314 to the test result comparing unit 112.
  • The test result comparison unit 112 generates the test result comparison information 315 based on the test result information 314 and displays it on the display unit 40 (S208).
  • the test result comparison unit 112 displays the information indicated by each request signal, the information indicated by each response signal, and each expected value of the test result comparison information 315 in association with each other on the display unit 40.
  • The sequence diagram generation unit 113 generates the sequence diagram 308A based on the test result information 314 (S209).
  • the sequence diagram generation unit 113 stores the sequence diagram 308A in the document DB 308.
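A sketch of turning the request/response log in the test result information into sequence diagram source; PlantUML is an assumed output format here, not one named in the publication:

```python
# Render a request/response log as PlantUML sequence-diagram source.
def to_plantuml(log, tester="TestApparatus", target="DeviceUnderTest"):
    """log: list of (request, response) pairs in execution order."""
    lines = ["@startuml"]
    for request, response in log:
        lines.append(f"{tester} -> {target} : {request}")
        lines.append(f"{target} --> {tester} : {response}")
    lines.append("@enduml")
    return "\n".join(lines)

diagram = to_plantuml([("POWER_ON", "ACK"), ("SET_TEMP 25", "TEMP_SET")])
print(diagram)
```

Because the log already records messages in time order, each request/response pair maps directly onto one solid and one dashed arrow of the diagram.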
  • the error prediction information 312 output by the error prediction model 32 includes information about the black box test in addition to the white box test. Therefore, the test apparatus 1 can execute the black box test in addition to the white box test.
  • the test apparatus 1 includes the processing unit 10.
  • the processing unit 10 generates a test scenario 307A for performing a test in the project from the information about the test target project.
  • The processing unit 10 includes a learning unit 104, an error prediction information acquisition unit 106, a risk evaluation unit 107, a test item generation unit 108, a test case information acquisition/generation unit 109, and a test scenario generation unit 110.
  • the learning unit 104 trains the error prediction model 32, which outputs error prediction information including information indicating the probability that an input test case detects an error.
  • based on at least the information about existing projects stored in the learning database 31 and the information about the plurality of test cases defining the individual processing in the test scenario 307A, the learning unit 104 makes the model learn whether each test case in the test of each project detects an error.
  • the error prediction information acquisition unit 106 inputs information about the project to be tested into the error prediction model 32, and acquires each error prediction information 312 corresponding to each test case.
  • the risk evaluation unit 107 calculates, from the acquired error prediction information 312 and the corresponding test case information 306A, a risk evaluation value indicating the likelihood that an error will be detected and the degree of influence of the error, and adds the risk evaluation value to the error prediction information.
  • the test item generation unit 108 generates test item information 313 indicating a test item to be executed based on the error prediction information 312A to which the risk evaluation value is added.
  • the test case information acquisition/generation unit 109 acquires the test case information 306A corresponding to the test item included in the test item information 313 from the test case database 306 included in the machine learning database 31.
  • the test scenario generation unit 110 generates a test scenario 307A based on the acquired test case information 306A.
  • the test apparatus 1 can improve the efficiency of the black box test in addition to the white box test. Furthermore, by using past test information, the test apparatus of the present invention can predict errors similar to errors that occurred in past tests and generate a test scenario accordingly.
  • the test item generation unit 108 generates test item information 313 for all test cases whose risk evaluation value is higher than a predetermined threshold value.
  • the test item generation unit 108 generates the test item information 313 for an important test case out of the test cases whose risk evaluation value is higher than a predetermined threshold value.
  • the test apparatus 1 can generate the test item information 313 according to a preset test strategy.
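A minimal sketch of the risk evaluation (S203) and the threshold-based test item selection described above, assuming for illustration that the risk evaluation value is the product of the detection probability and the error's degree of influence; all identifiers here are hypothetical, and the actual combination rule, threshold, and importance limit are design choices of the test strategy.

```python
from dataclasses import dataclass

@dataclass
class ErrorPrediction:
    test_case_id: str
    detect_probability: float  # probability that the test case detects an error (0..1)
    impact: float              # degree of influence of the error (0..1)

def risk_value(p: ErrorPrediction) -> float:
    # One simple combination: likelihood of detection times impact of the error.
    return p.detect_probability * p.impact

def select_test_items(predictions, threshold=0.2, limit=None):
    # Keep every test case whose risk evaluation value exceeds the threshold;
    # optionally keep only the most important ones (highest risk first).
    risky = [p for p in predictions if risk_value(p) > threshold]
    risky.sort(key=risk_value, reverse=True)
    return risky[:limit] if limit is not None else risky

preds = [
    ErrorPrediction("TC-001", 0.9, 0.8),
    ErrorPrediction("TC-002", 0.1, 0.5),
    ErrorPrediction("TC-003", 0.6, 0.6),
]
selected = select_test_items(preds, threshold=0.2)
print([p.test_case_id for p in selected])  # ['TC-001', 'TC-003']
```

Passing `limit=1` would model the strategy of generating test item information only for the most important high-risk cases.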
  • the test case information acquisition/generation unit 109 generates test case information when it is determined that the test case DB 306 does not have the test case information indicating the content of the test indicated by each test item in the test item information 313.
  • test apparatus 1 can newly generate test case information that does not exist in the test case DB 306.
  • the processor 10 further includes a test scenario execution unit 111 that executes a test of the target device according to the test scenario 307A.
  • the test apparatus 1 can execute the test of the test target apparatus according to the test scenario 307A.
  • the test apparatus 1 clarifies, from both the source code and the documents, the dependency relationships between test cases and the risks regarding the test cases to be executed in the target process, and can predict errors comprehensively, also taking into account the test results of programs from similar past projects.
  • (Embodiment 2) The test apparatus 1 according to the first embodiment generated the test result comparison information 315 and the like, and then ended its operation.
  • the test apparatus 1 according to the second embodiment trains the error prediction model 32 based on the test result comparison information 315 and executes the test again.
  • FIGS. 8 and 9 are diagrams showing the components of the test apparatus 1 of the present embodiment and the input/output data for each component.
  • FIG. 10 is a flowchart showing the scenario generation and execution operation of the test apparatus 1 according to this embodiment.
  • the processor 10 of the present embodiment includes a test result comparison information input unit 114 in addition to the functional configuration of the first embodiment.
  • the test result comparison information input unit 114 trains the error prediction model 32 based on the test result comparison information 315 from the test result comparison unit 112.
  • the processor 10 of the present embodiment executes step S200A instead of step S200 of FIG. 7 of the first embodiment. Further, the processor 10 of the present embodiment also executes the processing of steps S210 and S211.
  • the processor 10 sets a test strategy and a test end condition according to a user operation (S200A).
  • the test termination condition is, for example, the number of times the test is executed.
  • steps S201 to S209 are the same as those in FIG.
  • the processor 10 repeatedly executes the operations of steps S202 to S208 and S210 while the test end condition is not satisfied (NO in S210).
  • the test apparatus 1 according to the present embodiment automatically executes the test until the end condition preset by the user is satisfied. Therefore, the test apparatus 1 of the present embodiment can improve the efficiency of the test.
  • the processing unit 10 of the present embodiment includes the test result comparison information input unit 114 in addition to the functional configuration of the first embodiment.
  • the test result comparison information input unit 114 trains the error prediction model 32 based on the result of executing the test of the target device according to the test scenario 307A.
  • the test apparatus 1 automatically generates test cases based on the error prediction, executes the tests, and sequentially feeds each test result back into the next error prediction, so that tests can be performed while improving the accuracy of the error prediction.
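The feedback loop of this embodiment — execute the test, retrain the model on the comparison results, and repeat until the user-set end condition (here, a number of test executions) is met — might be outlined as below. The model and its accuracy update are purely illustrative toys, not the disclosed error prediction model 32.

```python
class ToyErrorModel:
    """Stand-in for the error prediction model; accuracy is a toy metric."""
    def __init__(self, accuracy=0.5):
        self.accuracy = accuracy

    def retrain(self, comparison):
        # Each feedback round nudges the (toy) prediction accuracy upward,
        # mimicking steps S210/S211 feeding results back into learning.
        return ToyErrorModel(min(1.0, self.accuracy + 0.1))

def run_tests_with_feedback(model, max_runs=3):
    results = []
    for run in range(max_runs):  # end condition: number of test executions
        # Stand-in for one scenario generation + execution + comparison cycle.
        comparison = {"run": run, "accuracy": model.accuracy}
        results.append(comparison)
        model = model.retrain(comparison)
    return model, results

final_model, results = run_tests_with_feedback(ToyErrorModel(0.5), max_runs=3)
print(round(final_model.accuracy, 1))  # 0.8 after three feedback rounds
```

The same loop structure accommodates other end conditions (elapsed time, a target residual-error estimate) by replacing the `range` check.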
  • (Embodiment 3) The test apparatus according to the first or second embodiment can automatically perform tests using the error prediction model. However, that test apparatus cannot automatically present an optimum countermeasure plan, based on risk analysis, from the error information that is predicted to remain after the tests are performed.
  • the development support apparatus of the present embodiment automatically generates, as a report, an optimum countermeasure plan based on risk analysis from the error information predicted to remain, so that everything from automatic test execution to project diagnosis after test completion is carried out without interruption.
  • FIG. 11 is a block diagram showing the configuration of the project diagnosis function device 700 according to the third embodiment.
  • the project diagnosis function device 700 performs analysis based on the test result data, acquired from the test apparatus 1 according to the first and second embodiments, regarding operations based on the communication protocol between the test target device 2A and the related device 2B.
  • the project here refers to a project for product (device) development, system development, or the like that involves software design.
  • the project diagnosis function device 700 is a device that can be attached to the test apparatus 1, which includes the processor 10, the main storage unit 20, the auxiliary storage unit 30, the display unit 40, the operation unit 50, and the communication unit 60.
  • the test apparatus 1 is, for example, a personal computer.
  • the processor 10 is connected to other hardware via a signal line.
  • the processor 10 can be realized by a CPU (Central Processing Unit), an MPU, a DSP, a GPU, a microcomputer, an FPGA, an ASIC, or the like.
  • the processor 10 realizes various functions by reading an OS (Operating System), application programs, and various data stored in an auxiliary storage unit 30 described later and executing arithmetic processing.
  • the processor 10 includes the functional configuration described below.
  • the functional configuration may be implemented by firmware.
  • the processor 10 is an example of the processing unit 10.
  • the hardware in which the processor 10 and the main storage unit 20 and the auxiliary storage unit 30 described later are integrated is also referred to as a "processing circuit".
  • the main storage unit 20 is a volatile storage unit.
  • the main storage unit 20 can be realized by a RAM (Random Access Memory) or the like.
  • the main storage unit 20 temporarily stores the data used, generated, input/output, or transmitted/received in the test apparatus 1.
  • the auxiliary storage unit 30 is a non-volatile storage unit.
  • the auxiliary storage unit 30 can be realized by a ROM (Read Only Memory), a HDD (Hard Disk Drive), a flash memory, or the like.
  • the auxiliary storage unit 30 stores the OS, application programs, and various data. At least a part of the OS is loaded into the main storage unit 20 and executed by the processor 10.
  • the auxiliary storage unit 30 further stores a machine learning DB 620, and an error prediction model 32 and a project diagnosis model 800 generated based on the machine learning DB 620.
  • the error prediction model 32 and the project diagnosis model 800 are each an example of a machine learning model.
  • the error prediction model 32 has been described in detail in the first and second embodiments.
  • the project diagnosis model 800 is realized by, for example, a deep neural network (deep learning device).
  • the machine learning DB 620 may be stored in, for example, an external server and accessed from the project diagnosis function device 700 via an external network and the interface unit of the project diagnosis function device 700. In that case, the project diagnosis function device 700 and the machine learning DB 620 may be configured as a project diagnosis function system.
  • when input data including residual error information about the test target project is input, the project diagnosis model 800 outputs risk and countermeasure plan information 550, which is information on the risks that can occur in the test target project and the countermeasures against them.
  • the project diagnosis model 800 is generated and updated by the information stored in the machine learning DB 620. Details of the machine learning DB 620 will be described later.
  • the display unit 40 displays character strings and images according to user operations.
  • the display unit 40 is composed of a liquid crystal display, an organic EL display, or the like.
  • the operation unit 50 is composed of a keyboard, a mouse, a numeric keypad, and the like.
  • the user operates the project diagnosis function device 700 via the operation unit 50.
  • the operation unit 50 may include a touch panel that is arranged so as to be superimposed on the display unit 40 and that can accept a touch operation by the user.
  • the communication unit 60 transmits/receives various data to/from the device under test 2A.
  • the communication unit 60 includes a receiver and a transmitter.
  • the receiver receives various data from the device under test 2A.
  • the transmitter transmits various data from the processor 10 to the device under test 2A.
  • the communication unit 60 can be realized by a communication chip, a NIC (Network Interface Card), or the like.
  • FIG. 12 is a diagram showing information stored in the machine learning DB 620 of the auxiliary storage unit 30.
  • the machine learning DB 620 includes a test record DB 301, a post-shipment defect DB 302, a test technology DB 303, a domain knowledge DB 304, and a project DB 305.
  • the machine learning DB 620 further includes a test case DB 306, a test scenario DB 307, a document DB 308, a structured DB 309, a supervised DB 310, a second supervised DB 440, an unsupervised DB 311, and a second unsupervised DB 450.
  • the information stored in the machine learning DB 620 is an example of information about an existing project in which test data and residual error prediction information already exist.
  • the machine learning DB 620 may be stored in, for example, an external server and accessed from the project diagnosis function device 700 via an external network and the interface unit of the project diagnosis function device 700. In that case, the project diagnosis function device 700 and the machine learning DB 620 may be configured as a project diagnosis function system.
  • the supervised DB 310 stores supervised data 310A used for machine learning of the error prediction model 32.
  • the supervised data 310A is generated based on the information in the structured DB 309 by the supervised data generation unit 102 described in the first embodiment.
  • the supervised data 310A is, for example, data in which the test case number and the function name corresponding to the test case are given a correct answer label indicating whether an error was detected in each test case and whether the error leaked as a defect into a subsequent process or after shipping.
  • the unsupervised DB 311 stores unsupervised data 311A used for machine learning of the error prediction model 32.
  • the unsupervised data 311A is generated by the unstructured data analysis unit 103 described in the first embodiment based on the document information 308A (for example, required specifications).
  • the unsupervised data 311A includes, for example, a test case number and a function name corresponding to the test case.
  • the unsupervised data 311A does not include a correct answer label indicating whether an error was detected in each test case or whether the error leaked as a defect into a subsequent process or after shipping.
  • the second supervised DB 440 stores second supervised data 440A used for machine learning of the project diagnosis model 800.
  • the second supervised data 440A is generated by the second supervised data creation unit 52, which will be described later, based on the information in the structured DB 309.
  • the second supervised data 440A is, for example, data with a correct answer label indicating what kind of risk was detected in the residual error prediction information (number of cases, classification, occurrence location, etc.) based on the system test results, what countermeasure was implemented against that risk, and whether the countermeasure plan was effective.
  • the second unsupervised DB 450 stores second unsupervised data 450A used for machine learning of the project diagnosis model 800.
  • the second unsupervised data 450A is generated by the second unstructured data analysis unit 55, which will be described later, based on the document information 308A (for example, required specifications or source code).
  • the second unsupervised data 450A includes, for example, the risks corresponding to the residual error prediction information (number of cases, classification, occurrence location, etc.) and the countermeasures against those risks.
  • the second unsupervised data 450A does not include a correct answer label indicating the risk corresponding to the residual error prediction information (number of cases, classification, occurrence location, etc.) or whether the risk countermeasures were effective.
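The difference between the second supervised data 440A and the second unsupervised data 450A can be illustrated with hypothetical records: both carry the residual-error features, but only the supervised record carries the correct-answer label on countermeasure effectiveness. Field names below are assumptions for illustration, not the actual schema.

```python
# A labeled record for the second supervised DB 440: residual-error features
# plus a correct-answer label on whether the countermeasure was effective.
supervised_record = {
    "residual_errors": {"count": 12, "classification": "interface", "location": "module_X"},
    "risk": "schedule delay",
    "countermeasure": "add integration review",
    "label_effective": True,   # correct-answer label
}

# The matching record for the second unsupervised DB 450 carries the same
# features but no effectiveness label.
unsupervised_record = {
    "residual_errors": {"count": 12, "classification": "interface", "location": "module_X"},
    "risk": "schedule delay",
    "countermeasure": "add integration review",
}

print("label_effective" in supervised_record,
      "label_effective" in unsupervised_record)  # True False
```

The semi-supervised learning described below uses the labeled records to initialize the model and the unlabeled records to extend it.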
  • the error prediction model 32, which predicts the error information (test case, presence or absence of error detection, presence or absence of error outflow to subsequent processes) expected in the processes after the system test is performed, is constructed, and a test is carried out.
  • after the test execution, the project diagnosis model 800 is created by the project diagnosis model learning unit 790 from the residual error prediction information (number of cases, classification, occurrence location, etc.) based on the system test results.
  • FIG. 13 is a block diagram showing a functional configuration of the processor 10 and input/output data for each functional configuration, and a configuration example of a project diagnostic model learning unit 790 for constructing the project diagnostic model 800.
  • the structured data integration unit 51 integrates the data from the test result DB 301 of a plurality of existing projects, the post-shipment defect DB 302, the test technology DB 303, the domain knowledge DB 304, the project DB 305, the residual error/risk/effect information DB 400, the risk register DB 410, the design technology DB 420, and the traceability information DB 430 into one set of structured data.
  • the second supervised data creation unit 52, which creates the initial teacher data, creates the second supervised data 440A, i.e. the initial teacher data, from data with correct answer labels indicating what kind of risk was detected in the residual error prediction information (number of cases, classification, occurrence location, etc.), what countermeasures were implemented against those risks, and whether the countermeasure plans were effective.
  • the unstructured data integration unit 54 integrates the document information 308A, which is unstructured data related to software products (such as requirement specifications, source code, test documents, sequence diagrams, state transition diagrams, specification change information, and static/dynamic analysis results).
  • the second unstructured data analysis unit 55 analyzes the document information 308A, which is the integrated unstructured data, analyzes its relevance to the second supervised data 440A, which is the initial teacher data, and generates the second unsupervised data 450A.
  • the characteristic element extraction unit 56 extracts, from the structured data and the unstructured data, characteristic elements (for example, coding personnel, diversion rate, and requirement change rate) for outputting related risks and effective countermeasure plans from the residual error prediction information (number of cases, classification, occurrence location, etc.).
  • the unstructured data conversion unit 57 converts the analysis result of the unstructured data and the extracted feature element into structured data having regularity (for example, XML file).
  • the supervised data input unit 53a inputs the second supervised data 440A into the learning algorithm for the project diagnosis model 800.
  • the verification evaluation criterion extraction unit 58 extracts or verifies the verification criteria for evaluating (feeding back) to what extent related risks and effective countermeasure plans are proposed for the residual error prediction information (number of cases, classification, occurrence location, etc.).
  • the model creating unit 59 builds a project diagnosis model 800 based on the second supervised data 440A input to the learning algorithm.
  • the unsupervised data input unit 53b inputs into the project diagnosis model 800 the second unsupervised data 450A, for which it has not been possible to evaluate to what extent related risks and effective countermeasures were obtained for the residual error prediction information (number of cases, classification, occurrence location, etc.).
  • the model recreating unit 61 reconstructs the error prediction model 32 based on the input second unsupervised data 450A.
  • the verification evaluation output unit 62 outputs, for the residual error prediction information (number of cases, classification, occurrence location, etc.), an output for actually evaluating (feeding back) the degree of the related risks and the effective countermeasure plans.
  • various types of information data are required for the construction and learning of the project diagnosis model 800. The following is an example of the information data that may be needed to build and train the project diagnosis model 800.
  • the project diagnosis model 800 is constructed using a semi-supervised learning algorithm.
  • the data from the test result DB 301 storing the execution results of existing projects, the post-shipment defect DB 302, the test technology DB 303, the domain knowledge DB 304, the residual error/risk/effect information DB 400, the risk register DB 410, the design technology DB 420, and the traceability information DB 430 is integrated, and the correspondence between the risk data associated with the residual error prediction information (number of cases, classification, occurrence location, etc.) and the effective countermeasure plan data is arranged (S501). From this arranged data, the second supervised data 440A is created (S502).
  • using the second supervised data 440A, which is the supervised data for initial learning, as search keywords, text analysis is performed on unstructured data such as design documents, test specifications, source code, specification change information, and static/dynamic analysis results (S504).
  • the similarity to the second supervised data 440A, which is the supervised data for initial learning, is analyzed, and corresponding features are extracted, such as where the content of the residual error is described in the specifications, whether there is a possibility that the specifications will be changed, and the correlation between models that actually required countermeasures (S505).
  • the structured data and the unstructured data are mapped using the traceability information so that the extracted feature quantities are consistent (S506).
  • a project diagnosis model 800 that predicts the occurrence of related risks and countermeasures for them is generated based on the residual error information (S509).
  • residual error information (predicted number of cases, function and source location where residual errors are predicted to occur, residual error type) and unsupervised data for which the risk for the residual error information and the effect of risk countermeasures are unknown (the second unsupervised data 450A) are input (S510).
  • by extracting the feature amounts of the input unsupervised data (the second unsupervised data 450A), including the information from the analysis results of the unstructured data, using a learning algorithm for unsupervised data, a project diagnosis model 800 is generated that predicts the residual error information (predicted number of cases, function and source location where residual errors are predicted to occur, residual error type), the risk for the residual error information, and the risk countermeasures (S511).
  • from data such as test specifications, test results, and defect information in which the residual error information (predicted number of cases, function and source location where residual errors are predicted to occur, residual error type), the risk for the residual error information, and the effect of risk countermeasures are not recorded, the regularity of the risks and of the effect of the risk countermeasures is predicted and evaluated, and the error prediction model 32 is reconstructed (S512).
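Steps S510 and S511 describe a semi-supervised scheme: a model initialized on labeled data is extended with unlabeled records. One common realization is self-training with pseudo-labels, sketched below using a nearest-neighbour assignment. The feature choice (residual error count, requirement change rate) and the distance function are assumptions for illustration only, not the disclosed algorithm.

```python
def semi_supervised_fit(labeled, unlabeled, distance):
    """Minimal self-training loop: start from labeled examples, assign each
    unlabeled record the answer of its nearest labeled neighbour (a
    pseudo-label), then treat it as additional training data."""
    training = list(labeled)
    for features in unlabeled:
        nearest = min(training, key=lambda ex: distance(ex[0], features))
        training.append((features, nearest[1]))  # pseudo-label, cf. S510-S511
    return training

# Hypothetical feature vectors: (residual error count, requirement change rate).
labeled = [((12, 0.4), "add design review"), ((2, 0.05), "no action")]
unlabeled = [(10, 0.35), (1, 0.1)]

def dist(a, b):
    # Simple Manhattan distance over the two illustrative features.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

model = semi_supervised_fit(labeled, unlabeled, dist)
print([label for (_, label) in model[2:]])  # pseudo-labels for the unlabeled records
```

The high-residual-error unlabeled record inherits "add design review" from its nearest labeled neighbour, and the low-error one inherits "no action" — the behaviour a countermeasure-recommending project diagnosis model would need.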
  • the development support device is a development support device including the test device 1 according to the first or second embodiment.
  • the development support device includes a project diagnosis model 800 that presents related risks and countermeasures based on the residual error prediction information remaining at the time of completion of the test.
  • the development support device includes a project diagnosis model learning unit 790 that learns the project diagnosis model 800 based on the residual error prediction information remaining when the test is completed.
  • the project diagnosis model learning unit 790 further extracts the corresponding characteristic elements that required countermeasures, based on the results of text analysis of unstructured data, on analysis of the similarity between the unstructured data and supervised data in which the risks and countermeasures for residual errors are clear, and on the use of traceability information. Further, the project diagnosis model learning unit 790 includes an unstructured data conversion unit 57 that converts the text analysis results of the unstructured data and the extracted characteristic elements into structured data having regularity.
  • the development support device can efficiently present the optimal countermeasure plan according to the risk of each project diagnosis after the completion of the automatic test execution.
  • the development support apparatus operates the project diagnosis model 800 constructed in the third embodiment after completion of the test according to the first or second embodiment, in which the device under test and the test apparatus are connected to each other; it automatically extracts related risks and effective countermeasures against them based on the residual error prediction information (number of cases, classification, occurrence location) latent in the test object, and presents a report.
  • FIG. 15 is a block diagram showing a configuration example of the test apparatus 1 and the project diagnosis function apparatus 700 according to the present embodiment, that is, the development support apparatus.
  • the development support apparatus according to the present embodiment automatically performs risk analysis based on the residual error prediction information (number of cases, classification, occurrence location) after completion of test execution for a device under test that operates based on communication between devices, and presents a countermeasure plan. That is, after the test is completed, the development support apparatus inputs the project, function, and module to be diagnosed through the risk priority input unit 510 and sets the risk (QCDRS) priority level.
  • the test result data storage unit 520 loads the test result information of the project for which the test is completed. From the loaded test result information, the residual error prediction information acquisition unit 530 extracts residual error prediction information (number of cases, classification, occurrence location) 540 at the time when the test is completed.
  • the project diagnosis model 800 takes as input the residual error prediction information (number of cases, classification, occurrence location) at the time the test is completed, and outputs the risks related to the residual error prediction information and their countermeasure plan information 550.
  • the risk evaluation unit 560 evaluates the extracted risk based on the degree of influence and the probability of occurrence.
  • the priority adjustment unit 570 adjusts the priority of the extracted risks by reflecting the QCDRS priority level set via the risk priority input unit 510.
  • the countermeasure item selection unit 580 selects a countermeasure plan based on the adjusted risk.
  • the report creation unit 590 performs risk analysis based on the residual error prediction information (number of cases, classification, occurrence location) and creates countermeasure plans in the form of reports.
  • the residual error/risk information display unit 600 displays the report created by the report creation unit 590 on the screen of the display unit 40.
  • the residual error prediction information (number of cases, classification, occurrence location) is displayed as a graph of prediction information based on the error occurrence record for each development process.
  • the countermeasure plan report 610 is registered in the residual error/risk/effect information DB 400 together with the residual error and risk.
  • the risk countermeasure effect judgment input unit 615 feeds the degree of effect resulting from applying the risk countermeasure plan back into the residual error/risk/effect information DB 400.
  • the project diagnosis model 800 is updated through the project diagnosis model learning unit 790 based on the fed-back degree of effect resulting from applying the risk countermeasure plan.
  • the project diagnosis function device 700 extracts risks and countermeasures from the residual error information after the test is performed, and automatically presents the results in accordance with the priority that the user has assigned to the risks. Therefore, countermeasures against residual errors can be implemented efficiently.
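The chain from risk evaluation (S711) through QCDRS-weighted priority adjustment (S712 to S714) and priority sorting (S719) could look like the sketch below. The probability-times-impact scoring and the category weights are assumptions for illustration; the embodiment does not specify the exact formulas.

```python
def evaluate_risks(risks, qcdrs_weights):
    """Score each extracted risk by occurrence probability x impact (S711),
    adjust by the user's QCDRS priority weight (S712-S714), and sort by
    the resulting priority (S719)."""
    scored = []
    for r in risks:
        base = r["probability"] * r["impact"]
        weight = qcdrs_weights.get(r["category"], 1.0)
        scored.append({**r, "score": base * weight})
    scored.sort(key=lambda r: r["score"], reverse=True)
    return scored

# Hypothetical risks extracted from residual error prediction information.
risks = [
    {"name": "late delivery", "category": "D", "probability": 0.6, "impact": 0.5,
     "countermeasure": "re-plan milestones"},
    {"name": "defect outflow", "category": "Q", "probability": 0.4, "impact": 0.9,
     "countermeasure": "add regression tests"},
]
# The user emphasises quality (Q) over delivery (D).
ranked = evaluate_risks(risks, {"Q": 2.0, "D": 1.0})
print(ranked[0]["name"])  # defect outflow
```

With equal weights, "late delivery" would rank first (0.30 vs 0.36 is close); doubling the quality weight moves "defect outflow" to the top, which is the effect the priority adjustment unit 570 is described as producing.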
  • the test results of the exploratory automatic test of the project based on the first or second embodiment are loaded into the test result data storage unit 520 and saved in the test result DB 301 (S701).
  • Residual error prediction information 540 is acquired from the application result of the error prediction model 32 based on the test apparatus 1 according to the first or second embodiment (S702).
  • the number, type, and occurrence location of the residual error prediction information 540 are displayed in a graph (S703).
  • the project diagnosis model learning unit 790 integrates the following project data (S704).
  • the integrated project data includes the residual error/risk/effect information DB 400, the test result DB 301, the project DB 305, the post-shipment defect DB 302, the risk register DB 410, the design technology DB 420, and the traceability information DB 430.
  • the project diagnosis model learning unit 790 creates input data to the project diagnosis model 800 from the integrated data generated in step S704 (S705).
  • the project diagnosis model learning unit 790 performs text analysis including unstructured data such as design documents, test specifications, sources, and specification change information of the project (S706).
  • the characteristics of the residual errors of the project are analyzed, and information such as whether the error is described in the specifications, whether there is a possibility of being affected by a specification change, and the correlation between models is output as structured data such as an XML file (S707).
  • the data of the residual error prediction information 540 based on the test result of the project is input to the project diagnosis model 800 (S708).
  • the related risk and its countermeasure plan information 550 are extracted from the error information predicted to remain in the project from the project diagnosis model 800 (S709).
  • parameters such as QCDRS are input in the risk priority input unit 510 (S710).
  • the risk evaluation unit 560 analyzes the probability of occurrence and the degree of impact of the predicted risk and evaluates the risk (S711).
  • the priority adjustment unit 570 determines whether it is necessary to adjust the priority (S712).
  • the priority adjustment unit 570 determines whether or not it is necessary to limit the target (S713).
  • the priority adjusting unit 570 limits and extracts error types and locations (S714).
  • the priority adjustment unit 570 determines whether the target risk is known or unknown (S715).
  • the priority adjusting unit 570 loads the contingency plan from the risk register DB 410 and sets it as a countermeasure (S716).
  • the priority adjustment unit 570 determines whether the target risk is unknown (S717).
  • the priority adjustment unit 570 loads a countermeasure plan assumed from the risk of the residual error information predicted by the project diagnosis model 800 (S718).
  • the priority adjustment unit 570 sorts the risks of the predicted residual errors and their countermeasures by priority, and the countermeasure item selection unit 580 selects the countermeasure items (S719).
  • the report creation unit 590 creates a report on risks and countermeasures from the viewpoint of QCDRS (S720).
  • the risk countermeasure effect determination input unit 615 inputs the determination of the risk countermeasure effect (S721).
  • the update information of the test result DB 301 (executed test items, their residual error prediction information, risks, their countermeasures, and degree of effect) is fed back to the project diagnosis model 800 (S722).
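The known/unknown branch of steps S715 to S718 — take a contingency plan from the risk register DB 410 for a known risk, otherwise fall back on the plan proposed by the project diagnosis model 800 — can be sketched as follows; the function name and the register/suggestion contents are hypothetical.

```python
def select_countermeasure(risk, risk_register, model_suggestions):
    # S715-S716: a known risk takes its contingency plan from the risk register.
    if risk in risk_register:
        return risk_register[risk]
    # S717-S718: an unknown risk falls back on the plan proposed by the model.
    return model_suggestions.get(risk, "escalate for manual analysis")

# Hypothetical contents of the risk register DB 410 and the model's proposals.
register = {"defect outflow": "run full regression suite"}
suggestions = {"novel timing fault": "add stress scenarios around the suspect module"}

print(select_countermeasure("defect outflow", register, suggestions))
print(select_countermeasure("novel timing fault", register, suggestions))
```

A risk found in neither source falls through to a default, which corresponds to leaving the countermeasure to manual review rather than inventing one.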
  • the development support apparatus has a known risk that the project diagnosis model 800 outputs a risk due to an error that is expected to remain. It includes a priority adjustment unit 570 that determines the priority after determining whether it is unknown and presents a countermeasure item.
• a risk countermeasure effect determination input unit 615 is included for inputting the determination of the risk countermeasure effect and whether or not it should be learned.
  • the development support device can efficiently present the optimum countermeasure plan according to the risk priority of each project diagnosis after the completion of the automatic test execution.
  • the embodiment has been described as an example of the technique of the present invention.
• the technique of the present invention is not limited to this, and is also applicable to embodiments in which changes, replacements, additions, omissions, and the like are made as appropriate. It is also possible to combine the constituent elements described in the above embodiment to form a new embodiment.
  • the project of the present invention refers to a product (device) or system development project involving software development, but is not limited to this.
  • the project may be a predetermined function of a product (apparatus) accompanied by software development, or a module development project.
• the risk evaluation unit 107 calculates the risk evaluation value for each test case number of the error prediction information 312 by adding the probability that an error is detected to the degree of impact of the error, but the calculation is not limited to this (S203 in FIG. 7).
• the test apparatus 1 may store repair history information for errors, and the risk evaluation unit 107 may assign a higher risk evaluation value to a test case having repair history information than to a test case without it.
  • the test item generation unit 108 selects a test case to be executed according to a user operation, but is not limited to this (S204 in FIG. 7).
  • the test item generation unit 108 may select a test case having a predetermined risk evaluation value or more as a test case to be automatically executed.
  • the test item generation unit 108 may select a test case corresponding to a predetermined test viewpoint or an important test item as a test case to be automatically executed.
  • the predetermined risk evaluation value, test viewpoint, and important test item are set in advance by a user operation.
  • the test item generation unit 108 changes the order in which the test cases are executed according to the user operation, but is not limited to this (S204 in FIG. 7).
  • the test item generation unit 108 may change the order in which the test cases are executed in order from the highest risk evaluation value.
• the learning unit 104 trains the error prediction model 32 by the bootstrap method, but the training method is not limited to this.
  • the learning unit 104 may train the error prediction model 32 by another known method, for example, deep learning.
  • the project diagnosis model 800 according to the third and fourth embodiments can be realized by a deep neural network (deep learning device), but may be realized by a technique related to other artificial intelligence.
• the processor 10 may further include a test item adjustment unit that, according to a user operation, changes the test items generated by the test item generation unit 108 or changes the order of the test items.
• for example, the user can change the test item information 313 via the operation unit 50 so as to cover the test items for modules and functions of high importance, or change the order of the test items so as to prioritize tests of modules and functions common to existing projects.
• Reference signs: 1 test device; 10 processor; 20 main storage unit; 30 auxiliary storage unit; 31 machine learning DB; 32 error prediction model; 40 display unit; 50 operation unit; 60 communication unit; 101 data structuring unit; 102 supervised data generation unit; 103 unstructured data analysis unit; 104 learning unit; 105 input data generation unit; 106 error prediction information acquisition unit; 107 risk evaluation unit; 108 test item generation unit; 109 test case information acquisition/generation unit; 110 test scenario generation unit; 111 test scenario execution unit; 112 test result comparison unit; 113 sequence diagram generation unit; 114 test result comparison information input unit; 301 test result information DB; 302 post-shipment defect information DB; 303 test technology DB; 304 domain knowledge information DB; 305 project information DB; 306 test case DB; 307 test scenario DB; 308 document DB; 309 structured DB; 310 supervised DB; 311 unsupervised DB; 400 residual error/risk/effect information DB; 410 risk register DB; 420 design technology DB; 430 traceability information DB; 620 machine learning DB; 700 project diagnosis function device; 790 project diagnosis model learning unit; 800 project diagnosis model; 51 structured data integration unit
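The countermeasure-selection flow of steps S715 to S719 above (branch on known vs. unknown risk, load a plan from the risk register or from the diagnosis model, then sort by priority) can be sketched as follows. The record fields, the register contents, and the fallback plan generator are illustrative assumptions, not details taken from the embodiment.

```python
# Sketch of S715-S719: for each predicted residual risk, pick a
# countermeasure from the risk register if the risk is known (S715, S716),
# otherwise fall back to a plan derived from the diagnosis model (S717,
# S718), then sort risks by priority for report creation (S719).
# All names and sample data below are hypothetical.

risk_register = {  # stands in for risk register DB 410
    "timeout": "extend watchdog margin and retest",
}

def model_countermeasure(risk_type):
    # stands in for a plan inferred via project diagnosis model 800
    return f"review design around '{risk_type}' manually"

def plan_countermeasures(predicted_risks):
    plans = []
    for risk in predicted_risks:
        if risk["type"] in risk_register:            # S715: known risk
            action = risk_register[risk["type"]]     # S716: contingency plan
        else:                                        # S717: unknown risk
            action = model_countermeasure(risk["type"])  # S718: assumed plan
        plans.append({**risk, "action": action})
    # S719: sort by priority, highest first
    return sorted(plans, key=lambda r: r["priority"], reverse=True)

risks = [
    {"type": "timeout", "priority": 2},
    {"type": "race condition", "priority": 5},
]
for p in plan_countermeasures(risks):
    print(p["type"], "->", p["action"])
```

The sorted list would then feed the countermeasure item selection unit 580 and report creation unit 590 in the flow above.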

Abstract

Provided is a test device which creates a test scenario by predicting errors similar to errors that have occurred in previous tests. The test device is provided with a processing unit. The processing unit creates a test scenario for performing a test on a project to be tested, from information relating to the project. Further, the test device communicates with a test case database storing a plurality of sets of test case information prescribing each process in the test scenario.

Description

Test device and development support device

 The present invention relates to a test device for testing a target device, and to a development support device.

 A conventional test apparatus evaluates the complexity of the source code of a program and the resources required, and searches for the optimum combination of tests that improves the quality of the software as a whole, thereby making testing of the target apparatus more efficient (for example, Patent Document 1).

JP 2004-220269 A
 A conventional test apparatus can execute tests based on source code, that is, white box tests, but it does not perform tests based on information related to specifications from upstream processes, such as requirement specifications, and therefore cannot make black box tests more efficient.
 Furthermore, a conventional test apparatus cannot, after a test has been performed, automatically present optimum countermeasures based on a risk analysis prioritized from the error information predicted to remain.
 The present invention has been made to solve the above problems, and provides a test apparatus that makes black box tests more efficient in addition to white box tests.
 Furthermore, the present invention provides a development support device that automatically generates, as a report, optimum countermeasure plans based on a risk analysis according to priority from the error information predicted to remain, and that operates without interruption from automatic test execution through to project diagnosis after test completion.
 A test apparatus according to one aspect of the present invention includes a processing unit. The processing unit generates, from information about a project to be tested, a test scenario for performing a test in the project. The processing unit includes a learning unit, an error prediction information acquisition unit, a risk evaluation unit, a test item generation unit, a test case information acquisition/generation unit, and a test scenario generation unit. The learning unit trains an error prediction model, which outputs error prediction information including information indicating the likelihood that an input test case detects an error, on whether or not each test case detected an error in the test of each project, based on at least information about existing projects and information about a plurality of test cases defining the individual processes in a test scenario, both stored in a machine learning database that stores various types of information used in machine learning. The error prediction information acquisition unit inputs the information about the project to be tested into the error prediction model and acquires the error prediction information corresponding to each test case.
 The risk evaluation unit calculates a risk evaluation value indicating the likelihood that an error is detected by the test case corresponding to the acquired error prediction information and the degree of impact of that error, and adds the risk evaluation value to the error prediction information. The test item generation unit generates test item information indicating the test items to be executed, based on the error prediction information to which the risk evaluation value has been added. The test case information acquisition/generation unit acquires, from a test case database included in the machine learning database, the test case information corresponding to the test items included in the test item information. The test scenario generation unit generates the test scenario based on the acquired test case information.
 The test apparatus of the present invention can make black box tests more efficient in addition to white box tests.
 Furthermore, the development support device of the present invention can, after automatic test execution is complete, efficiently present the optimum countermeasure plans according to the risk priority of each project diagnosis.
Brief Description of Drawings

FIG. 1: Block diagram showing the configuration of the test apparatus according to Embodiment 1
FIG. 2: Diagram showing the information stored in the machine learning database of the storage unit of the test apparatus
FIG. 3: Diagram showing the components of the test apparatus and the input/output data for each component
FIG. 4: Diagram showing the components of the test apparatus and the input/output data for each component
FIG. 5: Diagram showing the components of the test apparatus and the input/output data for each component
FIG. 6: Flowchart showing the error prediction model generation operation of the test apparatus
FIG. 7: Flowchart showing the scenario generation and execution operation of the test apparatus
FIG. 8: Diagram showing the components of the test apparatus of Embodiment 2 and the input/output data for each component
FIG. 9: Diagram showing the components of the test apparatus and the input/output data for each component
FIG. 10: Flowchart showing the scenario generation and execution operation of the test apparatus
FIG. 11: Diagram showing the components of the development support apparatus and the input/output data for each component
FIG. 12: Diagram showing the information stored in the machine learning database of the storage unit of the development support apparatus
FIG. 13: Block diagram showing the configuration of the development support apparatus according to Embodiment 3
FIG. 14: Flowchart showing the project diagnosis model generation operation of the development support apparatus
FIG. 15: Block diagram showing the configuration of the development support apparatus according to Embodiment 4
FIG. 16: Flowchart showing the operation of risk analysis using the project diagnosis model of the development support apparatus
FIG. 17: Flowchart showing the operation of risk analysis using the project diagnosis model of the development support apparatus
 Each of the embodiments described below, and their modifications, has a characteristic configuration.
 A characteristic configuration or operation in one embodiment can also be applied to the other embodiments, and the present invention is not limited to the embodiments exemplified below.
Embodiment 1.

1. Configuration

 FIG. 1 is a block diagram showing the configuration of the test apparatus 1 according to the first embodiment. The test apparatus 1 executes tests of the operation, based on a communication protocol, between the test target apparatus 2A and a related apparatus 2B. The test apparatus 1 generates a test scenario based on information about the project to be tested, and executes the test according to the test scenario. A project here refers to a project for product (device) development, system development, or the like that involves software design.
 The test apparatus 1 includes a processor 10, a main storage unit 20, an auxiliary storage unit 30, a display unit 40, an operation unit 50, and a communication unit 60. The test apparatus 1 is, for example, a personal computer.
 The processor 10 is connected to the other hardware via signal lines. The processor 10 can be realized by a central processing unit (CPU), an MPU, a DSP, a GPU, a microcomputer, an FPGA, an ASIC, or the like. The processor 10 realizes various functions by reading the OS (Operating System), application programs, and various data stored in the auxiliary storage unit 30 described later and executing arithmetic processing. The processor 10 includes the functional configuration described below; this functional configuration may also be implemented by firmware. The processor 10 is an example of a processing unit. The hardware integrating the processor 10 with the main storage unit 20 and the auxiliary storage unit 30 described later is also referred to as "processing circuitry".
 The main storage unit 20 is a volatile storage unit, which can be realized by a RAM (Random Access Memory) or the like. The main storage unit 20 temporarily stores data that is used, generated, input/output, or transmitted/received in the test apparatus 1.
 The auxiliary storage unit 30 is a non-volatile storage unit, which can be realized by a ROM (Read Only Memory), an HDD (Hard Disk Drive), a flash memory, or the like. The auxiliary storage unit 30 stores the OS, application programs, and various data. At least a part of the OS is loaded into the main storage unit 20 and executed by the processor 10.
 The auxiliary storage unit 30 further stores a machine learning database 31 and an error prediction model 32 generated based on the machine learning database 31. Hereinafter, "database" is abbreviated as "DB". The error prediction model 32 is a concrete instance of a machine learning model.
 Note that the machine learning DB 31 may instead be stored in, for example, an external server; in that case, the test apparatus 1 and the machine learning DB 31 are configured as a test system such that the test apparatus 1 accesses the machine learning DB 31 via an external network and an interface unit of the test apparatus 1.
 When input data 316 including information about the project to be tested is input, the error prediction model 32 outputs error prediction information 312, which is information about errors that can occur in the project to be tested. The error prediction model 32 is generated and updated from the information stored in the machine learning DB 31. Details of the machine learning DB 31 will be described later.
 The display unit 40 displays character strings and images according to user operations. The display unit 40 is composed of a liquid crystal display, an organic EL display, or the like.
 The operation unit 50 is composed of a keyboard, a mouse, a numeric keypad, and the like. The user operates the test apparatus 1 via the operation unit 50. The operation unit 50 may also include a touch panel, superimposed on the display unit 40, that can accept touch operations by the user.
 The communication unit 60 transmits and receives various data to and from the device under test 2A. The communication unit 60 includes a receiver, which receives various data from the device under test 2A, and a transmitter, which transmits various data from the processor 10 to the device under test 2A. The communication unit 60 can be realized by a communication chip, an NIC (Network Interface Card), or the like.
 FIG. 2 is a diagram showing the information stored in the machine learning DB 31 of the auxiliary storage unit 30. The machine learning DB 31 is a database that stores various information used in machine learning. Specifically, the machine learning DB 31 includes a test result DB 301, a post-shipment defect DB 302, a test technology DB 303, a domain knowledge DB 304, and a project DB 305. The machine learning DB 31 further includes a test case DB 306, a test scenario DB 307, a document DB 308, a structured DB 309, a supervised DB 310, and an unsupervised DB 311. The information stored in the machine learning DB 31 is an example of information about existing projects for which test data already exists.
 As described above, the machine learning DB 31 may instead be stored in, for example, an external server, with the test apparatus 1 and the machine learning DB 31 configured as a test system such that the test apparatus 1 accesses the machine learning DB 31 via an external network and an interface unit of the test apparatus 1.
 The test result DB 301 stores test result information 301A about the test results of existing projects. The test result information 301A is composed of information such as a test case number, the name of the test process, a model name, whether or not an error was detected, the content of the error, the location where the error was detected, the cause of the error, a sequence diagram, and a test log. An error here refers to a defect, design mistake, or the like that may cause a post-shipment failure. The test result information 301A is automatically generated by the test result comparison unit 112 described later. In the present embodiment, the model name is the product code under development, but it is not limited to this; for example, it may be a product name.
 The post-shipment defect DB 302 stores post-shipment defect information 302A about defects found after shipment in existing projects. The post-shipment defect information 302A is composed of information such as a system name, a model name, a software version name, the date and time the defect occurred, the content of the defect, the cause of the defect, and the defect detection density. The post-shipment defect information 302A is entered manually by a person in charge in the quality assurance department.
 The test technology DB 303 stores test technology information 303A about test techniques. The test technology information 303A is composed of information such as a test technique name, test viewpoints, and test conditions. Test technique names are, for example, "white box test", "black box test", "boundary value analysis", "equivalence partitioning", "CFD", "data flow", and "decision table". The test technology information 303A is entered manually by a test engineer.
 The domain knowledge DB 304 stores domain knowledge information 304A about the product field of existing projects. The domain knowledge information 304A is composed of information about the product configuration, the functions the product has, hardware dependencies, and the like. For example, in an air conditioner project, the product configuration includes an "indoor unit" and an "outdoor unit". The domain knowledge information 304A is entered manually by engineers who participated in the existing projects.
 The project DB 305 stores project information 305A about existing projects. The project information 305A is composed of information such as the development scale, development period, development progress, number of development personnel, development method, reuse rate, productivity, model name, function name, device version, name of the person in charge, process, error detection density, and risks. The project information 305A is entered manually by engineers participating in the project.
 The test case DB 306 stores test case information 306A about the tests executed for existing projects. The test case information 306A is composed of information such as a test case number, input values, execution preconditions, expected values, and execution postconditions. The test case information 306A is generated by the test case information acquisition/generation unit 109 described later. The execution precondition information includes, for example, the test case numbers of the test cases that need to be executed before that test case. The expected value is the value expected as the information indicated by the response signal when the device under test 2A operates correctly. The execution postcondition information includes, for example, the test case numbers of the test cases that need to be executed after that test case.
 The test scenario DB 307 stores test scenarios 307A that describe, in time series, the content of the tests performed on existing projects. A test scenario 307A is written in a DSL (domain-specific language) and is, for example, data in XML format.
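As an illustration of the kind of XML-format scenario data the test scenario DB 307 might hold, the following sketch builds a small scenario with Python's standard library. The element and attribute names are assumptions for illustration only; the actual DSL is not defined in this document.

```python
# Hypothetical sketch of an XML-format test scenario in the spirit of
# test scenario DB 307: a time-ordered series of test-case steps, each
# with an expected value. Tag/attribute names are illustrative only.
import xml.etree.ElementTree as ET

scenario = ET.Element("testScenario", {"project": "sample-project"})
for case_no, expected in [("TC-001", "ACK"), ("TC-002", "version=2")]:
    step = ET.SubElement(scenario, "step", {"testCase": case_no})
    ET.SubElement(step, "expected").text = expected

xml_text = ET.tostring(scenario, encoding="unicode")
print(xml_text)
```

A scenario serialized this way could then be parsed back step by step by a test scenario execution component.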
 The document DB 308 stores document information 308A such as design documents, requirement specifications, test specifications, source code, state transition diagrams, specification change information, and sequence diagrams. The document information 308A is created by engineers who participated in the existing projects. Sequence diagrams can also be generated by the sequence diagram generation unit 113 described later.
 The structured DB 309 is a database designed based on the relational data model. The contents of the test result information 301A, the post-shipment defect information 302A, the test technology information 303A, the domain knowledge information 304A, and the project information 305A are structured and stored in the structured DB 309 by the data structuring unit 101 described later. The structured DB 309 includes information such as the model name, system name, function name, and test case number, whether or not an error was detected in each test case, and whether or not that error escaped into a later process or as a post-shipment defect.
 The supervised DB 310 stores supervised data 310A used for machine learning of the error prediction model 32. The supervised data 310A is generated by the supervised data generation unit 102 described later based on the information in the structured DB 309. The supervised data 310A is, for example, data in which each test case number and the function name corresponding to the test case are given correct-answer labels indicating whether an error was detected in the test case and whether that error escaped into a later process or as a post-shipment defect.
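As a sketch of the shape such supervised data might take, the rows below pair a test case number and function name with the two correct-answer labels described above. The column names and sample values are assumptions for illustration, not taken from the embodiment.

```python
# Illustrative shape of supervised data 310A: each row ties a test case
# number and function name to two labels - whether the test case detected
# an error, and whether that error escaped to a later process or to the
# field. Field names are hypothetical.
supervised_rows = [
    {"test_case": "TC-001", "function": "indoor_unit_control",
     "error_detected": 1, "escaped": 0},
    {"test_case": "TC-002", "function": "outdoor_unit_control",
     "error_detected": 0, "escaped": 0},
    {"test_case": "TC-003", "function": "communication",
     "error_detected": 1, "escaped": 1},
]

# Label vectors for the two prediction targets could then be extracted:
y_detected = [row["error_detected"] for row in supervised_rows]
y_escaped = [row["escaped"] for row in supervised_rows]
print(y_detected, y_escaped)
```

Unsupervised data 311A would have the same identifying columns but no label fields.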
 The unsupervised DB 311 stores unsupervised data 311A used for machine learning of the error prediction model 32. The unsupervised data 311A is generated by the unstructured data analysis unit 103 described later based on the document information 308A (for example, requirement specifications). The unsupervised data 311A includes, for example, a test case number and the function name corresponding to the test case, but does not include the correct-answer labels indicating whether an error was detected in each test case or whether that error escaped into a later process or as a post-shipment defect.
 FIGS. 3 to 5 are diagrams showing the functional configuration of the processor 10 and the input/output data for each functional configuration.
 FIG. 3 shows the functional configuration of the processor 10, and the input/output data for each functional configuration, up to the generation of the error prediction model 32 based on the information stored in the machine learning DB 31. As its functional configuration, the processor 10 includes a data structuring unit 101, a supervised data generation unit 102, an unstructured data analysis unit 103, and a learning unit 104.
 When the test result information 301A, the post-shipment defect information 302A, the test technology information 303A, the domain knowledge information 304A, and the project information 305A about existing projects are input, the data structuring unit 101 stores this information in the structured DB 309.
 When the information in the structured DB 309 is input, the supervised data generation unit 102 generates the supervised data 310A. Details of the operation will be described later.
 The unstructured data analysis unit 103 performs text analysis on the document DB 308 and the supervised data 310A, updates the supervised data 310A, and generates the unsupervised data 311A. Details of the operation will be described later.
 When the supervised data 310A and the unsupervised data 311A are input, the learning unit 104 trains the error prediction model 32 using the unsupervised data 311A. Details of the operation will be described later. The learning unit 104 trains the error prediction model 32 by, for example, the bootstrap method.
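One plausible reading of this bootstrap training over supervised and unsupervised data is self-training: fit on the labeled rows, pseudo-label the unlabeled rows with the current model, fold them back in, and refit. The sketch below illustrates that loop under this assumption; the per-function majority-vote classifier is a stub standing in for the error prediction model 32, and all names and data are hypothetical.

```python
# Minimal self-training ("bootstrap") sketch: labeled rows are
# (function_name, error_label) pairs; unlabeled rows are function names
# taken from documents. The stub classifier predicts the majority label
# seen for each function name.
from collections import defaultdict

def fit(labeled):
    # stub classifier: majority label per function name
    tally = defaultdict(list)
    for func, label in labeled:
        tally[func].append(label)
    return {f: round(sum(v) / len(v)) for f, v in tally.items()}

def self_train(labeled, unlabeled, rounds=2):
    labeled = list(labeled)
    for _ in range(rounds):
        model = fit(labeled)
        # pseudo-label the unlabeled rows the current model can cover
        newly = [(f, model[f]) for f in unlabeled if f in model]
        unlabeled = [f for f in unlabeled if f not in model]
        labeled.extend(newly)
    return fit(labeled)

labeled = [("indoor_unit_ctrl", 1), ("indoor_unit_ctrl", 1), ("comm_ctrl", 0)]
model = self_train(labeled, ["indoor_unit_ctrl", "comm_ctrl", "new_function"])
print(model)
```

Functions never covered by the labeled data (here `new_function`) remain unlabeled, which is why broader training data improves coverage.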
 FIGS. 4 and 5 show the functional configuration of the processor 10, and the input/output data for each functional configuration, from executing the test of the test target device 2A based on the information about the project to be tested stored in the structured DB 309 to generating information about the test results. As its functional configuration, the processor 10 includes an input data generation unit 105, an error prediction information acquisition unit 106, a risk evaluation unit 107, a test item generation unit 108, and a test case information acquisition/generation unit 109. The processor 10 further includes a test scenario generation unit 110, a test scenario execution unit 111, a test result comparison unit 112, and a sequence diagram generation unit 113.
 入力データ生成部105は、構造化DBに格納された試験対象のプロジェクトに関するプロジェクト情報が入力されると、入力データ316を生成する。 The input data generation unit 105 generates the input data 316 when the project information about the project to be tested stored in the structured DB is input.
 誤り予測情報取得部106は、入力データ316が入力されると、当該入力データ316を誤り予測モデル32に入力して、誤り予測情報312を取得する。誤り予測情報312は、試験対象のプロジェクトにて生じ得る誤りを示す情報である。誤り予測情報312は、テストケース番号と、当該テストケースの内容、当該テストケースにおいて誤りが生じ得るか、及び当該誤りが後の工程や出荷後の不具合として流出し得るかについての情報と、を対応付けた情報である。誤り予測情報312は、例えば、補助記憶部30が有する、大容量のデータを一時的に記憶するための記憶領域(図示せず)に記憶される。 When the input data 316 is input, the error prediction information acquisition unit 106 inputs the input data 316 to the error prediction model 32 and acquires the error prediction information 312. The error prediction information 312 is information indicating an error that may occur in the project to be tested. The error prediction information 312 includes a test case number, the content of the test case, information about whether an error may occur in the test case, and whether the error can be leaked as a defect in a later process or after shipping. It is the associated information. The error prediction information 312 is stored in, for example, a storage area (not shown) of the auxiliary storage unit 30 for temporarily storing a large amount of data.
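As a reading aid, the record layout of the error prediction information 312 described above (test case number, test case content, whether an error may occur, and whether it may leak downstream) can be modeled as a simple record type. The class and field names are illustrative assumptions, not taken from the specification.

```python
from dataclasses import dataclass

# One record of error prediction information 312, as described above.
# Field names are illustrative assumptions.
@dataclass
class ErrorPrediction:
    test_case_no: int
    content: str
    may_fail: bool   # an error may occur in this test case
    may_leak: bool   # the error may escape to a later process or shipment

record = ErrorPrediction(1, "power-on handshake", True, False)
```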
 リスク評価部107は、誤り予測情報312が入力されると、当該誤り予測情報312のテストケース毎にリスク評価値を算出して付加する。以下、リスク評価値が付加された誤り予測情報を誤り予測情報312Aという。リスク評価値については後述する。 When the error prediction information 312 is input, the risk evaluation unit 107 calculates and adds a risk evaluation value for each test case of the error prediction information 312. Hereinafter, the error prediction information to which the risk evaluation value is added is referred to as error prediction information 312A. The risk evaluation value will be described later.
 テスト項目生成部108は、誤り予測情報312Aが入力されると、テスト項目情報313を生成する。テスト項目情報313は、実行する試験の項目が記載された情報である。テスト項目情報313は、テストケース番号と、当該テストケースの内容を示す情報と、を対応付けた情報である。テスト項目情報313には、テストケース毎に、テスト項目番号が付されている。 When the error prediction information 312A is input, the test item generation unit 108 generates the test item information 313. The test item information 313 is information in which items of tests to be executed are described. The test item information 313 is information in which a test case number and information indicating the content of the test case are associated with each other. The test item number is attached to the test item information 313 for each test case.
 テストケース情報取得/生成部109は、テスト項目情報313が入力されると、テストケースDB306から、テスト項目情報313の各テスト項目に対応するテストケース情報306Aを取得する。テストケースDB306に、テスト項目に対応するテストケース情報がないと判断した場合、テストケース情報取得/生成部109は、ユーザ操作に従って、当該テスト項目に対応するテストケース情報306Aを生成する。テストケース情報取得/生成部109は、生成したテストケース情報306Aを、テストケースDB306に格納する。 When the test item information 313 is input, the test case information acquisition/generation unit 109 acquires the test case information 306A corresponding to each test item of the test item information 313 from the test case DB 306. When it is determined that the test case information corresponding to the test item does not exist in the test case DB 306, the test case information acquisition/generation unit 109 generates the test case information 306A corresponding to the test item according to the user operation. The test case information acquisition/generation unit 109 stores the generated test case information 306A in the test case DB 306.
 テストシナリオ生成部110は、テストケース情報306Aが入力されると、テストシナリオ307Aを生成する。 When the test case information 306A is input, the test scenario generation unit 110 generates a test scenario 307A.
 テストシナリオ実行部111は、テストシナリオ307Aが入力されると、試験対象装置2Aをテストシナリオ307Aに従って動作させて、試験を実行する。テストシナリオ実行部111は、テストシナリオ307Aに従って、動作の要求信号を試験対象装置2Aに送信することによって、試験対象装置2Aを動作させる。試験対象装置2Aは、要求信号に対応する動作の結果を示す応答信号を、テストシナリオ実行部111に送信する。テストシナリオ実行部111は、要求信号が示す情報と応答信号が示す情報とを対応付けてテスト結果情報314を生成して、当該テスト結果情報314をテスト結果比較部112に出力する。 When the test scenario 307A is input, the test scenario execution unit 111 operates the test target device 2A according to the test scenario 307A and executes the test. The test scenario execution unit 111 operates the test target device 2A by transmitting an operation request signal to the test target device 2A according to the test scenario 307A. The test target device 2A transmits a response signal indicating the result of the operation corresponding to the request signal to the test scenario execution unit 111. The test scenario execution unit 111 associates the information indicated by the request signal with the information indicated by the response signal to generate the test result information 314, and outputs the test result information 314 to the test result comparison unit 112.
 テスト結果比較部112は、テスト結果情報314が入力されると、テスト結果比較情報315を生成する。テスト結果比較情報315は、要求信号が示す情報と、応答信号が示す情報と、期待値とを対応付けたものである。 When the test result information 314 is input, the test result comparison unit 112 generates test result comparison information 315. The test result comparison information 315 is information in which the information indicated by the request signal, the information indicated by the response signal, and the expected value are associated with each other.
 シーケンス図生成部113は、テスト結果情報314が入力されると、シーケンス図308Aを生成して、ドキュメントDB308に格納する。 When the test result information 314 is input, the sequence diagram generation unit 113 generates the sequence diagram 308A and stores it in the document DB 308.
2.動作
 以上のように構成される試験装置1の動作について、図1~7を参照して説明する。試験装置1は、機械学習DB31に格納された情報に基づいて誤り予測モデル32を生成する。試験装置1は、構造化DB309に格納された試験対象のプロジェクトについての情報から入力データ316を生成し、当該入力データ316を誤り予測モデル32に入力して、試験対象装置2Aを試験するためのテストシナリオ307Aを生成する。試験装置1は、テストシナリオ307Aに従って、試験対象装置2Aの試験を実行する。
2. Operation The operation of the test apparatus 1 configured as above will be described with reference to FIGS. 1 to 7. The test apparatus 1 generates the error prediction model 32 based on the information stored in the machine learning DB 31. The test apparatus 1 generates input data 316 from the information about the test target project stored in the structured DB 309, inputs the input data 316 into the error prediction model 32, and generates a test scenario 307A for testing the test target device 2A. The test apparatus 1 executes the test of the test target device 2A according to the test scenario 307A.
2-1.誤り予測モデルの生成動作
 図6は、試験装置1の、誤り予測モデル32を生成する動作を示すフローチャートである。以下、図6のフローチャートに即して、試験装置1の動作を説明する。
2-1. Error Prediction Model Generation Operation FIG. 6 is a flowchart showing the operation of the test apparatus 1 for generating the error prediction model 32. Hereinafter, the operation of the test apparatus 1 will be described with reference to the flowchart of FIG.
　最初に、データ構造化部101は、テスト実績情報301A、出荷後不具合情報302A、テスト技術情報303A、ドメイン知識情報304A、プロジェクト情報305Aを、構造化DB309に格納する(S101)。データ構造化部101は、機種名、機能名、システム名、テストケース番号、テストケースの内容、誤りや不具合の内容、誤りが検出された記録があるか否か、当該誤りが後の工程や出荷後の不具合として流出した記録があるか否か等についての情報を対応付けて、構造化DB309に格納する。 First, the data structuring unit 101 stores the test record information 301A, the post-shipment defect information 302A, the test technology information 303A, the domain knowledge information 304A, and the project information 305A in the structured DB 309 (S101). The data structuring unit 101 associates information such as the model name, function name, system name, test case number, test case content, content of errors and defects, whether there is a record of the error being detected, and whether there is a record of the error leaking as a defect into a later process or after shipment, and stores the associated information in the structured DB 309.
　次に、教師ありデータ生成部102は、構造化DB309の情報が入力されると、教師ありデータ310Aを生成する(S102)。教師ありデータ生成部102は、構造化DB309が含むテストケース番号に対して、誤りが検出された記録があるか否か、当該誤りが後の工程や出荷後の不具合として流出した記録があるか否か、及び、誤りや不具合の内容についての正解ラベルを付して、教師ありデータ310Aを生成する。 Next, when the information in the structured DB 309 is input, the supervised data generation unit 102 generates the supervised data 310A (S102). The supervised data generation unit 102 generates the supervised data 310A by attaching, to each test case number included in the structured DB 309, correct-answer labels indicating whether there is a record of an error being detected, whether there is a record of the error leaking as a defect into a later process or after shipment, and the content of the error or defect.
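The labeling step above can be sketched as follows. The row layout of the structured DB and the label keys are illustrative assumptions; only the idea (each test case number gets correct-answer labels for "error detected", "leaked downstream", and the error description) follows the text.

```python
# Building supervised data 310A: each structured-DB row, keyed by test
# case number, receives correct-answer labels. Field names are assumed.

structured_rows = [
    {"test_case_no": 1, "error_detected": True,  "leaked": False,
     "error_desc": "timeout on handshake"},
    {"test_case_no": 2, "error_detected": False, "leaked": False,
     "error_desc": ""},
]

def make_supervised(rows):
    data = []
    for r in rows:
        labels = {
            "error_detected": r["error_detected"],  # record of a detected error
            "leaked": r["leaked"],                  # leaked downstream/post-shipment
            "error_desc": r["error_desc"],          # content of the error/defect
        }
        data.append((r["test_case_no"], labels))
    return data

supervised = make_supervised(structured_rows)
```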
 次に、非構造化データ解析部103は、ドキュメントDB308をテキスト解析する(S103)。例えば、非構造化データ解析部103は、ドキュメントDB308を、機能名を検索キーワードとしてテキスト解析する。 Next, the unstructured data analysis unit 103 performs text analysis on the document DB 308 (S103). For example, the unstructured data analysis unit 103 text-analyzes the document DB 308 using the function name as a search keyword.
　次に、非構造化データ解析部103は、テキスト解析の結果に基づいてドキュメントDB308の情報と教師ありデータ310Aの情報との間の類似性を解析して、教師ありデータ310Aを更新し、教師なしデータ311Aを生成する(S104)。 Next, the unstructured data analysis unit 103 analyzes the similarity between the information in the document DB 308 and the information in the supervised data 310A based on the text analysis result, updates the supervised data 310A, and generates the unsupervised data 311A (S104).
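A minimal stand-in for the keyword/similarity analysis of S103 and S104 is bag-of-words cosine similarity between a document and a test case description. The tokenizer (whitespace split) and the sample texts are illustrative assumptions; the specification does not fix a particular similarity measure.

```python
import math
from collections import Counter

# Bag-of-words cosine similarity between two texts, a toy version of
# the keyword-based similarity analysis. Tokenization is assumed.
def cosine_sim(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

doc = "power supply handshake timeout during startup sequence"
case = "startup handshake timeout test"
other = "display brightness calibration"

# a document similar to a labeled test case could seed the unsupervised data
similar_to_doc = cosine_sim(doc, case) > cosine_sim(doc, other)
```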
 次に、学習部104は、教師ありデータ310A及び教師なしデータ311Aが入力されると、半教師あり学習アルゴリズムにより誤り予測モデル32を学習させる(S105)。 Next, when the supervised data 310A and the unsupervised data 311A are input, the learning unit 104 trains the error prediction model 32 by the semi-supervised learning algorithm (S105).
　上述したように、試験装置1は、ソースコードに加えて、要求仕様書等の情報に基づいて誤り予測モデル32を学習させる。したがって、本実施の形態の試験装置1は、ホワイトボックステストのテスト項目に加えて、例えば、要求仕様書に記載の要求に基づく試験等のブラックボックステストのテスト項目も出力する誤り予測モデル32を構築することができる。 As described above, the test apparatus 1 trains the error prediction model 32 based not only on the source code but also on information such as the requirement specifications. Therefore, the test apparatus 1 of the present embodiment can construct an error prediction model 32 that outputs, in addition to white box test items, black box test items such as tests based on the requirements described in the requirement specifications.
2-2.テストシナリオの生成及び実行動作
 図7は、試験装置1の、テストシナリオ307Aを生成して、当該テストシナリオ307Aに従って、試験対象装置2Aの試験を実行する動作を示すフローチャートである。以下、図7のフローチャートに即して、試験装置1の動作を説明する。
2-2. Generation and Execution Operation of Test Scenario FIG. 7 is a flowchart showing the operation of the test apparatus 1 for generating the test scenario 307A and executing the test of the test target apparatus 2A according to the test scenario 307A. Hereinafter, the operation of the test apparatus 1 will be described with reference to the flowchart of FIG.
　最初に、プロセッサ10は、ユーザ操作に従って、テスト戦略を設定する(S200)。例えば、プロセッサ10は、後述するリスク評価値が所定の閾値以上であるテストケースのうち、全てのテストケースに対応するテスト項目情報313を生成すること、或いは、重要なテストケースのみに対応するテスト項目情報313を生成することである。所定の閾値は、ユーザにより、事前に設定される。例えば、所定の閾値は、ユーザが、テスト項目情報313Aの全項目数のうちの実行する項目数の割合を示す網羅度を入力することにより、事前に設定される。重要なテストケースは、事前にユーザにより設定される。 First, the processor 10 sets a test strategy according to a user operation (S200). For example, the test strategy is to generate test item information 313 for all of the test cases whose risk evaluation value (described later) is equal to or greater than a predetermined threshold, or to generate test item information 313 only for important test cases. The predetermined threshold is set in advance by the user; for example, it is set by the user inputting a coverage degree indicating the ratio of the number of items to be executed to the total number of items in the test item information 313. Important test cases are also set in advance by the user.
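One plausible reading of how a coverage degree determines the threshold (not the specification's definition) is to rank the predicted risk values and keep the top fraction. The risk values below are invented for illustration.

```python
# Derive a risk threshold from a user-supplied coverage ratio: the
# threshold is the smallest risk value inside the top `coverage`
# fraction of test cases ranked by risk. Values are assumptions.

def threshold_from_coverage(risk_values, coverage):
    ranked = sorted(risk_values, reverse=True)
    n_exec = max(1, round(len(ranked) * coverage))  # items to execute
    return ranked[n_exec - 1]

risks = [0.9, 0.2, 0.7, 0.4, 0.6]
thr = threshold_from_coverage(risks, 0.4)   # execute the top 40% = 2 items
selected = [r for r in risks if r >= thr]
```

With a coverage of 0.4 over five cases, the two highest-risk cases (0.9 and 0.7) are selected and the threshold settles at 0.7.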
 次に、入力データ生成部105は、構造化DB309に格納された、試験対象のプロジェクトについての情報が入力されると、入力データ316を生成する(S201)。試験対象のプロジェクトについての情報は、例えば、操作部50を介して、ユーザ操作に従って入力される。 Next, when the information about the project to be tested, which is stored in the structured DB 309, is input, the input data generation unit 105 generates the input data 316 (S201). The information about the project to be tested is input according to a user operation via the operation unit 50, for example.
 次に、誤り予測情報取得部106は、入力データ316が入力されると、当該プロジェクト情報305Aを誤り予測モデル32に入力して、誤り予測情報312を取得する(S202)。当該プロジェクト情報305Aは、例えば、プロジェクトDB305に格納されている。 Next, when the input data 316 is input, the error prediction information acquisition unit 106 inputs the project information 305A into the error prediction model 32 and acquires the error prediction information 312 (S202). The project information 305A is stored in the project DB 305, for example.
　次に、リスク評価部107は、誤り予測情報312が入力されると、当該誤り予測情報312のテストケース毎にリスク評価値を算出して、当該リスク評価値を付加した誤り予測情報312Aを生成する(S203)。リスク評価値は、例えば、誤りが検出される見込み度合いと、当該誤りの影響度合いとを積算した値である。影響度合いは、各テストケースの間の依存関係及び関連性を分析することによって算出される。 Next, when the error prediction information 312 is input, the risk evaluation unit 107 calculates a risk evaluation value for each test case of the error prediction information 312, and generates error prediction information 312A to which the risk evaluation value is added (S203). The risk evaluation value is, for example, a value obtained by multiplying the likelihood that an error will be detected by the degree of influence of the error. The degree of influence is calculated by analyzing the dependencies and relationships between the test cases.
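The likelihood-times-influence product above can be sketched as follows. Approximating the degree of influence by the number of test cases that depend on a given case is an assumption made for illustration; the specification only says the influence is derived from dependencies and relationships.

```python
# Risk evaluation value = likelihood of detecting an error x influence,
# where influence is approximated here by counting dependent test cases.
# The dependency data and numbers are illustrative assumptions.

def influence(case_no, dependencies):
    """Count test cases that directly depend on `case_no`."""
    return sum(1 for deps in dependencies.values() if case_no in deps)

def risk_value(likelihood, case_no, dependencies):
    # +1 so a case with no dependents still carries its own risk
    return likelihood * (1 + influence(case_no, dependencies))

# case number -> set of cases it depends on
deps = {1: set(), 2: {1}, 3: {1}, 4: {2}}
r1 = risk_value(0.5, 1, deps)   # two dependents -> influence 2
r4 = risk_value(0.5, 4, deps)   # no dependents  -> influence 0
```

Under this approximation, case 1 (on which cases 2 and 3 depend) scores three times the risk of leaf case 4 at equal likelihood.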
 次に、テスト項目生成部108は、誤り予測情報312Aが入力されると、テスト戦略及びリスク評価値に基づいて、テスト項目情報313を生成する(S204)。 Next, when the error prediction information 312A is input, the test item generation unit 108 generates the test item information 313 based on the test strategy and the risk evaluation value (S204).
 次に、テストケース情報取得/生成部109は、テストケースDB306から、テスト項目情報313の各テスト項目に対応するテストケース情報306Aを取得する(S205)。このとき、テストケース情報取得/生成部109は、テスト項目情報313に、テストケースDB306に存在しないテスト項目があると判断した場合、当該テスト項目に対応するテストケース情報306Aを生成する。また、テストケース情報取得/生成部109は、各テストケース情報306Aの実行事前条件および実行事後条件に基づいて、各テストケースの実行する順番を設定する。また、テストケース情報取得/生成部109は、ユーザ操作に従って、各テストケースの実行する順番を変更してもよい。 Next, the test case information acquisition/generation unit 109 acquires the test case information 306A corresponding to each test item of the test item information 313 from the test case DB 306 (S205). At this time, when the test case information acquisition/generation unit 109 determines that the test item information 313 includes a test item that does not exist in the test case DB 306, it generates the test case information 306A corresponding to the test item. The test case information acquisition/generation unit 109 also sets the order of execution of each test case based on the pre-execution condition and post-execution condition of each test case information 306A. Also, the test case information acquisition/generation unit 109 may change the order of execution of each test case according to a user operation.
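One way to realize the ordering by execution preconditions and postconditions described above is a topological sort, under the assumption that a case may run once the postconditions of earlier cases cover its preconditions. The condition strings and case names are illustrative.

```python
from collections import deque

# Order test cases so each runs only after the cases whose execution
# postconditions satisfy its preconditions (Kahn's algorithm).
# Cases and conditions are illustrative assumptions.

cases = {
    "TC1": {"pre": set(),          "post": {"logged_in"}},
    "TC2": {"pre": {"logged_in"},  "post": {"config_set"}},
    "TC3": {"pre": {"config_set"}, "post": set()},
}

def order_cases(cases):
    providers = {}                      # condition -> cases that establish it
    for name, c in cases.items():
        for cond in c["post"]:
            providers.setdefault(cond, set()).add(name)
    indeg = {n: 0 for n in cases}
    succ = {n: set() for n in cases}
    for name, c in cases.items():       # edge: provider -> consumer
        for cond in c["pre"]:
            for p in providers.get(cond, ()):
                if name not in succ[p]:
                    succ[p].add(name)
                    indeg[name] += 1
    queue = deque(sorted(n for n, d in indeg.items() if d == 0))
    out = []
    while queue:
        n = queue.popleft()
        out.append(n)
        for m in sorted(succ[n]):
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return out

order = order_cases(cases)
```

A user-driven reordering, as the text allows, would simply permute `order` afterwards, subject to the same constraints.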
 次に、テストシナリオ生成部110は、テストケース情報取得/生成部109が取得/生成したテストケース情報306Aに基づいて、テストシナリオ307Aを生成する(S206)。 Next, the test scenario generation unit 110 generates a test scenario 307A based on the test case information 306A acquired/generated by the test case information acquisition/generation unit 109 (S206).
 次に、テストシナリオ実行部111は、試験対象装置2Aをテストシナリオ307Aに従って動作させて、試験を実行する(S207)。テストシナリオ実行部111は、動作の要求信号を、試験対象装置2Aに送信することによって、試験対象装置2Aを動作させる。試験対象装置2Aは、要求信号に対応する動作の結果を示す応答情報を、テストシナリオ実行部111に送信する。テストシナリオ実行部111は、要求信号が示す情報と応答信号が示す情報とを対応付けてテスト結果情報314を生成し、当該テスト結果情報314をテスト結果比較部112に出力する。 Next, the test scenario execution unit 111 operates the test target device 2A according to the test scenario 307A to execute the test (S207). The test scenario execution unit 111 operates the test target device 2A by transmitting an operation request signal to the test target device 2A. The test target device 2A transmits response information indicating the result of the operation corresponding to the request signal to the test scenario execution unit 111. The test scenario executing unit 111 associates the information indicated by the request signal with the information indicated by the response signal to generate the test result information 314, and outputs the test result information 314 to the test result comparing unit 112.
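The request/response exchange of S207 can be sketched as a loop that sends each request in scenario order and records the paired response as test result information. The signal payloads and the simulated device function are illustrative assumptions standing in for the real exchange with the test target device 2A.

```python
# Pair each operation request with the device's response to build
# test result information 314. Payloads and the fake device are assumed.

def run_scenario(scenario, device):
    """Send each request in order; record (request, response) pairs."""
    results = []
    for request in scenario:
        response = device(request)   # stands in for the exchange with 2A
        results.append({"request": request, "response": response})
    return results

def fake_device(request):
    # toy stand-in for the test target device 2A
    return {"status": "ok", "echo": request["op"]}

scenario = [{"op": "reset"}, {"op": "read_status"}]
test_results = run_scenario(scenario, fake_device)
```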
 次に、テスト結果比較部112は、テスト結果情報314に基づいて、テスト結果比較情報315を生成し、当該テスト結果比較情報315を表示部40に表示する(S208)。テスト結果比較部112は、テスト結果比較情報315の、各要求信号が示す情報と、各応答信号が示す情報と、各期待値と、を対応付けて表示部40に表示する。 Next, the test result comparison unit 112 generates the test result comparison information 315 based on the test result information 314 and displays the test result comparison information 315 on the display unit 40 (S208). The test result comparison unit 112 displays the information indicated by each request signal, the information indicated by each response signal, and each expected value of the test result comparison information 315 in association with each other on the display unit 40.
 次に、シーケンス図生成部113は、テスト結果情報314に基づいて、シーケンス図308Aを生成する(S209)。シーケンス図生成部113は、シーケンス図308Aを、ドキュメントDB308に格納する。 Next, the sequence diagram generation unit 113 generates the sequence diagram 308A based on the test result information 314 (S209). The sequence diagram generation unit 113 stores the sequence diagram 308A in the document DB 308.
 上述したように、誤り予測モデル32が出力する誤り予測情報312は、ホワイトボックステストに加えて、ブラックボックステストに関する情報も含む。したがって、試験装置1は、ホワイトボックステストに加えて、ブラックボックステストも実行することができる。 As described above, the error prediction information 312 output by the error prediction model 32 includes information about the black box test in addition to the white box test. Therefore, the test apparatus 1 can execute the black box test in addition to the white box test.
3.まとめ
 以上説明したように、本実施の形態に係る試験装置1は、処理部10を備える。処理部10は、試験対象のプロジェクトに関する情報から、当該プロジェクトにおいて試験を行うためのテストシナリオ307Aを生成する。処理部10は、学習部104と、誤り予測情報取得部106と、リスク評価部107と、テスト項目生成部108と、テストケース情報取得/生成部109と、テストシナリオ生成部110と、を含む。学習部104は、入力されたテストケースが誤りを検出する見込み度合いを示す情報を含む誤り予測情報を出力する誤り予測モデル32に対し、機械学習で用いられる種々の情報を記憶するデータベースである機械学習データベース31に格納された、少なくとも既存のプロジェクトに関する情報およびテストシナリオ307Aにおける個々の処理を規定する複数のテストケースに関する情報に基づいて、各プロジェクトにおける試験での各テストケースが誤りを検出したか否かを学習させる。誤り予測情報取得部106は、試験対象のプロジェクトに関する情報を、誤り予測モデル32に入力して、各テストケースと対応する各誤り予測情報312を取得する。リスク評価部107は、取得された誤り予測情報312と対応するテストケース情報306Aによって誤りが検出される見込み度合い及び当該誤りの影響度合いを示すリスク評価値を算出して、当該誤り予測情報に付加する。テスト項目生成部108は、リスク評価値が付加された誤り予測情報312Aに基づいて、実施する試験項目を示すテスト項目情報313を生成する。テストケース情報取得/生成部109は、機械学習データベース31に含まれるテストケースデータベース306から、テスト項目情報313が含む試験の項目に対応するテストケース情報306Aを取得する。テストシナリオ生成部110は、取得されたテストケース情報306Aに基づいて、テストシナリオ307Aを生成する。
3. Summary As described above, the test apparatus 1 according to the present embodiment includes the processing unit 10. The processing unit 10 generates, from information about a test target project, a test scenario 307A for performing a test in that project. The processing unit 10 includes a learning unit 104, an error prediction information acquisition unit 106, a risk evaluation unit 107, a test item generation unit 108, a test case information acquisition/generation unit 109, and a test scenario generation unit 110. The learning unit 104 trains the error prediction model 32, which outputs error prediction information including information indicating the likelihood that an input test case detects an error, based on at least information about existing projects and information about the plurality of test cases defining the individual processes in the test scenario 307A, both stored in the machine learning database 31, which is a database storing various information used in machine learning, so that the model learns whether each test case detected an error in the tests of each project. The error prediction information acquisition unit 106 inputs the information about the test target project into the error prediction model 32 and acquires the error prediction information 312 corresponding to each test case. The risk evaluation unit 107 calculates a risk evaluation value indicating the likelihood that an error will be detected by the test case information 306A corresponding to the acquired error prediction information 312 and the degree of influence of that error, and adds the value to the error prediction information. The test item generation unit 108 generates test item information 313 indicating the test items to be executed based on the error prediction information 312A to which the risk evaluation value is added. The test case information acquisition/generation unit 109 acquires, from the test case database 306 included in the machine learning database 31, the test case information 306A corresponding to the test items included in the test item information 313. The test scenario generation unit 110 generates the test scenario 307A based on the acquired test case information 306A.
 このことにより、試験装置1は、ホワイトボックステストに加えてブラックボックステストも効率化することができる。さらに、本発明の試験装置は、過去の試験情報を利用することにより、過去の試験において生じた誤りと同様の誤りを予測してテストシナリオを生成することができる。 With this, the test apparatus 1 can improve the efficiency of the black box test in addition to the white box test. Furthermore, the test apparatus of the present invention can predict an error similar to an error that occurred in a past test and generate a test scenario by using the past test information.
 テスト項目生成部108は、リスク評価値が、所定の閾値よりも高いテストケース全てについて、テスト項目情報313を生成する。 The test item generation unit 108 generates test item information 313 for all test cases whose risk evaluation value is higher than a predetermined threshold value.
 テスト項目生成部108は、リスク評価値が、所定の閾値よりも高いテストケースのうちの、重要なテストケースについて、テスト項目情報313を生成する。 The test item generation unit 108 generates the test item information 313 for an important test case out of the test cases whose risk evaluation value is higher than a predetermined threshold value.
 このことにより、試験装置1は、予め設定されたテスト戦略に従って、テスト項目情報313を生成することができる。 With this, the test apparatus 1 can generate the test item information 313 according to a preset test strategy.
 テストケース情報取得/生成部109は、テストケースDB306に、テスト項目情報313の各テスト項目が示す試験の内容を示すテストケース情報がないと判断した場合、テストケース情報を生成する。 The test case information acquisition/generation unit 109 generates test case information when it is determined that the test case DB 306 does not have the test case information indicating the content of the test indicated by each test item in the test item information 313.
 このことにより、試験装置1は、テストケースDB306に存在しないテストケース情報を、新たに生成することができる。 With this, the test apparatus 1 can newly generate test case information that does not exist in the test case DB 306.
 プロセッサ10はさらに、テストシナリオ307Aに従って、対象装置の試験を実行する、テストシナリオ実行部111を含む。 The processor 10 further includes a test scenario execution unit 111 that executes a test of the target device according to the test scenario 307A.
　このことにより、試験装置1は、テストシナリオ307Aに従って、試験対象装置の試験を実行することができる。また、試験装置1は、ソースコードとドキュメントとの両面から対象工程で実施すべきテストケースについてテストケース間の依存関係及びリスクを明確にしつつ、過去の類似しているプロジェクトのプログラムのテスト結果を考慮して、総合的に誤りの予測を行うことができる。 With this, the test apparatus 1 can execute the test of the test target device according to the test scenario 307A. In addition, the test apparatus 1 can comprehensively predict errors by clarifying, from both the source code and the documents, the dependencies and risks among the test cases to be executed in the target process, while taking into account the test results of programs from similar past projects.
実施の形態2.
 実施の形態1の試験装置1は、テスト結果比較情報315等を生成して動作を終了した。実施の形態2の試験装置1は、テスト結果比較情報315に基づいて、誤り予測モデル32を学習させて、再度試験を実行する。
Embodiment 2.
The test apparatus 1 according to the first embodiment generated the test result comparison information 315 and the like and ended the operation. The test apparatus 1 according to the second embodiment trains the error prediction model 32 based on the test result comparison information 315 and executes the test again.
 図8及び図9は、本実施の形態の試験装置1の構成要素と各構成要素に対する入出力データとを示す図である。図10は、本実施の形態の試験装置1のシナリオ生成及び実行動作を示すフローチャートである。 8 and 9 are diagrams showing the components of the test apparatus 1 of the present embodiment and the input/output data for each component. FIG. 10 is a flowchart showing the scenario generation and execution operation of the test apparatus 1 according to this embodiment.
 図8に示すように、本実施の形態のプロセッサ10は、実施の形態1の機能的構成に加えて、テスト結果比較情報投入部114を備える。テスト結果比較情報投入部114は、テスト結果比較部112からのテスト結果比較情報315に基づいて、誤り予測モデル32を学習させる。 As shown in FIG. 8, the processor 10 of the present embodiment includes a test result comparison information input unit 114 in addition to the functional configuration of the first embodiment. The test result comparison information input unit 114 trains the error prediction model 32 based on the test result comparison information 315 from the test result comparison unit 112.
 図10に示すように、本実施の形態のプロセッサ10は、実施の形態1の図7のステップS200に変えて、ステップS200Aを実行する。また、本実施の形態のプロセッサ10は、ステップS210,S211の処理も実行する。 As shown in FIG. 10, the processor 10 of the present embodiment executes step S200A instead of step S200 of FIG. 7 of the first embodiment. Further, the processor 10 of the present embodiment also executes the processing of steps S210 and S211.
 最初に、プロセッサ10は、ユーザ操作に従って、テスト戦略及び試験の終了条件を設定する(S200A)。試験の終了条件は、例えば、試験の実行回数である。以下、ステップS201~S209は、図7のものと同様である。 First, the processor 10 sets a test strategy and a test end condition according to a user operation (S200A). The test termination condition is, for example, the number of times the test is executed. Hereinafter, steps S201 to S209 are the same as those in FIG.
 プロセッサ10は、試験の終了条件を満たすまで、ステップS202~S208及びS210の動作を実行する(S210においてNO)。 The processor 10 executes the operations of steps S202 to S208 and S210 until the condition for ending the test is satisfied (NO in S210).
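The feedback loop of this embodiment can be outlined as follows: run the test, feed the results back to retrain the error prediction model, and repeat until the end condition (here, an execution-count limit, per S200A) is met. The test runner and the retraining callback are toy assumptions.

```python
# Embodiment-2 loop in outline: execute the test (S202-S208), check the
# end condition (S210), and otherwise feed the results back into the
# error prediction model before the next iteration. All data is assumed.

def run_until_done(run_test, retrain, max_runs):
    history = []
    for run in range(1, max_runs + 1):
        result = run_test(run)       # stands in for steps S202-S208
        history.append(result)
        if run == max_runs:          # end condition satisfied (S210: YES)
            break
        retrain(result)              # feed the result back into the model

    return history

retrained = []
history = run_until_done(
    run_test=lambda run: {"run": run, "failures": max(0, 3 - run)},
    retrain=retrained.append,
    max_runs=3,
)
```

In this toy run, the simulated failure count drops each iteration and the model is retrained twice before the execution-count limit stops the loop.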
 上述したように、本実施の形態の試験装置1は、ユーザが事前に設定した終了条件を満たすまで自動で試験を実行する。したがって、本実施の形態の試験装置1は、試験を効率化することができる。 As described above, the test apparatus 1 according to the present embodiment automatically executes the test until the end condition preset by the user is satisfied. Therefore, the test apparatus 1 of the present embodiment can improve the efficiency of the test.
 上述したように、本実施の形態の処理部10は、実施の形態1の機能的構成に加えて、テスト結果比較情報投入部114を備える。テスト結果比較情報投入部114は、テストシナリオ307Aに従って対象装置の試験を実行した結果に基づいて、誤り予測モデル32を学習させる。 As described above, the processing unit 10 of the present embodiment includes the test result comparison information input unit 114 in addition to the functional configuration of the first embodiment. The test result comparison information input unit 114 trains the error prediction model 32 based on the result of executing the test of the target device according to the test scenario 307A.
　このことにより、本実施の形態の試験装置1は、誤り予測に基づいてテストケースを自動生成して試験を実行し、そのテスト結果を次ぎの誤り予測に逐次フィードバックすることにより、自動探索的に試験を実行し、誤り予測の精度を向上させることができる。 As a result, the test apparatus 1 of the present embodiment automatically generates test cases based on the error prediction, executes the tests, and sequentially feeds the test results back into the next error prediction, thereby executing tests in an automatically exploratory manner and improving the accuracy of the error prediction.
実施の形態3.
 実施の形態1又は2に係る試験装置は、誤り分析モデルを用いて試験を自動的に実施することができる。但し、当該試験装置は、試験を実施した後、残存すると予測される誤り情報から、リスク分析に基づく最適な対策を自動的に提示することまではできない。
Embodiment 3.
The test apparatus according to the first or second embodiment can automatically perform the test using the error analysis model. However, the test device cannot automatically present the optimum countermeasure based on the risk analysis from the error information that is expected to remain after the test is performed.
　実施の形態3に係る開発支援装置は、残存すると予測される誤り情報から、リスク分析に基づく最適な対策案を自動的にレポートとして生成するために、自動試験実施から試験完了後のプロジェクト診断まで中断なく実施するものである。 The development support apparatus according to the third embodiment carries out, without interruption, the process from automatic test execution to project diagnosis after test completion, in order to automatically generate, as a report, an optimum countermeasure plan based on risk analysis from the error information predicted to remain.
1.構成
 図11は、実施の形態3に係るプロジェクト診断機能装置700の構成を示すブロック図である。プロジェクト診断機能装置700は、実施の形態1及び2に係る試験装置1から取得される、試験対象装置2Aと関連装置2Bとの間の、通信プロトコルに基づく動作に関する試験結果のデータを元に、解析を実行する。ここでのプロジェクトは、ソフトウェア設計を伴った製品(装置)開発やシステム開発等を行うプロジェクトのことをいう。
1. Configuration FIG. 11 is a block diagram showing the configuration of the project diagnosis function device 700 according to the third embodiment. The project diagnosis function device 700 performs analysis based on test result data, acquired from the test apparatus 1 according to the first and second embodiments, regarding operations based on the communication protocol between the test target device 2A and the related device 2B. The project here refers to a project for product (device) development, system development, or the like that involves software design.
　プロジェクト診断機能装置700は、プロセッサ10と、主記憶部20と、補助記憶部30と、表示部40と、操作部50と、通信部60と、を備える試験装置1に付属可能な装置である。試験装置1は、例えば、パーソナルコンピュータである。 The project diagnosis function device 700 is a device that can be attached to the test apparatus 1 including the processor 10, the main storage unit 20, the auxiliary storage unit 30, the display unit 40, the operation unit 50, and the communication unit 60. The test apparatus 1 is, for example, a personal computer.
　プロセッサ10は、信号線を介して他のハードウェアと接続される。プロセッサ10は、中央演算処理装置(CPU)、MPU、DSP、GPU、マイコン、FPGA、ASIC等で実現できる。プロセッサ10は、後述する補助記憶部30に記憶されたOS(Operating System)、アプリケーションプログラム、種々のデータを読み込んで演算処理を実行することにより、種々の機能を実現する。プロセッサ10は、後述する機能的構成を含む。当該機能的構成は、ファームウェアにより実現されてもよい。プロセッサ10は、処理部10の一例である。プロセッサ10と、後述する主記憶部20及び補助記憶部30と、をまとめたハードウェアを、「プロセッシングサーキットリ」ともいう。 The processor 10 is connected to other hardware via signal lines. The processor 10 can be realized by a central processing unit (CPU), an MPU, a DSP, a GPU, a microcomputer, an FPGA, an ASIC, or the like. The processor 10 realizes various functions by reading the OS (Operating System), application programs, and various data stored in the auxiliary storage unit 30 described later and executing arithmetic processing. The processor 10 includes the functional configuration described later. The functional configuration may be implemented by firmware. The processor 10 is an example of the processing unit 10. The hardware combining the processor 10 with the main storage unit 20 and the auxiliary storage unit 30, which will be described later, is also referred to as "processing circuitry".
 主記憶部20は、揮発性の記憶部である。主記憶部20は、RAM(Random Access Memory)等で実現できる。主記憶部20は、試験装置1において使用され、生成され、入出力され、或いは送受信されるデータを一時的に記憶する。 The main storage unit 20 is a volatile storage unit. The main storage unit 20 can be realized by a RAM (Random Access Memory) or the like. The main storage unit 20 temporarily stores the data used, generated, input/output, or transmitted/received in the test apparatus 1.
 補助記憶部30は、不揮発性の記憶部である。補助記憶部30は、ROM(Read Only Memory)、HDD(Hard Disk Drive)、フラッシュメモリ等で実現できる。補助記憶部30は、OS、アプリケーションプログラム、種々のデータを記憶している。OSの少なくとも一部は、主記憶部20にロードされて、プロセッサ10によって実行される。 The auxiliary storage unit 30 is a non-volatile storage unit. The auxiliary storage unit 30 can be realized by a ROM (Read Only Memory), a HDD (Hard Disk Drive), a flash memory, or the like. The auxiliary storage unit 30 stores the OS, application programs, and various data. At least a part of the OS is loaded into the main storage unit 20 and executed by the processor 10.
 補助記憶部30はさらに、機械学習DB620と、機械学習DB620に基づいて生成される、誤り予測モデル32及びプロジェクト診断モデル800を記憶する。誤り予測モデル32及びプロジェクト診断モデル800は、機械学習モデルのうちの一つを具体化したものである。誤り予測モデル32については、実施の形態1及び2において詳しく説明した。プロジェクト診断モデル800は、例えば、ディープニューラルネットワーク(深層学習器)により実現される。
 なお、機械学習DB620は、例えば、外部サーバに格納されて、外部ネットワーク、及び、プロジェクト診断機能装置700のインタフェース部を介して、プロジェクト診断機能装置700からアクセスされるように、プロジェクト診断機能装置700及び機械学習DB31がプロジェクト診断機能システムとして構成されてもよい。
The auxiliary storage unit 30 further stores the machine learning DB 620, and the error prediction model 32 and the project diagnosis model 800 generated based on the machine learning DB 620. The error prediction model 32 and the project diagnosis model 800 are each a concrete instance of a machine learning model. The error prediction model 32 has been described in detail in the first and second embodiments. The project diagnosis model 800 is realized by, for example, a deep neural network (deep learning device).
Note that the project diagnosis function device 700 and the machine learning DB 31 may be configured as a project diagnosis function system such that the machine learning DB 620 is stored in, for example, an external server and is accessed from the project diagnosis function device 700 via an external network and the interface unit of the project diagnosis function device 700.
　プロジェクト診断モデル800は、試験対象のプロジェクトについての残存誤り情報を含む入力データが入力されると、試験対象のプロジェクトにて発生し得るリスクとその対策に関する情報であるリスク及び対策案情報550を出力する。プロジェクト診断モデル800は、機械学習DB620に格納される情報によって生成されて、更新される。機械学習DB620の詳細は後述する。 When input data including residual error information about the test target project is input, the project diagnosis model 800 outputs risk and countermeasure plan information 550, which is information about risks that may occur in the test target project and countermeasures against them. The project diagnosis model 800 is generated and updated using the information stored in the machine learning DB 620. Details of the machine learning DB 620 will be described later.
 The display unit 40 displays character strings and images according to user operations. The display unit 40 is composed of a liquid crystal display, an organic EL display, or the like.
 The operation unit 50 is composed of a keyboard, a mouse, a numeric keypad, and the like. The user operates the project diagnosis function device 700 via the operation unit 50. The operation unit 50 may also include a touch panel that is superimposed on the display unit 40 and can accept a touch operation by the user.
 The communication unit 60 transmits and receives various data to and from the device under test 2A. The communication unit 60 includes a receiver and a transmitter. The receiver receives various data from the device under test 2A. The transmitter transmits various data from the processor 10 to the device under test 2A. The communication unit 60 can be realized by a communication chip, an NIC (Network Interface Card), or the like.
 FIG. 12 is a diagram showing the information stored in the machine learning DB 620 of the auxiliary storage unit 30. The machine learning DB 620 includes a test record DB 301, a post-shipment defect DB 302, a test technology DB 303, a domain knowledge DB 304, and a project DB 305. The machine learning DB 31 further includes a test case DB 306, a test scenario DB 307, a document DB 308, a structured DB 309, a supervised DB 310, a second supervised DB 440, an unsupervised DB 311, a second unsupervised DB 450, a residual error/risk/effect information DB 400, a risk register DB 410, a design technology DB 420, and a traceability information DB 430. The information stored in the machine learning DB 620 is an example of information about existing projects for which test data and residual error prediction information already exist.
 As described above, the project diagnosis function device 700 and the machine learning DB 31 may be configured as a project diagnosis function system in which, for example, the machine learning DB 620 is stored on an external server and is accessed from the project diagnosis function device 700 via an external network and the interface unit of the project diagnosis function device 700.
 The supervised DB 310 stores supervised data 310A used for the machine learning of the error prediction model 32. The supervised data 310A is generated based on the information in the structured DB 309 by the supervised data generation unit 102 described in Embodiment 1. The supervised data 310A is, for example, data in which each pair of a test case number and the function name corresponding to the test case is given a correct-answer label indicating whether an error was detected in the test case and whether the error leaked into a later process or became a post-shipment defect.
 The unsupervised DB 311 stores unsupervised data 311A used for the machine learning of the error prediction model 32. The unsupervised data 311A is generated based on document information 308A (for example, required specifications) by the unstructured data analysis unit 103 described in Embodiment 1. The unsupervised data 311A includes, for example, a test case number and the function name corresponding to the test case. The unsupervised data 311A does not include a correct-answer label indicating whether an error was detected in each test case or whether the error leaked into a later process or became a post-shipment defect.
 The second supervised DB 440 stores second supervised data 440A used for the machine learning of the project diagnosis model 800. The second supervised data 440A is generated based on the information in the structured DB 309 by a second supervised data creation unit 52, which will be described later. The second supervised data 440A is, for example, data in which residual error prediction information (number of cases, classification, occurrence location, and the like) based on system test results is given a correct-answer label indicating what risks were detected and whether the countermeasure plans implemented against those risks were effective.
 The second unsupervised DB 450 stores second unsupervised data 450A used for the machine learning of the project diagnosis model 800. The second unsupervised data 450A is generated based on document information 308A (for example, required specifications or source code) by a second unstructured data analysis unit 55, which will be described later. The second unsupervised data 450A includes, for example, risks corresponding to residual error prediction information (number of cases, classification, occurrence location, and the like) and countermeasures against those risks. The second unsupervised data 450A does not include a correct-answer label indicating whether the risks corresponding to the residual error prediction information and the countermeasures against them were effective.
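As a concrete illustration (not part of the embodiments themselves), the difference between the two corpora described above can be sketched as follows. The field names and values are hypothetical, since the embodiments do not fix a concrete schema; what matters is only the presence or absence of the correct-answer label.

```python
# Hypothetical record shapes for the two training corpora described above.
# All field names and values are illustrative assumptions.

supervised_record = {            # one row of the second supervised data 440A
    "residual_error": {"count": 3, "category": "boundary", "location": "parser.c"},
    "detected_risk": "schedule slip in integration test",
    "countermeasure": "add regression tests for boundary values",
    "label_effective": True,     # correct-answer label: did the countermeasure work?
}

unsupervised_record = {          # one row of the second unsupervised data 450A
    "residual_error": {"count": 1, "category": "state transition", "location": "fsm.c"},
    "candidate_risk": "deadlock under rare event ordering",
    "candidate_countermeasure": "review the state transition diagram",
    # no effectiveness label: this is what the model must infer
}

def is_labeled(record):
    """Distinguish the two corpora by the presence of the correct-answer label."""
    return "label_effective" in record

assert is_labeled(supervised_record)
assert not is_labeled(unsupervised_record)
```

The distinction drives the semi-supervised construction of the project diagnosis model 800: labeled records seed the initial model, while unlabeled records are used to refine it.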
 Through the functional configuration of the processor 10 of the test apparatus 1, which constructs and operates the error prediction model, a model capable of predicting the error information expected in the processes after the system test (test cases, whether errors are detected, and whether errors leak into later processes), namely the "error prediction model", is constructed, and the test is carried out.
 From the residual error prediction information (number of cases, classification, occurrence location, and the like) based on the system test results after the test is carried out, the project diagnosis model 800 is created by a project diagnosis model learning unit 790.
 FIG. 13 is a block diagram showing the functional configuration of the processor 10 and the input/output data of each functional component, together with a configuration example of the project diagnosis model learning unit 790 for constructing the project diagnosis model 800.
 The structured data integration unit 51 integrates the data from the test record DB 301, the post-shipment defect DB 302, the test technology DB 303, the domain knowledge DB 304, the project DB 305, the residual error/risk/effect information DB 400, the risk register DB 410, the design technology DB 420, and the traceability information DB 430 of a plurality of existing projects into a single set of structured data.
 The second supervised data creation unit 52, which creates the initial teacher data, creates the second supervised data 440A, i.e. the initial teacher data, using data in which residual error prediction information (number of cases, classification, occurrence location, and the like) is given a correct-answer label indicating what risks were detected and whether the countermeasure plans implemented against those risks were effective.
 The unstructured data integration unit 54 integrates the document information 308A, which is unstructured data about the software product (such as required specifications, source code, test documents, sequence diagrams, state transition diagrams, specification change information, and static/dynamic analysis results).
 The second unstructured data analysis unit 55 analyzes the integrated unstructured data, i.e. the document information 308A, analyzes its relevance to the second supervised data 440A, i.e. the initial teacher data, and generates the second unsupervised data 450A.
 The characteristic element extraction unit 56 extracts, from the residual error prediction information (number of cases, classification, occurrence location, and the like) in the structured data and the unstructured data, the characteristic elements (for example, the person in charge of coding, the reuse rate, and the requirement change rate) used to output related risks and effective countermeasure plans.
 The unstructured data conversion unit 57 converts the analysis results of the unstructured data and the extracted characteristic elements into structured data with regularity (for example, an XML file).
 The supervised data input unit 53a inputs the second supervised data 440A into the learning algorithm for the project diagnosis model 800.
 The verification evaluation criterion extraction unit 58 extracts the verification criteria that determine how the output is evaluated (fed back) depending on how relevant the risks and how effective the countermeasure plans produced for the residual error prediction information (number of cases, classification, occurrence location, and the like) are.
 The model creation unit 59 constructs the project diagnosis model 800 based on the second supervised data 440A input into the learning algorithm.
 The unsupervised data input unit 53b inputs into the project diagnosis model 800 the second unsupervised data 450A, for which it has not yet been evaluated how relevant the risks and how effective the countermeasure plans produced for the residual error prediction information (number of cases, classification, occurrence location, and the like) are.
 The model re-creation unit 61 reconstructs the error prediction model 32 based on the input second unsupervised data 450A.
 The verification evaluation output unit 62 produces an output that actually evaluates (feeds back) how relevant the risks and how effective the countermeasure plans were with respect to the residual error prediction information (number of cases, classification, occurrence location, and the like).
 Various kinds of information data may be needed to construct and train the project diagnosis model 800. Examples of the information data that may be required for the construction and training of the project diagnosis model 800 are shown below.
Figure JPOXMLDOC01-appb-T000001
2. Operation
 Next, the operation of the test apparatus 1 and the project diagnosis function device 700 according to the present embodiment will be described with reference to the flowchart shown in FIG. 14. The project diagnosis model 800 is constructed using a semi-supervised learning algorithm.
 After the start (S500), the data in the databases storing the execution results of a plurality of existing projects (the test record DB 301, the post-shipment defect DB 302, the test technology DB 303, the domain knowledge DB 304, the residual error/risk/effect information DB 400, the risk register DB 410, the design technology DB 420, and the traceability information DB 430) are integrated, and the correspondence between the residual error prediction information (number of cases, classification, occurrence location, and the like) and the data on related risks and effective countermeasure plans is organized (S501).
 From the correspondence in the integrated data between the residual error prediction information (number of cases, classification, occurrence location, and the like) as input and the related risks and effective countermeasure plans as output, the second supervised data 440A, i.e. the supervised data for initial learning, is created (S502).
 Meanwhile, for each identical model type, unstructured data such as specifications, source code, and static/dynamic analysis results are stored in a NoSQL database (S503).
 Using the contents described in the second supervised data 440A, i.e. the supervised data for initial learning, as search keywords, text analysis is performed on the unstructured data including design documents, test specifications, source code, specification change information, and static/dynamic analysis results (S504).
 From the text analysis results, the similarity to the second supervised data 440A, i.e. the supervised data for initial learning, is analyzed, and the features of the responses that actually required countermeasures are extracted, such as where in the specifications the content of the residual error is described, whether there is a possibility of influence from a specification change, and correlations between model types (S505).
 The structured data and the unstructured data are mapped using the traceability information so that they are consistent with respect to the extracted features (S506).
 The created second supervised data 440A, for which the residual error information (predicted number of cases, function and source location of the predicted residual errors, and residual error types), the risks associated with the residual error information, and the effects of the risk countermeasures are known, is input (S507).
 The verification evaluation criteria for the effects to be fed back are extracted (S508).
 Using the learning algorithm for the second supervised data 440A, the project diagnosis model 800, which predicts the occurrence of related risks and countermeasure plans for them based on the residual error information, is generated (S509).
 Unsupervised data (the second unsupervised data 450A), for which the residual error information (predicted number of cases, function and source location of the predicted residual errors, and residual error types), the risks associated with the residual error information, and the effects of the implemented risk countermeasures are unknown, is input (S510).
 The features of the input unsupervised data (the second unsupervised data 450A) are extracted and, including the information from the analysis results of the unstructured data, the project diagnosis model 800, which predicts the residual error information (predicted number of cases, function and source location of the predicted residual errors, and residual error types), the risks associated with the residual error information, and the risk countermeasures, is generated using the learning algorithm for unsupervised data (S511).
 By predicting the regularity of risks and the effects of risk countermeasures even from unknown data such as test specifications, test results, and defect information for which the residual error information (predicted number of cases, function and source location of the predicted residual errors, and residual error types), the risks associated with the residual error information, and the effects of the risk countermeasures are not recorded, and by evaluating those predictions, the error prediction model 32 is reconstructed (S512).
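Steps S504 and S505 amount to a keyword-based similarity search over the unstructured documents. A minimal sketch, assuming whitespace tokenization and Jaccard overlap as stand-ins for the morphological analysis and similarity measure a real implementation would use; the document names and contents are invented for illustration.

```python
# Hedged sketch of the keyword search in S504-S505: use the contents of the
# supervised data as search terms over unstructured documents, rank by overlap.

def tokens(text):
    """Crude tokenizer: lowercase whitespace split (a simplifying assumption)."""
    return set(text.lower().split())

def similarity(query, document):
    """Jaccard overlap between two token sets."""
    q, d = tokens(query), tokens(document)
    return len(q & d) / len(q | d) if q | d else 0.0

docs = {
    "spec.txt": "boundary value of the input buffer shall be 255",
    "design.txt": "state transition diagram of the communication module",
}
query = "residual error at input buffer boundary value"  # content from 440A
best = max(docs, key=lambda name: similarity(query, docs[name]))
```

Here the residual-error description matches the specification fragment rather than the design fragment, which is the kind of "where in the specifications the residual error is described" linkage that S505 extracts as a feature.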
3. Summary
 The development support device according to the present embodiment is a development support device including the test apparatus 1 according to Embodiment 1 or 2. The development support device includes the project diagnosis model 800, which presents related risks and countermeasures against them based on the residual error prediction information remaining at the completion of the test. The development support device further includes the project diagnosis model learning unit 790, which trains the project diagnosis model 800 based on the residual error prediction information remaining at the completion of the test.
 The project diagnosis model learning unit 790 further extracts the characteristic elements of the responses that actually required countermeasures, by analyzing, based on the text analysis results of the unstructured data, the similarity between the unstructured data and the supervised data for which the risks of the residual errors and the countermeasures against them are clear, and by utilizing the traceability information.
 The project diagnosis model learning unit 790 further includes the unstructured data conversion unit 57, which converts the text analysis results of the unstructured data and the extracted characteristic elements into structured data with regularity.
 In this way, the development support device can efficiently present the optimum countermeasure plans according to the risks of each project diagnosis after the automatic test execution is completed.
Embodiment 4.
 The development support device according to Embodiment 4 operates the project diagnosis model 800 constructed in Embodiment 3 after the completion of the test according to Embodiment 1 or 2, in which the device under test and the test apparatus are connected. Based on the residual error prediction information (number of cases, classification, occurrence location) latent in the object under test, it automatically extracts related risks and effective countermeasures against them, and presents a report.
1. Configuration
 FIG. 15 is a block diagram showing a configuration example of the test apparatus 1 and the project diagnosis function device 700 according to the present embodiment, i.e. the development support device. For a device under test that operates based on communication between devices, the development support device according to the present embodiment automatically performs risk analysis based on the residual error prediction information (number of cases, classification, occurrence location) after the test execution is completed, and presents countermeasure plans. That is, after the test is completed, the development support device receives, via a risk priority input unit 510, the project, functions, and modules to be diagnosed, and sets the degree of emphasis for each risk (QCDRS).
 A test result data storage unit 520 loads the test record information of the project for which the test has been completed. From the loaded test record information, a residual error prediction information acquisition unit 530 extracts residual error prediction information (number of cases, classification, occurrence location) 540 at the time the test was completed.
 The project diagnosis model 800 takes as input the residual error prediction information (number of cases, classification, occurrence location) at the time the test was completed, and outputs the risk and countermeasure plan information 550 related to the residual error prediction information.
 A risk evaluation unit 560 evaluates the extracted risks based on their degree of impact and probability of occurrence.
 A priority adjustment unit 570 adjusts the priority of the extracted risks, reflecting the degree of QCDRS emphasis set via the risk priority input unit 510.
 A countermeasure item selection unit 580 selects countermeasure plans based on the adjusted risks.
 A report creation unit 590 performs risk analysis based on the residual error prediction information (number of cases, classification, occurrence location) and creates countermeasure plans, each in report form.
 A residual error/risk information display unit 600 displays the report created by the report creation unit 590 on the screen of the display unit 40. The residual error prediction information (number of cases, classification, occurrence location) is displayed as a graph of prediction information based on the record of error occurrences for each development process. A countermeasure plan report 610 is registered in the residual error/risk/effect information DB 400 together with the residual errors and risks.
 A risk countermeasure effect determination input unit 615 feeds the degree of effectiveness of the applied risk countermeasure plans back into the residual error/risk/effect information DB 400. The fed-back degree of effectiveness of the applied risk countermeasure plans updates the project diagnosis model 800 via the project diagnosis model learning unit 790.
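The chain from the risk evaluation unit 560 through the priority adjustment unit 570 to the countermeasure item selection unit 580 can be illustrated as follows. The QCDRS category keys, the weight values, and the example risks are assumptions made for illustration; the embodiments do not specify a concrete scoring formula.

```python
# Illustrative scoring sketch: base score = occurrence probability x impact
# (cf. unit 560), re-weighted by the user-supplied QCDRS emphasis (cf. units
# 510 and 570), then ranked as input to countermeasure selection (cf. unit 580).

qcdrs_weights = {"Q": 2.0, "C": 1.0, "D": 1.5, "R": 1.0, "S": 1.0}  # assumed values

risks = [
    {"name": "latent boundary error", "category": "Q", "probability": 0.6, "impact": 4},
    {"name": "schedule slip",         "category": "D", "probability": 0.3, "impact": 5},
    {"name": "cost overrun",          "category": "C", "probability": 0.5, "impact": 2},
]

def adjusted_priority(risk):
    base = risk["probability"] * risk["impact"]      # risk evaluation
    return base * qcdrs_weights[risk["category"]]    # priority adjustment

ranked = sorted(risks, key=adjusted_priority, reverse=True)
```

With quality (Q) emphasized, the latent boundary error outranks the schedule and cost risks even though its raw impact is not the highest, which is the effect the QCDRS emphasis setting is meant to achieve.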
2. Operation
 Next, the operation of the project diagnosis function device 700 according to the present embodiment will be described with reference to the flowcharts shown in FIGS. 16a and 16b. In operating the project diagnosis model 800, the project diagnosis function device 700 extracts risks and countermeasure plans against them from the residual error information obtained after the exploratory automatic test is performed, and automatically presents the results according to the priority the user assigns to each risk. Countermeasures against residual errors can therefore be implemented efficiently.
 After the start (S700), the test results of the exploratory automatic test of the project based on Embodiment 1 or 2 are loaded into the test result data storage unit 520 and saved in the test record DB 301 (S701).
 The residual error prediction information 540 is acquired from the result of applying the error prediction model 32 based on the test apparatus 1 according to Embodiment 1 or 2 (S702).
 The number, types, and occurrence locations of the residual error prediction information 540 are displayed as a graph (S703).
 The project diagnosis model learning unit 790 integrates the following data of the project (S704):
・Residual error/risk/effect information DB 400
・Test record DB 301
・Project DB 305
・Post-shipment defect DB 302
・Risk register DB 410
・Design technology DB 420
・Traceability information DB 430
 The project diagnosis model learning unit 790 creates the input data for the project diagnosis model 800 from the integrated data generated in step S704 (S705).
 The project diagnosis model learning unit 790 performs text analysis including unstructured data such as the design documents, test specifications, source code, and specification change information of the project (S706).
 From the text analysis results produced by the project diagnosis model learning unit 790, the features of the residual errors of the project are analyzed, and information such as whether the errors correspond to content described in the specifications, whether there is a possibility of influence from a specification change, and correlations between model types is output as structured data such as an XML file (S707).
 The data of the residual error prediction information 540 based on the test results of the project is input into the project diagnosis model 800 (S708).
 The related risks and the countermeasure plan information 550 for them are extracted by the project diagnosis model 800 from the error information predicted to remain in the project (S709).
 When it is necessary to adjust the priorities of which risks of the project to emphasize and to what degree, parameters such as QCDRS are input via the risk priority input unit 510 (S710).
 The risk evaluation unit 560 analyzes the probability of occurrence and the degree of impact of the predicted risks and evaluates them (S711).
 The priority adjustment unit 570 determines whether the priorities need to be adjusted (S712).
 The priority adjustment unit 570 determines whether the targets need to be narrowed down (S713).
 If the targets need to be narrowed down, the priority adjustment unit 570 extracts only the relevant error types and locations (S714).
 The priority adjustment unit 570 determines whether the target risk is a known unknown (S715).
 If the target risk is a known unknown, the priority adjustment unit 570 loads the contingency plan from the risk register DB 410 and uses it as the countermeasure plan (S716).
 The priority adjustment unit 570 determines whether the target risk is an unknown unknown (S717).
 If the target risk is an unknown unknown, the priority adjustment unit 570 loads the countermeasure plans assumed from the risks associated with the residual error information predicted by the project diagnosis model 800 (S718).
 The priority adjustment unit 570 organizes the risks of the remaining predicted errors and the countermeasures against them by priority, and the countermeasure item selection unit 580 selects the countermeasure items (S719).
 The report creation unit 590 creates a report on the risks and the countermeasures against them from the QCDRS perspective (S720).
 After the countermeasure plans are implemented, the risk countermeasure effect determination input unit 615 inputs the determination of the effectiveness of the risk countermeasures (S721).
 The update information of the test record DB 301 (the executed test items, their residual error prediction information, the risks, the countermeasures against them, and their degree of effectiveness) is fed back to the project diagnosis model 800 (S722).
3. Summary
 In addition to being the development support device according to Embodiment 3, the development support device according to the present embodiment includes the priority adjustment unit 570, which determines whether the risks output by the project diagnosis model 800 from the errors predicted to remain are known or unknown, then judges their priority, and presents the countermeasure items.
 It also includes a risk countermeasure effect determination input unit 615 that trains the project diagnosis model 800 by inputting, based on information about existing projects, how effective the countermeasure plans based on the related risks were for the residual error prediction information remaining at the completion of testing in each project.
 In this way, after the automatic tests have been completed, the development support apparatus can efficiently present the optimal countermeasure plans according to the risk priorities of each project diagnosis.
Other Embodiments
 The embodiments above have been described as examples of the technique of the present invention. The technique, however, is not limited to them and is also applicable to embodiments in which modifications, substitutions, additions, omissions, and the like are made as appropriate. The constituent elements described in the above embodiments may also be combined to form new embodiments.
 A project in the present invention refers to a development project for a product (device) or system involving software development, but is not limited to this. For example, a project may be a development project for a specific function or module of a product (device) involving software development.
 The risk evaluation unit 107 according to Embodiments 1 and 2 adds, to each test case number in the error prediction information 312, a risk evaluation value obtained by multiplying the likelihood that an error will be detected by the impact of that error, but is not limited to this (S203 in FIG. 7). For example, the test apparatus 1 may store repair history information for errors, and the risk evaluation unit 107 may assign a higher risk evaluation value to a test case with repair history information than to a test case without it.
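A minimal sketch of this risk evaluation value is shown below; the function name and the `history_weight` parameter are illustrative assumptions, not values specified in the embodiments:

```python
def risk_evaluation_value(detect_likelihood, impact, has_repair_history=False,
                          history_weight=1.5):
    """Risk evaluation value as likelihood x impact (cf. S203).

    A test case with past repair history is weighted higher, reflecting the
    variation described above; history_weight is an illustrative parameter.
    """
    value = detect_likelihood * impact
    if has_repair_history:
        value *= history_weight
    return value
```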
 The test item generation unit 108 according to Embodiments 1 and 2 selects the test cases to be executed according to a user operation, but is not limited to this (S204 in FIG. 7). For example, the test item generation unit 108 may automatically select test cases with risk evaluation values at or above a predetermined value as the test cases to be executed. Alternatively, it may automatically select test cases corresponding to a predetermined test viewpoint or to important test items as the test cases to be executed. The predetermined risk evaluation value, test viewpoint, and important test items may, for example, be set in advance by a user operation.
 The test item generation unit 108 according to Embodiments 1 and 2 changes the execution order of the test cases according to a user operation, but is not limited to this (S204 in FIG. 7). For example, the test item generation unit 108 may reorder the test cases in descending order of risk evaluation value.
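The two automatic variations above (threshold-based selection and risk-descending ordering) can be sketched together as follows; the pair representation of a test case is an assumption for illustration:

```python
def select_and_order(test_cases, threshold):
    """Keep test cases whose risk evaluation value is at or above the
    threshold, then order them by descending risk evaluation value.

    test_cases: list of (case_id, risk_evaluation_value) pairs
    """
    selected = [tc for tc in test_cases if tc[1] >= threshold]
    return sorted(selected, key=lambda tc: tc[1], reverse=True)
```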
 The learning unit 104 according to Embodiments 1 and 2 trains the error prediction model 32 by the bootstrap method, but is not limited to this. The learning unit 104 may train the error prediction model 32 by another known method, such as deep learning. Similarly, the project diagnosis model 800 according to Embodiments 3 and 4 can be realized by a deep neural network (deep learning machine), but may also be realized by other artificial intelligence techniques.
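As a hedged illustration of bootstrap-style training (a bagging sketch under assumed interfaces, not the actual training procedure of the learning unit 104), one can train several models on resamples drawn with replacement and average their predictions:

```python
import random

def bootstrap_train(samples, train_fn, n_models=10, seed=0):
    """Bagging-style sketch: train n models on bootstrap resamples of the
    training data and aggregate their predictions by averaging.

    samples: training examples; train_fn: callable that fits one model on a
    resample and returns a predict(x) callable. Both are illustrative.
    """
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # draw a resample of the same size, with replacement
        resample = [rng.choice(samples) for _ in samples]
        models.append(train_fn(resample))
    def predict(x):
        return sum(m(x) for m in models) / n_models
    return predict
```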
 The processor 10 according to Embodiments 1 and 2 may further include a test item adjustment unit that, according to user operations, changes the test items generated by the test item generation unit 108 or changes their order. In this case, the user can, via the operation unit 50, modify the test item information 313 so as to cover, for example, the test items for highly important modules and functions, or reorder the test items so as to prioritize tests of modules and functions shared with existing projects.
1 Test apparatus
10 Processor
20 Main storage unit
30 Auxiliary storage unit
31 Machine learning DB
32 Error prediction model
40 Display unit
50 Operation unit
60 Communication unit
101 Data structuring unit
102 Supervised data generation unit
103 Unstructured data analysis unit
104 Learning unit
105 Input data generation unit
106 Error prediction information acquisition unit
107 Risk evaluation unit
108 Test item generation unit
109 Test case information acquisition/generation unit
110 Test scenario generation unit
111 Test scenario execution unit
112 Test result comparison unit
113 Sequence diagram generation unit
114 Test result comparison information input unit
301 Test result information DB
302 Post-shipment defect information DB
303 Test technology DB
304 Domain knowledge information DB
305 Project information DB
306 Test case DB
307 Test scenario DB
308 Document DB
309 Structured DB
310 Supervised DB
311 Unsupervised DB
400 Residual error/risk/effect information DB
410 Risk register DB
420 Design technology DB
430 Traceability information DB
620 Machine learning DB
700 Project diagnosis function device
790 Project diagnosis model learning unit
800 Project diagnosis model
51 Structured data integration unit
52 Second supervised data creation unit
53a Supervised data input unit
53b Unsupervised data input unit
54 Unstructured data integration unit
55 Second unstructured data analysis unit
56 Feature element extraction unit
57 Unstructured data conversion unit
58 Verification evaluation criterion extraction unit
59 Model creation unit
61 Model re-creation unit
62 Verification evaluation output unit
510 Risk emphasis input unit
520 Test result data storage unit
530 Residual error prediction information acquisition unit
540 Residual error prediction information
550 Risk and countermeasure plan information
560 Risk evaluation unit
570 Priority adjustment unit
580 Countermeasure item selection unit
590 Report creation unit
600 Residual error/risk information display unit
610 Countermeasure plan report
615 Risk countermeasure effect determination input unit

Claims (14)

  1.  A test apparatus comprising a processing unit that generates, from information about a project under test, a test scenario for performing tests in the project,
     wherein the processing unit includes:
     a learning unit that trains an error prediction model, which outputs error prediction information including information indicating the likelihood that an input test case will detect an error, on whether each of the test cases detected an error in the tests of each project, based on at least information about existing projects and information about a plurality of test cases defining the individual processes in the test scenario, both stored in a machine learning database that stores various information used for machine learning;
     an error prediction information acquisition unit that inputs the information about the project under test into the error prediction model and acquires the error prediction information corresponding to each of the test cases;
     a risk evaluation unit that calculates a risk evaluation value indicating the likelihood that an error will be detected by the test case corresponding to the acquired error prediction information and the impact of that error, and adds the value to the error prediction information;
     a test item generation unit that generates, based on the error prediction information to which the risk evaluation value has been added, test item information indicating the test items to be performed;
     a test case information acquisition/generation unit that acquires, from a test case database included in the machine learning database, test case information corresponding to the test items included in the test item information; and
     a test scenario generation unit that generates the test scenario based on the acquired test case information.
  2.  The test apparatus according to claim 1, wherein the test item generation unit generates the test item information for all test cases whose risk evaluation value is higher than a predetermined threshold.
  3.  The test apparatus according to claim 1, wherein the test item generation unit generates the test item information for the important test cases among the test cases whose risk evaluation value is higher than a predetermined threshold.
  4.  The test apparatus according to any one of claims 1 to 3, wherein the test case information acquisition/generation unit generates test case information when it determines that the test case database contains no test case information indicating the content of the test indicated by each piece of the test item information.
  5.  The test apparatus according to any one of claims 1 to 4, wherein the processing unit further includes a test scenario execution unit that executes a test of a target device in accordance with the test scenario.
  6.  The test apparatus according to claim 5, wherein the processing unit further includes a test result comparison information input unit that trains the error prediction model based on the result of executing the test of the target device in accordance with the test scenario.
  7.  A development support apparatus including the test apparatus according to claim 1, comprising:
     a project diagnosis model that presents, from residual error prediction information remaining at the completion of testing, the related risks and their countermeasures; and
     a project diagnosis model learning unit that trains the project diagnosis model based on the residual error prediction information remaining at the completion of testing.
  8.  The development support apparatus according to claim 7, wherein the project diagnosis model learning unit further extracts the feature elements of responses that actually required countermeasures, by analyzing, based on text analysis results of unstructured data, the similarity between the unstructured data and supervised data for which the risks of residual errors and their countermeasures are clear, and by utilizing traceability information; and
     the project diagnosis model learning unit includes an unstructured data conversion unit that converts the text analysis results of the unstructured data and the extracted feature elements into structured data having regularity.
  9.  The development support apparatus according to claim 8, further comprising a priority adjustment unit that determines whether each risk output by the project diagnosis model for errors predicted to remain is known or unknown, and then judges its priority and presents countermeasure items.
  10.  The development support apparatus according to claim 9, further comprising a risk countermeasure effect determination input unit that trains the project diagnosis model by inputting, based on information about existing projects, how effective the countermeasure plans based on the related risks were for the residual error prediction information remaining at the completion of testing in each project.
  11.  A test system communicable with a test case database that is included in a machine learning database storing various information used for machine learning and that stores a plurality of pieces of test case information defining the individual processes in a test scenario,
     the test system comprising a processing unit that generates, from information about a project under test, a test scenario for performing tests in the project,
     wherein the processing unit includes:
     a learning unit that trains an error prediction model, which outputs error prediction information including information indicating the likelihood that an input test case will detect an error, on whether each of the test cases detected an error in the tests of each project, based on information about existing projects;
     an error prediction information acquisition unit that inputs the information about the project under test into the error prediction model and acquires the error prediction information corresponding to each of the test cases;
     a risk evaluation unit that calculates a risk evaluation value indicating the likelihood that an error will be detected by the test case corresponding to the acquired error prediction information and the impact of that error, and adds the value to the error prediction information;
     a test item generation unit that generates, based on the error prediction information to which the risk evaluation value has been added, test item information indicating the test items to be performed;
     a test case information acquisition/generation unit that acquires, from the test case database, test case information corresponding to the test items included in the test item information; and
     a test scenario generation unit that generates the test scenario based on the acquired test case information.
  12.  A development support system including the test system according to claim 11, comprising:
     a project diagnosis model that presents, from residual error prediction information remaining at the completion of testing, the related risks and their countermeasures; and
     a project diagnosis model learning unit that trains the project diagnosis model based on the residual error prediction information remaining at the completion of testing.
  13.  A computer program for testing that generates, from information about a project under test, a test scenario for performing tests in the project, the program comprising the steps of:
     setting a test case database that is included in a machine learning database storing various information used for machine learning and that stores a plurality of pieces of test case information defining the individual processes in a test scenario;
     training an error prediction model, which outputs error prediction information including information indicating the likelihood that an input test case will detect an error, on whether each of the test cases detected an error in the tests of each project, based on information about existing projects;
     inputting the information about the project under test into the error prediction model and acquiring the error prediction information corresponding to each of the test cases;
     calculating a risk evaluation value indicating the likelihood that an error will be detected by the test case corresponding to the acquired error prediction information and the impact of that error, and adding the value to the error prediction information;
     generating, based on the error prediction information to which the risk evaluation value has been added, test item information indicating the test items to be performed;
     acquiring, from the test case database, test case information corresponding to the test items included in the test item information; and
     generating the test scenario based on the acquired test case information.
  14.  A computer program for development support, including the computer program for testing according to claim 13, further comprising the step of training a project diagnosis model, which presents related risks and their countermeasures from residual error prediction information remaining at the completion of testing, based on the residual error prediction information remaining at the completion of testing.
PCT/JP2019/040551 2018-12-27 2019-10-16 Test device, and development support device WO2020137096A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020562382A JP7034334B2 (en) 2018-12-27 2019-10-16 Test equipment and development support equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-244253 2018-12-27
JP2018244253 2018-12-27

Publications (1)

Publication Number Publication Date
WO2020137096A1 true WO2020137096A1 (en) 2020-07-02

Family

ID=71126258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/040551 WO2020137096A1 (en) 2018-12-27 2019-10-16 Test device, and development support device

Country Status (2)

Country Link
JP (1) JP7034334B2 (en)
WO (1) WO2020137096A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210406144A1 (en) * 2020-06-30 2021-12-30 Tektronix, Inc. Test and measurement system for analyzing devices under test
CN113902296A (en) * 2021-10-09 2022-01-07 鹤山市民强五金机电有限公司 Intelligent test method and system for single-phase asynchronous motor
CN117651289A (en) * 2024-01-26 2024-03-05 中国人民解放军军事科学院系统工程研究院 Data processing method and device for radio communication equipment test

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004125670A (en) * 2002-10-03 2004-04-22 Toshiba Corp Test pattern selection apparatus, test pattern selection means, and test pattern selecting program
JP2013125420A (en) * 2011-12-14 2013-06-24 Shift Inc Apparatus and program for creating test specification of computer program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011257947A (en) * 2010-06-08 2011-12-22 Clarion Co Ltd Method, apparatus and program for preparing test plan
US9619363B1 (en) * 2015-09-25 2017-04-11 International Business Machines Corporation Predicting software product quality

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210406144A1 (en) * 2020-06-30 2021-12-30 Tektronix, Inc. Test and measurement system for analyzing devices under test
US11782809B2 (en) * 2020-06-30 2023-10-10 Tektronix, Inc. Test and measurement system for analyzing devices under test
CN113902296A (en) * 2021-10-09 2022-01-07 鹤山市民强五金机电有限公司 Intelligent test method and system for single-phase asynchronous motor
CN117651289A (en) * 2024-01-26 2024-03-05 中国人民解放军军事科学院系统工程研究院 Data processing method and device for radio communication equipment test
CN117651289B (en) * 2024-01-26 2024-04-05 中国人民解放军军事科学院系统工程研究院 Data processing method and device for radio communication equipment test

Also Published As

Publication number Publication date
JP7034334B2 (en) 2022-03-11
JPWO2020137096A1 (en) 2021-09-09

Similar Documents

Publication Publication Date Title
Xia et al. Collective personalized change classification with multiobjective search
WO2020137096A1 (en) Test device, and development support device
EP3321865A1 (en) Methods and systems for capturing analytic model authoring knowledge
US10839314B2 (en) Automated system for development and deployment of heterogeneous predictive models
US20180137424A1 (en) Methods and systems for identifying gaps in predictive model ontology
EP3740906A1 (en) Data-driven automatic code review
KR20150046088A (en) Predicting software build errors
Frank et al. A performance evaluation framework for building fault detection and diagnosis algorithms
CN110909758A (en) Computer-readable recording medium, learning method, and learning apparatus
Boubekeur et al. Automatic assessment of students' software models using a simple heuristic and machine learning
US20210398020A1 (en) Machine learning model training checkpoints
JPWO2018079225A1 (en) Automatic prediction system, automatic prediction method, and automatic prediction program
Thomas et al. Real-time prediction of severe influenza epidemics using Extreme Value Statistics
US20210286706A1 (en) Graph-based method for inductive bug localization
US20210279608A1 (en) Prediction rationale analysis apparatus and prediction rationale analysis method
Michael et al. Quantifying the value of surveillance data for improving model predictions of lymphatic filariasis elimination
CN111858386A (en) Data testing method and device, computer equipment and storage medium
EP3743826A1 (en) Autonomous hybrid analytics modeling platform
Çağıltay et al. Abstract conceptual database model approach
US20240127214A1 (en) Systems and methods for improving machine learning models
US20230297880A1 (en) Cognitive advisory agent
CN111352840B (en) Online behavior risk assessment method, device, equipment and readable storage medium
US20230029851A1 (en) Machine learning model generating system, machine learning model generating method
WO2022044221A1 (en) Information processing device, information processing method, and recording medium
Paterson Improvements to Test Case Prioritisation considering Efficiency and Effectiveness on Real Faults

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19903091

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020562382

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19903091

Country of ref document: EP

Kind code of ref document: A1