WO2020194000A1 - Method of detecting and removing defects - Google Patents

Method of detecting and removing defects

Info

Publication number
WO2020194000A1
Authority
WO
WIPO (PCT)
Prior art keywords
program product
defect
data
application
code
Application number
PCT/GR2019/000025
Other languages
French (fr)
Inventor
Vaios VAITSIS
Original Assignee
Validata Holdings Limited
Application filed by Validata Holdings Limited filed Critical Validata Holdings Limited
Priority to PCT/GR2019/000025 priority Critical patent/WO2020194000A1/en
Publication of WO2020194000A1 publication Critical patent/WO2020194000A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software

Definitions

  • An application intelligence engine is able to extract and transform the needed information from the production environment into JSON format, with all the client-sensitive data masked.
  • This technology is written in the Microsoft C# programming language and is wrapped in a Windows Service responsible for all application data intelligence ("INPUT DATA").
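  • As a purely illustrative sketch of the extraction-and-masking step just described (the record fields, the mask and the serializer choice are assumptions made for the example, not the implementation of the disclosure), a C# fragment exporting application data to JSON with client-sensitive fields masked might look like this:

    // Illustrative sketch only: serialize application records to JSON while
    // masking client-sensitive fields, as the application intelligence engine
    // is described as doing. All names and values are hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Text.Json;

    var record = new Dictionary<string, object>
    {
        ["bookingId"] = 10234,
        ["customerName"] = "Jane Example",   // sensitive: to be masked
        ["cardNumber"] = "4111111111111111", // sensitive: to be masked
        ["travelDate"] = "2019-03-27"
    };

    var sensitiveFields = new HashSet<string> { "customerName", "cardNumber" };

    // Replace sensitive values with a fixed mask before export.
    var masked = new Dictionary<string, object>();
    foreach (var (key, value) in record)
        masked[key] = sensitiveFields.Contains(key) ? "****" : value;

    string json = JsonSerializer.Serialize(masked,
        new JsonSerializerOptions { WriteIndented = true });
    Console.WriteLine(json); // the JSON payload that would be sent to the backend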
  • Process Intelligence (PI)
  • The computer-assisted method is AI-supported, and the built-in intelligence library drives the user to the next best action/step, with the appropriate justification. This is a real-time interaction between the user and the Process Intelligence engine.
  • The Process Intelligence engine works on the basis of API/JSON calls and requests to the invention's backend and analytic engine, from which it collects in real time the recommendations and suggestions for the process design, updates/amendments and justifications on the best workflow path or next best step/action.
  • This module interfaces with the testing and defect intelligence engines.
  • Data Intelligence (DI)
  • DI takes input from the database and user logs for the production environment and the environment under test. DI acquires the knowledge of data, data types and relations between data. DI can dynamically capture updates by itself if any changes to the database are made. It recommends the best data set to be used in a testing scenario for appropriate test data coverage, against risk and business needs. For this, we use particular algorithms that identify the data "frequent episodes" and suggest what is needed, as sketched below.
  • The disclosure's data intelligence module offers a built-in synthetic data generation solution, which creates the data based on
  • Data Intelligence collects the data through JSON API calls to the invention's backend, where all the data, metadata, associations, monitoring data and application intelligence exist.
  • This module interfaces with the auto-testing and defect-fixing engine.
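  • By way of a toy illustration of the "frequent episodes" idea (the disclosure does not publish its algorithm; this sketch simply counts recurring value combinations in a usage log and ranks them as candidate test data):

    // Hypothetical sketch: find the most frequent field-value combinations
    // ("episodes") in user activity logs and suggest them as test data.
    using System;
    using System.Linq;

    var usageLog = new[]
    {
        (Route: "LON-PAR", Fare: "Economy"),
        (Route: "LON-PAR", Fare: "Economy"),
        (Route: "LON-NYC", Fare: "Business"),
        (Route: "LON-PAR", Fare: "Economy"),
        (Route: "LON-NYC", Fare: "Economy")
    };

    // Count each distinct combination and rank by frequency.
    var episodes = usageLog
        .GroupBy(entry => entry)
        .Select(g => new { Episode = g.Key, Count = g.Count() })
        .OrderByDescending(e => e.Count);

    foreach (var e in episodes)
        Console.WriteLine($"{e.Episode.Route} / {e.Episode.Fare}: seen {e.Count} times");
    // The top combinations would be recommended as high-coverage test data sets.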
  • The Test Case Intelligence (TI) module derives the application business process and data knowledge from the production environment.
  • TI automatically generates the test cases and test data based on this knowledge.
  • This module interfaces with the process, data and defect intelligence engines.
  • Test Case Intelligence collects the data through JSON API calls to the invention's backend, where all the data, metadata,
  • Test cases can be exported either in XML format, so that they can be published to other execution engines, or to an Automation Intelligence platform where they can be further automated and executed, as sketched below.
  • Test cases and test data in the system are mostly in 'read mode', as further changes to the autogenerated objects can occur in the target test management system.
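  • The following is a minimal sketch of such an XML export (the element names and the two sample test cases are invented for the illustration; the disclosure does not specify a schema):

    // Hypothetical sketch: export autogenerated test cases to XML so they can
    // be published to an external execution engine.
    using System;
    using System.Xml.Linq;

    var testCases = new[]
    {
        new { Id = "TC-001", Name = "Create booking", ExpectedResult = "Booking confirmed" },
        new { Id = "TC-002", Name = "Issue ticket",   ExpectedResult = "Ticket printed" }
    };

    var doc = new XDocument(
        new XElement("TestCases",
            from tc in testCases
            select new XElement("TestCase",
                new XAttribute("id", tc.Id),
                new XElement("Name", tc.Name),
                new XElement("ExpectedResult", tc.ExpectedResult))));

    doc.Save("testcases.xml"); // file handed over to the target execution engine
    Console.WriteLine(doc);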
  • The Performance Intelligence (PI) module derives its knowledge from data collected during performance testing and monitoring, providing insights and information on issues related to performance degradation and other challenges, and suggesting corrective actions on the infrastructure front.
  • The Defect Intelligence (DDI) module derives its knowledge from the defect data, files and logs. DDI automatically generates the Defect
  • DDI communicates and collects the information via API/JSON calls, in real time. It uses a built-in, continuously learning defect clustering method which auto-corrects the defect information and root causes, and associates each defect with its possible peers for analyzing and recommending the time to fix, as sketched below.
  • DDI consumes data from flat files, images, XML and other formats. This module interfaces with
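  • A deliberately simplified sketch of the clustering and time-to-fix idea follows (the disclosure describes its clustering only as continuously learning; here similarity is reduced to keyword overlap between defect summaries, and the estimate is just the average fix time of the closest peers):

    // Hypothetical sketch: group a new defect with its most similar past
    // defects and estimate the time to fix from those peers.
    using System;
    using System.Linq;

    var history = new[]
    {
        (Summary: "null reference in payment module",   HoursToFix: 6.0),
        (Summary: "payment timeout on card validation", HoursToFix: 10.0),
        (Summary: "label truncated on booking screen",  HoursToFix: 2.0)
    };

    string newDefect = "null reference thrown by payment validation";

    // Similarity = number of words shared by two defect summaries.
    static int Similarity(string a, string b) =>
        a.Split(' ').Intersect(b.Split(' ')).Count();

    var peers = history
        .Select(d => new { d.Summary, d.HoursToFix, Score = Similarity(newDefect, d.Summary) })
        .Where(d => d.Score > 0)
        .OrderByDescending(d => d.Score)
        .Take(2)
        .ToList();

    double estimate = peers.Average(d => d.HoursToFix);
    Console.WriteLine($"Estimated time to fix: {estimate} hours, based on {peers.Count} similar defects");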
  • Automation Intelligence derives its information from the analytic engine, test intelligence and data intelligence. It helps users automatically execute the necessary test cases over various testing environments in a performant manner, collects the results and allows creation and management of defects within the same repository.
  • Automation Intelligence allows the user to update the
  • Communication between Automation Intelligence and the Analytic Engine, Test Intelligence and Data Intelligence takes place via XML file exchange.
  • The Analytic Engine is based on an amalgamation of algorithms that generate recommendations and suggestions on test coverage and defect analytics. Each recommendation has its own support.
  • The AE collects the data from the application intelligence module and processes them in an auto-learning way that adjusts the results dynamically, based on new inputs and changes.
  • The monitoring and production data, along with the knowledge base of the metadata and their associations, allow the Analytic Engine to auto-detect in real time the configuration changes and the latest user frequent episodes, and to alert the user to those updates via immediate coverage recommendations and further best actions to be added or removed.
  • The clusters are the result of a continuous
  • Test coverage and defect analytics work through a bug-hunting algorithm, which is auto-learning as well and ensures all
  • Figure 2 shows an overall model of means for carrying out the methods of the disclosure.
  • a program product believed to have possible defects may be referred to as the application under test (AUT) and is represented in Figure 2 by module 12.
  • This is a computer program product which is adapted to run with an objective purpose on a computer system and contains interface elements that interface with an environment outside the computer system, data elements that contain data with which the program product operates, and process elements that provide a logical process route to achievement of the said purpose by operating with the said data and interfacing with the said environment.
  • Interfaces may be with the environment external to the specific computer system on which the program product runs, or with the internal system environment. Interfaces with the external environment include input devices such as:
  • a keyboard, pointing device (mouse, trackball) and microphone;
  • other input devices with user involvement, such as digital camera input;
  • storage medium readers such as removable disk drives, card readers, bar-code scanners and currency note acceptance slots;
  • dedicated input devices such as transducers and counters of various kinds, as well as network connections to other data processing systems.
  • Interface elements also include output devices such as video displays, printers for general and dedicated purposes, audio devices for general and dedicated purposes,
  • the interfaces with these input and output devices are important sites for controlling the software product and determining its response to stimuli during the testing process.
  • For example, an application under test may be provided with a set of inputs that simulate an expected operating condition; a specific response is expected, and the actual outputs are monitored and compared with the expected response in order to detect any performance error.
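  • A minimal sketch of that stimulus/response check is given below (the fare calculation is an invented stand-in for whatever function of the application under test is being exercised, and the expected value is part of the example):

    // Hypothetical sketch: drive the application under test with a known
    // input, then compare the actual output against the expected response.
    using System;

    // Stand-in for a function of the application under test.
    static decimal QuoteFare(string route, int passengers) =>
        (route == "LON-PAR" ? 95m : 250m) * passengers;

    var input = (Route: "LON-PAR", Passengers: 2);
    decimal expected = 190m;

    decimal actual = QuoteFare(input.Route, input.Passengers);

    if (actual == expected)
        Console.WriteLine("PASS");
    else
        Console.WriteLine($"Performance error detected: expected {expected}, got {actual}");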
  • The interfaces may also be software-to-software interfaces for interaction with other software modules running on the system or on external systems. Examples are interfaces with a graphical user interface, application component interfaces, and business rules and database interfaces.
  • the AUT 12 includes a database submodule 14 containing at least some of the data elements of the application.
  • The remaining elements of Figure 2 constitute a model example of development software 10 for performing the methods of the description.
  • The database submodule 14 provides information to a data intelligence (DI) module 18.
  • the process of determining the functionality of the data elements of the product looks at databases maintained in the product and how data is written to the databases, stored in the databases, and accessed from the databases, and may be called data intelligence.
  • The various data types may be analyzed, for example whether integer or floating-point numeric, alphanumeric, date, or logical.
  • The various data structures may be analyzed to determine how the data is represented in its storage. Properties of and relations between data fields may be analyzed, for example data field properties, maximum and minimum allowed values, numbers of rows and columns in spreadsheets, and the logical relations and restrictions between data elements.
  • For example, an expiry date may be entered by a user and then stored in a date field in various formats, or may not be entered but instead calculated before storage, or may not be stored at all but calculated on the fly whenever required.
  • The DI module 18 also obtains information directly from the non-database elements of the AUT.
  • The AUT 12 also provides information to, and receives information from, an application intelligence (AI) module 16 which analyzes the executing AUT to determine the functionality of the interface elements of the application.
  • the general process of determining the functionality of the interface elements of the program product comprises monitoring the application components, and any graphical user interface of the product.
  • the process includes an assessment of the available aspects of the front end of the application and may be called application intelligence.
  • These front-end aspects may vary according to the nature of the program product, but may include the user interface generally, user controls, menus, error displays and messages, result or outcome displays and messages, report generation, error logging, error analysis and dynamic defect fixing.
  • The AUT 12 also provides information to, and receives information from, a process intelligence (PI) module 20 which analyzes the executing AUT to determine the functionality of the process elements of the application.
  • the process analyzes the internal logic of the application and may be called application logic or process intelligence.
  • One aspect of this step involves looking at the business processes that the application under test is being called upon to perform. Examples of such business processes may include booking systems, invoice creation, customer assignments and mathematical calculations. An example of this last might include the principles used for calculating rates of interest, the input parameters, the
  • Analysis of the functionality of the interface elements, the data elements and the process elements is performed by an analysis engine (AE) 22, which exchanges data with the AI module 16, the DI module 18 and the PI module 20 respectively.
  • The analysis engine uses artificial intelligence techniques, of kinds known in that art, to build a knowledge base of the design and performance of the AUT. These techniques are discussed further below.
  • The analysis engine 22 also interfaces and exchanges data with a test routine library or test intelligence (TI) module 24.
  • Integration testing is aimed at the interactions between the modules of the AUT and may trace data integrity.
  • System testing is aimed at the functionality of the whole application, to determine whether it works correctly from end to end; in the example of a ticket booking application, correct behavior is tested from the initial request through reservation, payment and ticket issue.
  • Performance testing is aimed at the application under stress and may cover such performance indicators as the maximum number of concurrent users (in the case of an internet web page, for example, where the potential number of users may be practically unlimited),
  • a user of the method is provided with a suitable user interface which includes options for selecting the kind of testing required.
  • The analysis engine 22 is able to derive test sets and test conditions for effectively testing the application under test (AUT) 12. To do this, it uses techniques selected from fuzzy logic set reduction, cause-effect modelling, black box testing with fuzzy networks, automatic GUI (graphical user interface) regression testing using AI planning, test set partitioning techniques, test set generation using a risk-based approach, defect prediction techniques, test set reduction using orthogonal array techniques, three-group software quality
  • A multilingual intelligence (MI) module 26 also interfaces with the AE 22 and provides for translation between equivalent words and phrases in natural language.
  • The AUT may have an English language version in which standard terms, such as the names of weekdays or months, or menu options and error messages, or data field labels, are in English; it may also have, say, a French language version in which the corresponding terms are used and expected in the French language. Any required number of languages may be provided for.
  • The data is exchanged with the AE and translated as required.
  • The different language versions may also be needed for interfacing with other software, such as websites with multiple language options.
  • The MI module may also include the appropriate character sets and keyboard tables for the different language standards.
  • A further important module is an auto-testing and defect-fixing (AT/DF) module 28, which is closely bound with the AE module for information exchange, receives inputs from the AI module 16, the TI module 24 and the MI module 26, and also exchanges data with the PI module 20.
  • This module both runs the tests and repairs the defects that are found, calling on the available information and the analysis procedures of the analysis engine 22.
  • A detection agent runs in the background while the AUT 12 is running. The occurrence of a performance error, or the detection of an impending performance error, triggers a process of identifying the error, storing data representing the error in the computer system memory, and combining information derived from the stored data representing the performance error with information derived from the stored data representing the functionalities of the interface elements, the data elements and the process elements in order to successfully identify a cause of the performance error.
  • Control then passes to the defect-fixing component, which examines the program product code and identifies a program code defect with the cause of the performance error; the defect-fixing component generates repaired program code in which the performance defect is at least mitigated or removed entirely; and the defect-fixing component stores the repaired code as code output (CO) 30. This can be applied immediately to the application under test or saved for substitution into the application at a later time. Thus, the defects can be fixed dynamically in the system.
  • The remaining module shown in Figure 2 is a report generator (RG) 32, which is driven by the AE module 22 and produces test reports and defect fix reports on demand, in whatever format is required, typically through print or electronic text media.
  • Figure 3 illustrates a typical procedure for carrying out the method of the disclosure in a development environment.
  • At Start 40, the process is started by installing the application under test on a computer system which meets all the needs of the application.
  • In addition to an appropriate operating system, there will typically be a keyboard and pointing device for operator input, and a monitor screen on which a graphical user interface is displayed.
  • the operator issues the appropriate command or commands to start the execution of the AUT .
  • the necessary development software 10 for assisting the operator to perform the method of the disclosure has also been installed on the system. This includes all the modules illustrated in Figure 1 and already described.
  • the operator uses keyboard and mouse to issue initial commands, identifying the local or network path to the AUT, which may be in a directory or folder or on an internet web page.
  • Standard system monitoring and information utilities are used to identify the processes which belong to the AUT and the blocks of stored code or files which make it up.
  • The operator is then able to initiate the process of analyzing the executing program product by means of the AI 16, DI 18, PI 20 and AE 22.
  • a knowledge base for the AUT is created, or updated if a previous knowledge base already exists.
  • The knowledge base constitutes a library of information covering program design and implementation down to the smallest component level.
  • a further element of the Start procedure is for the operator to initiate the preparation of test procedures, by selecting a "test preparation" command.
  • The AE 22 and the TI 24 interact and generate the necessary test procedures adapted to the AUT, or update the existing test procedures to reflect changes if the AUT has previously been tested and repaired on the system.
  • the test procedures amount to fixing a set of conditions defining a particular state of the AUT which it is likely to reach, in use; selecting suitable kinds of input to the AUT in that condition; evaluating the appropriate subsequent steps; and identifying an expected outcome.
  • the operator is in a position to control the development software 10 which assists him in carrying out the method of the disclosure on an application under test 12 which is also running on the system.
  • the operator issues a "start testing" command and is presented with a menu from which the type of testing required can be selected. The selection is made at step 50.
  • At step 52, a set of test procedures is loaded and populated with data sets on the basis of information derived from the AUT database 14 by the data intelligence module 18.
  • the next step 54 is to check the configurations of the test sets and of the application to be tested, following which decision 56 determines whether everything is in order to commence the testing procedure. If the configurations are not correct, corrections will be made to the configuration file. Once these corrections are made, the configuration will be rechecked at 54 to confirm that the changed configuration is correct. If the application is ready for testing, the testing is carried out at step 60, where the tests are run through the required number of data sets.
  • Decision 62 asks whether a performance error has been detected by the AT/DF module 28. If no errors are reported during this set of tests, the tests are completed and reports are sent to the report generator 32 in step 64; the process then reaches completion and terminates at step 66.
  • Step 64, in which testing is completed, may instead involve reversion to step 52, where a new set of tests and test data is loaded for the next series of tests to be carried out in steps 54 onward.
  • If a performance error is detected, the defect-fixing component of the AT/DF module 28 is invoked at 70 and the performance error is logged in a defect logging system which records all necessary information about the test run, the test conditions, the expected output and the actual output, among other relevant information.
  • the immediate next step 72 is to avoid an application or system crash, by taking the necessary remedial action.
  • The AT/DF module 28 cooperates with the analysis engine 22, as previously described, to identify a program code defect with the cause of the performance error, and to generate repaired program code in which the identified performance defect is mitigated.
  • the repaired program code is preferably not applied to the AUT until after the current set of tests has been completed.
  • a defect fix report is generated by report generator 32, and at step 78 the AUT can be resumed, optionally with repaired program code applied to the program product to yield a defect-corrected iteration of the AUT for further testing.
  • Applying the repaired program code to the program product is done by replacing the source file which contains the code defect with an updated source file containing an appropriate fix for the code to remove the defect which contributed to the detected performance error.
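  • As a purely illustrative sketch of that substitution step (the file names and the rebuild command are assumptions for the example, not details given in the disclosure):

    // Hypothetical sketch: back up the defective source file, replace it with
    // the repaired version, and rebuild so the defect-corrected iteration can
    // be tested.
    using System;
    using System.Diagnostics;
    using System.IO;

    string defective = "src/BookingCalculator.cs";   // file identified as containing the defect
    string repaired  = "fixes/BookingCalculator.cs"; // repaired code produced by the AT/DF module

    File.Copy(defective, defective + ".bak", overwrite: true); // keep the old version
    File.Copy(repaired, defective, overwrite: true);           // apply the repaired code

    // Rebuild the application so the defect-corrected iteration can be run.
    using var build = Process.Start(new ProcessStartInfo("dotnet", "build"));
    build?.WaitForExit();
    Console.WriteLine(build?.ExitCode == 0 ? "Rebuild succeeded" : "Rebuild failed");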
  • The process of identifying the program code defect involves the AE 22 analyzing the program code of the AUT, looking for specific code errors in the libraries used by the AUT and identifying standard and language-specific errors, while in parallel it analyzes the code for logical defects.
  • After step 78 the process moves back to step 54 with repaired program code.
  • the application and test configurations are rechecked, and if everything is in order
  • a consolidated test report is generated at step 64 listing details of the numbers of the tests, numbers of passes, numbers of failures and the like, together with a traceability report mapping the functional or performance specifications of the AUT to the results, and a defect fix report.
  • Figure 4 represents a method for monitoring the performance of a computer program product in a live environment.
  • the development testing period has been completed and the application has been issued for use. It will apparently be running satisfactorily, but it is likely that defects do exist in the program code which have not yet been detected and fixed.
  • All the elements of the development software illustrated in Figure 2 are still present on the computer system while the application is in full use, but no testing procedure is being run as was done in the development phase illustrated in Figure 3.
  • the detection agent component of the AT/DF module 28 continues to run in the background. This is the situation at Start 80.
  • the detection agent continues to monitor the application for the occurrence of a performance error at 82, and while no error is found the monitoring continues.
  • When an error is found, a repair process is initiated and follows the procedure of steps 70 to 78 of Figure 3.
  • the defect-fixing component of the AT/DF module 28 is invoked at 84; avoidance of a crash occurs at step 86; analysis and repair of the defective code occurs at step 88; the defect fix report is generated at step 90; and the application is resumed at step 92.
  • the method is terminated.
  • the control of the live application may be handed over to a parallel system running on a failover server, while the original application is halted. If necessary, the AT/DF module 28 accesses a remote server and examines the source code for the application in its master copy in order to assist in the analysis and repair operations.
  • After repair, it is desirable to invoke test procedures to test the repaired code at the unit, integration, system and performance levels, in order to verify the robustness of the repair.
  • the repaired code is then applied to the original program product to yield the required defect-corrected iteration of it.
  • The original system on which the application was run is then updated with the new program iteration, which is started running and takes over the activity of the old version of the application running on the failover server on the parallel system. Then the application on the failover server can be stopped and the new build applied and restarted.


Abstract

A method of detecting and removing defects from a computer program product comprises, among others, the steps of: analyzing the executing program product to determine the functionality of the interface elements of the product, the functionality of the data elements of the product and the functionality of the process elements of the product, and storing data representing said respective functionalities; detecting that a performance error has occurred or is about to occur; identifying the performance error and storing data representing the error; combining information derived from all the stored data and identifying a cause of the performance error; examining the program product code and identifying a program code defect with the cause of the performance error; generating repaired program code; and applying said repaired program code to the program product.

Description

METHOD OF DETECTING AND REMOVING DEFECTS
The present invention relates to a method of detecting and removing defects from a computer program product and computer program products applying the method.
Background
Modern computer software can be a highly complex artefact made up of several hundreds of thousands of lines of code, representing data and instructions that, when installed on a general-purpose computer, interact with the hardware and software elements of the computer system to cause the whole to behave in a useful manner as designed by the software engineer.
Computer software can be regarded at many different levels, from an application suite composed of a number of separate applications dedicated to different general tasks, but having a degree of commonality and integration, through the modules of each one of those applications, through smaller elements (objects, libraries, routines to be called) , and so on down to individual lines of code and ultimately the octets and digits stored as binary logical states in permanent or temporary random access memory. These states are interpreted when the software is running to control the operation of the machine that is the programmed computer by interacting with other software, whether BIOS, operating system software or other applications, or with hardware such as the CPU, system memory, and numerous peripheral devices.
Insofar as, in its physical manifestation, a program product is composed of a unique organization of these stored binary states on or in a carrier, and is adapted to perform well-defined actions when loaded into the working memory of a suitable compatible computer system and interacting with an environment that includes input devices (by no means necessarily human-controlled) and output devices (by no means necessarily interpretable by humans), it is a unique machine with well-defined functions, albeit a machine which operates with invisible components that are made and unmade, as the momentary activity of the machine requires, during its own operation.
Because these digital components, once designed and working as intended, are effectively free of further manufacturing cost, software products may be designed with far more components than a machine in hardware. Further, the components may be re-used and be subject to interaction with many other components in the course of the operation of the product. The result is that the design and manufacture of a software engineering product has a complexity, due to the numbers of components and the need to foresee and control all their interactions, that presents great problems in finding and eliminating defects. Software product designers may take full advantage of the flexibility of program products, but the results are then so potentially complex that the behavior of the product under stress can be unexpected and unpredicted.
The manufacture of a computer program product requires extraordinary steps in the realm of testing in order to eliminate, as far as may be possible or economic, performance errors in the behavior of the programmed machine. Testing makes up a large proportion of software creation time. The elementary steps once an application is planned are entering lines of code at a user interface, normally in a high-level programming language, storing the entered code, compiling it to object code and storing that, normally in non-volatile memory. When the code is run by a human, defects will be found and will need correction. These corrections are made throughout the development process.
No matter how much testing takes place, several defects are likely to remain undetected, or detected but not repaired, up to commercial release of the software. Depending on the criticality of the objective purpose of the software, certain defects may be tolerable if the resulting performance errors are rare or benign.
It is already known that testing can be organized in various ways. Further, it can be performed manually or with automation.
Automated testing tends to be costly, but a successful implementation of automated software testing as part of a program product creation process would be a very desirable achievement.
Many documents, like US 5,708,774 and US 5,754,760, describe systems that automatically detect errors in programs and report them to the user or the developer. Other documents, like US 6,760,908 and US 2013/0339929, take a step further and repair the errors either by using patches provided to them or by suggesting possible repair options to the user and applying the option selected by the user. In that sense, these systems are not totally automated, since they require human interaction. A solution to that was provided by document US 2015/0363294, which describes a system that searches in various databases for possible patches and/or files with solutions to program errors, and, when it detects an error in code, selects the appropriate fix for that and applies it. However, this system is not fully automated because the solutions are provided and stored by humans and not created by the system.
The present invention aims to offer a fully automated and valid solution to the gap described above.
Summary of the invention
The present invention provides a method of making a replicable computer program product with reduced defect content which provides means for testing an early stage product to identify defects and repair identified defects before a final build is fixed and copies thereof are replicated for use or commercial distribution. Thus, the invention concerns the process of manufacturing a master copy of a software application or other program with reduced defect levels.
According to one aspect of the disclosure, there is provided a method of detecting and removing defects from a computer program product which is adapted to run with an objective purpose on a computer system and contains interface elements that interface with an environment outside the computer program product, data elements that contain data with which the program product operates, and process elements that provide a logical process route to achievement of the said purpose by operating with the said data and interfacing with the said environment, which method comprises the steps of (a) providing a computer system of the kind on which the program product is adapted to run, (b) installing the program product on the system, (c) causing the program product to execute on the system, (d) analyzing the executing program product to determine the functionality of the interface elements of the product, the functionality of the data elements of the product, and the functionality of the process elements of the product, and storing data in the computer system memory representing said respective functionalities, (e) detecting that a performance error has occurred or is about to occur, (f) identifying the performance error and storing data representing the error in the computer system memory, (g) combining information derived from the stored data representing the performance error with information derived from the stored data representing the functionalities of the interface elements, the data elements and the process elements and thereby identifying a cause of the performance error, (h) examining the program product code and identifying a program code defect with the cause of the performance error, (i) generating repaired program code in which the identified performance defect is mitigated, and (j) applying said repaired program code to the program product to yield a defect-corrected iteration of the program product.
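To make the sequence of steps concrete, the following C# sketch walks a toy "application under test" through steps (e) to (j); every function, value and message in it is a placeholder invented for illustration, not an API or algorithm published by the disclosure.

    // Hypothetical sketch of the step (e)-(j) loop: detect a performance error,
    // relate it to the stored functionality data, repair the code, and re-test
    // the defect-corrected iteration.
    using System;
    using System.Collections.Generic;

    // Step (d): stored data representing the analyzed functionalities.
    var functionalityData = new Dictionary<string, string>
    {
        ["interface"] = "booking form accepts a passenger count of 1-9",
        ["data"]      = "passengerCount stored as an integer",
        ["process"]   = "fare = baseFare * passengerCount"
    };

    // Toy application under test: a defective fare calculation.
    Func<int, decimal> fare = passengers => 95m * (passengers - 1); // defect: off by one

    // Steps (e)-(f): detect and record a performance error.
    decimal expected = 190m, actual = fare(2);
    if (actual != expected)
    {
        string error = $"fare(2) returned {actual}, expected {expected}";

        // Step (g): combine the error data with the stored functionality data.
        string processElement = functionalityData["process"];
        Console.WriteLine($"Cause identified: {error}; related process element: {processElement}");

        // Steps (h)-(j): repair the code and apply the corrected iteration.
        fare = passengers => 95m * passengers; // repaired program code
        Console.WriteLine($"Re-test after repair: fare(2) = {fare(2)}");
    }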
The objective purpose of the program product may be of many kinds. The product may be wholly dedicated, for example as a control program for an industrial process in manufacturing industry, may be dedicated but flexible, for example as a generic booking system for travel or entertainment, a business accounts program in a specific field, or an intellectual property portfolio management system, or it may be widely generic, for example as a general purpose database program with query and reporting tools. It may be a complete application or a simple utility, for example a downstream system such as one for sending an automatic fax from a fax machine, or a module such as a print function for use with other programs. The use of such a program product is very important as it may predict failures and protect from idle time or sudden stoppage, especially in case it is implemented in medical or other surgical devices. The computer system may be of any general kind, ranging from personal computers and laptops, down through smaller handheld devices and mobile telephones, and up through local or wide area networked systems, Internet servers, and dedicated commercial, industrial or administrative installations.
For the purposes of the disclosure, the program product will have interface elements to interact with its environment.
In steps (a), (b) and (c), as mentioned above, the product to be tested is installed and initiated in a conventional manner. The program code is read into memory as required, and the program begins to run.
In step (d) the various functionalities are analyzed.
The general process of determining the functionality of the data elements of the program product comprises monitoring the principal data components of the program product.
The general process of determining the functionality of the process elements of the program product comprises monitoring the business processes of the product.
The application intelligence analysis also includes the pathways by which data flows through the application. An example may be a name (or a number or the like) which flows through a booking system from module to module, being accepted at the start of the process, recorded as a ticket purchaser and ending up by being printed on an issued ticket.
Likewise, the application intelligence analysis deals with the interfaces between modules within the application, for example transfers of information from the booking system to the payment system, and from the booking system to a fax or email system to issue a receipt acknowledgement and later a booking confirmation to the customer, and to a printing system to produce an actual ticket .
There are some existing program products that are intended to carry out some of this functionality, but the output of such products is generally used to generate reports and documentation about the application being investigated. Other software analysis products are used for gathering metrics, that is to say statistics concerning the software being examined, the number of lines of code included, the number of functions utilized, and the like. The principles of using diagnostic testing software to monitor other running software are known in the art, but they have not been applied to the process of this disclosure in order to achieve the automated repair of a defective computer program product.
During or after process step (d) , the acquired information obtained by the analysis of the various functionalities is stored in the computer system memory as data representative of said functionalities, in a convenient manner.
The step (e) of detecting that a performance error has occurred or is about to occur may be accomplished in a variety of ways. One way which will be straightforward for a user is simple observation during normal use of the application under test. Within the application, expected results do not appear in response to proper actions. More dramatically the application may crash, or the system may crash. Indications that a performance error is likely to occur may be a slowing down of responsiveness, or excessive disk read/write attempts, or excessive or abnormal CPU utilization.
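A minimal sketch of an agent watching for one such indication is shown below; the sampling interval and the threshold for "abnormal" CPU usage are arbitrary values chosen for the illustration, and monitoring the current process stands in for attaching to the real application under test.

    // Hypothetical detection agent: samples the processor time consumed by the
    // application under test and flags abnormal CPU usage as a sign that a
    // performance error may be about to occur.
    using System;
    using System.Diagnostics;
    using System.Threading;

    var aut = Process.GetCurrentProcess();   // stand-in for the application under test
    TimeSpan lastCpu = aut.TotalProcessorTime;

    for (int sample = 0; sample < 5; sample++)
    {
        Thread.Sleep(1000);                  // sampling interval (illustrative)
        aut.Refresh();

        TimeSpan cpu = aut.TotalProcessorTime;
        double busyFraction = (cpu - lastCpu).TotalMilliseconds / 1000.0;
        lastCpu = cpu;

        Console.WriteLine($"CPU busy fraction: {busyFraction:P0}");
        if (busyFraction > 0.9)              // arbitrary threshold for "abnormal"
            Console.WriteLine("Warning: possible impending performance error");
    }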
Preferably, step (e) is augmented by the further step of providing a test library containing a repertoire of test routines for stimulating performance errors. Such a test library may be installed on the computer system and made available during the process. Specific test routines may be selected from among the library test routines as may be appropriate to the functionalities of the interface, data and process elements of the products analyzed in step (d). After the test routines have been selected, it is advantageous to generate data sets for use by the selected routines. In this way a suitable breadth of test can be run. A suitable breadth may extend over a full operating range of data towards the limits of what may be acceptable data, or may extend over a narrow range if defects are anticipated in such narrow range.
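For illustration, generating data sets over such a breadth might look like the following sketch; the field, its limits and the narrow "suspect" range are invented for the example rather than taken from the disclosure.

    // Hypothetical sketch: generate test data spanning the full operating range
    // of a field up to the limits of acceptable data, plus a narrow range
    // around a suspected defect.
    using System;
    using System.Collections.Generic;

    static IEnumerable<int> Values(int from, int to, int step)
    {
        for (int v = from; v <= to; v += step) yield return v;
    }

    // Full breadth: passenger counts from the minimum to the maximum allowed.
    var fullRange = new List<int>(Values(1, 9, 1));

    // Narrow breadth: values clustered where a defect is anticipated (around zero).
    var narrowRange = new List<int> { -1, 0, 1 };

    Console.WriteLine("Full-range data set:   " + string.Join(", ", fullRange));
    Console.WriteLine("Narrow-range data set: " + string.Join(", ", narrowRange));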
Accordingly, a further example includes the step of running at least one of the selected test routines with a corresponding data set on the computer program product prior to detecting a performance error in step (e). This may include the step of running at least one of the selected test routines with a corresponding data set on the repaired program code generated in step (i) or on the defect-corrected iteration of the program product yielded in step (j).
As is known in the software testing arts, test routines can be of different classes. Specific types of routine favored in the practice of the present invention include unit testing routines, integration testing routines, functional testing routines, and performance testing routines, and routines selected from these types are preferred.
Often, the step of running at least one of the test routines will include running more than one of the test routines, performance errors are detected in step (e), repaired program code is generated in step (i) and a defect-corrected iteration of the program product is yielded in step (j). It is in general not desirable to switch to the defect-corrected iteration of the program product resulting from a defect arising in one of the test routines while another test routine remains incomplete, as the validity of the results of that other routine would be compromised. Accordingly, the test routines may not run on that repaired program code or defect-corrected iteration of the program product until all test routines are completed on the previous unrepaired code or uncorrected program product.
Suitably, the step (h) of examining the program product code is performed by accessing the code installed in step (b), although the examination may be done at another time or on another instance of the program product. The invention specifically contemplates replacing the program product code installed in step (b) with the defect-corrected iteration of the program product yielded in step (j), and in such a case the step (h) of examining the program product code may be performed by accessing the defect-corrected iteration of the program product that has replaced the code installed in step (b).
In a further aspect, the disclosure provides a method of operating a software application, which comprises
- running parallel instances of a version of the application on a working system and on an auxiliary system;
- performing the described method as discussed above on an instance of the application on the working system, through detecting a performance error and yielding a defect-corrected iteration of the application;
- passing the objective purpose of the application to the instance running on the auxiliary system;
- substituting the defect-corrected iteration of the application for the previous version on the working system;
- restoring the objective purpose of the application to the working system; and
- substituting the defect-corrected iteration of the application for the previous version on the auxiliary system.
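A minimal sketch of this switchover sequence follows; the AppInstance class, its members and the build label are assumptions made purely for the illustration, since the disclosure describes the handover only at the level of the steps listed above.

    // Hypothetical sketch: hand the workload to the auxiliary instance, upgrade
    // the working instance to the defect-corrected iteration, hand the workload
    // back, then upgrade the auxiliary instance as well.
    using System;

    var working   = new AppInstance("working system");
    var auxiliary = new AppInstance("auxiliary system");
    working.Serving = true;

    string correctedBuild = "v1.0.1-defect-corrected";

    auxiliary.Serving = true;                 // pass the objective purpose over
    working.Serving = false;
    Console.WriteLine($"Objective purpose passed to the {auxiliary.Name}");

    working.Upgrade(correctedBuild);          // substitute the corrected iteration

    working.Serving = true;                   // restore the purpose to the working system
    auxiliary.Serving = false;
    Console.WriteLine($"Objective purpose restored to the {working.Name}");

    auxiliary.Upgrade(correctedBuild);        // bring the auxiliary system up to date

    class AppInstance
    {
        public AppInstance(string name) => Name = name;
        public string Name { get; }
        public string Version { get; private set; } = "v1.0.0";
        public bool Serving { get; set; }

        public void Upgrade(string version)
        {
            Version = version;
            Console.WriteLine($"{Name} now running {Version}");
        }
    }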
The reference to transferring the objective purpose is a reference to the process being performed by the application as seen by the external environment, be it a user operating a word processor, an industrial process, or anything else. Desirably, the transfers of objective purpose between the parallel systems are made without effective interruption of the purpose of the application, so that the application appears to run continuously, from an external viewpoint.
The disclosure also comprises storing the defect-corrected iteration resulting from any method of the disclosure, on or in a carrier. The description further provides a computer program product defect-corrected by any method of the disclosure, on or in a carrier. The disclosure further extends to computer program product means for carrying out any method of the description, including means for carrying out the individual steps of the methods, and as further described herein, on or in a carrier.
Suitable carriers include magnetic, electronic and optical digital storage media, and electromagnetic (including optical) transmission media. Suitable carriers are machine-readable, whereby the program products may be copied, extracted, or run on a computer system.
Brief description of the drawings
The accompanying drawings illustrate examples of the disclosure by means of a diagram and flow charts. These are examples only and are non-limiting. In the drawings:
Figure 1 is an architecture diagram of the system of the disclosure;
Figure 2 shows a diagrammatic illustration of the components of a program product testing system utilizing a method in accordance with this invention;
Figure 3 is a flow chart illustrating a first example of the method; and
Figure 4 is a flow chart illustrating a second example of the method.
Detailed Description
The disclosure is a unique concept solution that automatically generates testing workflows, test cases and data fast and accurately, and assists the release cycle of a project via defect analytics, with AI technologies embedded.
It tests and fixes defects dynamically in the developed software applications or products with minimal human intervention. The concept has been formulated to overcome the challenges of testing in the software industry.
The description acquires appropriate system, application, process and database knowledge from the software application or software product under test by using Artificial Intelligence (AI) techniques. The acquired intelligence is used to perform automatic testing and defect fixing on the software application under test.
It automatically derives testing requirements in the form of workflows, test sets and test actions, test data, configurations, and data/metadata associations and actions from the "Application intelligence" module. The built-in analytic engine analyses the data and makes recommendations and suggestions on the "next best action" and on test and test data coverage against the best requirements (workflow) paths, in order to ensure that the project QA team has the best coverage with the minimum number of test cases and data, depending on the project nature: upgrade, new implementations, or performance testing.
The analytic engine also analyses the defects and fixes defects dynamically, using error adoption and classification techniques. It categorizes each defect based on the root cause, the criticality, the severity and the risk, after correcting possible human input errors via AI, and it enriches the defect details for better handling. It predicts the time to fix by identifying all defect correlations and similarities in the built-in benchmarking knowledge base, and it evaluates the true defects to be solved by marking possible duplicates, thereby reducing the number of defects significantly.
Multi-language intelligence is provided to support automatic testing and defect analytics in different languages. The solution for automated testing and defect analysis is derived internally through the solution analysis engine, which uses fuzzy logic for deriving solutions.
The invention can be used in live or production environments. The "Defect Intelligence" unit captures defects dynamically and passes them to the analytic engine. The "Defect analytics" module dynamically runs AI techniques on the defect raw data, corrects it and reshapes it into actual analytics and insights for better defect management and decision making.
The disclosure is intended to follow a modular architecture. Its proposed modules and their inter-relationships are depicted in the architecture diagram in Figure 1.
The architecture of the automatic testing and defect analytics tool is shown in Figure 1. To carry out automatic testing it is important to have application, data and process intelligence.
Using artificial intelligence and fuzzy logic technology, the following building blocks can be built to make human-independent decisions, defect predictions and defect fixes.
Application Intelligence (AI)
Application Intelligence (AI) module takes all the required input from the production environment, the application under test (AUT) and corresponding database under test. It also collects data from defect reports, test execution results and logs.
It uses "specific modeling" principles to build the application knowledge within. This module captures and adopts all the
application knowledge in the dynamic changes (any changes to the application) and updates itself. AI gathers the knowledge, in real-time, of various application components, GUI (graphical user interfaces) , images, files, metadata and data associations, actions that derive from the configurations and associations between the data and the metadata, user activities, monitoring information etc.
This module interfaces with the process, testing and defect intelligence engines. It has its own model and model association visualization module that shows all the data and metadata correlations with the ability to drill down in the schema and visualize your application structure and associative model knowledge base in a user friendly and business readable way.
To achieve this, we use proprietary technology and logic (the Application Intelligence engine) that is able to extract and transform the needed information from the production environment into JSON format, with all the client-sensitive data masked. This technology is written in the Microsoft C# programming language, and it is wrapped in a Windows Service responsible for all application data intelligence ("INPUT DATA").
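By way of illustration only, the following minimal Python sketch shows the general idea of extracting records from a production data source, masking client-sensitive fields and emitting JSON; the table, field names, masking rule and SQLite source are hypothetical stand-ins, and the actual engine is, as stated above, a C# Windows Service.

import hashlib
import json
import sqlite3  # stand-in for the real production data source

SENSITIVE_FIELDS = {"customer_name", "card_number"}  # hypothetical sensitive fields

def mask(value: str) -> str:
    # Replace a sensitive value with a short, non-reversible token.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def extract_to_json(db_path: str, table: str) -> str:
    # Read one table and return its rows as JSON with sensitive fields masked.
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = []
    for row in conn.execute(f"SELECT * FROM {table}"):
        rows.append({
            key: mask(str(row[key])) if key in SENSITIVE_FIELDS else row[key]
            for key in row.keys()
        })
    conn.close()
    return json.dumps({"table": table, "rows": rows}, indent=2)

if __name__ == "__main__":
    print(extract_to_json("production.db", "customers"))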
Process Intelligence (PI)
Process Intelligence (PI) module derives the application business process, requirements and knowledge from the production environment and the environment under test. PI captures any changes in the application business process and updates itself accordingly.
It also provides recommendations and reports on process coverage analytics and suggestions. It allows three types of workflow creation: manual, computer-assisted and import from an external source. The computer-assisted method is AI-supported, and the built-in intelligence library guides the user to the next best action/step, with the appropriate justification. This is a real-time interaction between the user and the process intelligence engine.
Because it captures and reflects, in real time, the changes occurring in the production environment into the workflows, it has built-in versioning and a time-driven comparison report, so the user can see, on demand, the changes that occurred on a workflow over a period of time.
The process intelligence engine works based on API/JSON calls and requests to the invention's backend and analytic engine, from which it collects in real time the recommendations and suggestions for the process design, updates/amendments and justifications on the best workflow path or next best step/action.
This module interfaces with the testing and defect intelligence engines.
Data Intelligence (DI)
Data Intelligence (DI) module takes input from the database and user logs for the production environment and the under-test environment. DI acquires the knowledge of data, data types and relations between data. DI can dynamically capture updates by itself if any changes to the database are made. It recommends the best data set to be used in a testing scenario for appropriate test data coverage, against risk and business needs. For this, particular algorithms are used that identify the data "frequent episodes" and suggest the appropriate data sets.
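As a rough illustration of the "frequent episodes" idea only, the sketch below counts recurring short sequences of user actions in an ordered log; the window size, support threshold and toy log are assumptions made for the example, not the actual algorithms used.

from collections import Counter
from typing import List, Tuple

def frequent_episodes(actions: List[str], window: int = 3,
                      min_support: int = 2) -> List[Tuple[Tuple[str, ...], int]]:
    # Count every consecutive action sequence of the given length and keep the frequent ones.
    counts = Counter(
        tuple(actions[i:i + window]) for i in range(len(actions) - window + 1)
    )
    return [(episode, n) for episode, n in counts.most_common() if n >= min_support]

# Toy user log: the episode ('open_account', 'enter_amount', 'confirm') recurs twice.
log = ["open_account", "enter_amount", "confirm",
       "open_account", "enter_amount", "confirm",
       "open_account", "cancel"]
print(frequent_episodes(log))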
In cases where the user wants to create synthetic test data manually, following his own data combinations and permutations, the disclosure's data intelligence module offers a built-in synthetic data generation solution, which creates the data based on equivalence partitioning and boundary value analysis algorithms.
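A minimal sketch of the equivalence partitioning and boundary value idea for a single numeric field follows; the example range (an age field limited to 18 to 65) is hypothetical.

from typing import List, Tuple

def boundary_values(lo: int, hi: int) -> List[int]:
    # Boundary value analysis: minimum, just above minimum, a nominal value, just below maximum, maximum.
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def equivalence_partitions(lo: int, hi: int) -> List[Tuple[str, int]]:
    # One representative per equivalence class: below range, inside range, above range.
    return [("invalid_low", lo - 1), ("valid", (lo + hi) // 2), ("invalid_high", hi + 1)]

print(boundary_values(18, 65))         # [18, 19, 41, 64, 65]
print(equivalence_partitions(18, 65))  # [('invalid_low', 17), ('valid', 41), ('invalid_high', 66)]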
Data intelligence collects the data through JSON API calls to the invention's backend, where all the data, metadata, associations, monitoring data and application intelligence exist.
This module interfaces with the auto-testing and defect fixing engine.
Test case Intelligence (TI)
Test case intelligence (TI) module derives the application business process and data knowledge from the production environment and the environment under test. TI automatically generates the test cases and test data based on this intelligence.
It also provides recommendations and reports on test and test data coverage analytics and suggestions. This module interfaces with the process, data and defect intelligence engines.
Test case intelligence collects the data through JSON API calls to the invention's backend, where all the data, metadata, associations, monitoring data, application intelligence and workflow associations exist. Test cases can be exported either in XML format, so they can be published in other execution engines, or to an Automation Intelligence platform, where they can be further automated and executed.
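To illustrate the XML export path only, the sketch below serializes a couple of auto-generated test cases into a simple XML file; the element names and test case fields are hypothetical rather than a published schema.

import xml.etree.ElementTree as ET

def export_test_cases(test_cases, path="test_cases.xml"):
    # Serialize a list of test case dictionaries into a simple XML document.
    root = ET.Element("testCases")
    for tc in test_cases:
        node = ET.SubElement(root, "testCase", id=str(tc["id"]))
        ET.SubElement(node, "name").text = tc["name"]
        ET.SubElement(node, "expectedResult").text = tc["expected"]
        for value in tc["data"]:
            ET.SubElement(node, "dataItem").text = str(value)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

export_test_cases([
    {"id": 1, "name": "Book ticket - happy path", "expected": "ticket issued",
     "data": ["LHR", "ATH", "2019-03-28"]},
])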
The test cases and test data in the system are mostly in 'read mode' as further changes from the autogenerated objects can occur in the target test management system.
Performance Intelligence (PI)
Performance Intelligence (PI) module derives its intelligence from data collected during performance testing and monitoring, providing insights and information on issues related to performance degradation and other challenges, and suggesting corrective actions on the infrastructure front.
Defect Detection Intelligence (DDI)
Defect Detection Intelligence (DDI) module derives its intelligence from the defect data, files and logs. DDI automatically generates the defect analytics for optimal defect handling and defect fixing. It also provides recommendations and reports on defect coverage, defect root cause analysis and defect time to fix.
DDI communicates and collects the information via API/JSON format calls, in real time. It uses a built-in, continuously learning defect clustering method, which auto-corrects the defect information and root causes and associates each defect with its possible peers for analyzing and recommending the time to fix.
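A minimal sketch of the peer-matching idea is shown below: it groups a new defect with textually similar historical defects and averages their fix times. The similarity measure (difflib), the threshold and the defect records are illustrative assumptions, not the clustering method described above.

from difflib import SequenceMatcher
from statistics import mean
from typing import List, Optional

def similarity(a: str, b: str) -> float:
    # Crude textual similarity between two defect summaries (0..1).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def predict_time_to_fix(new_summary: str, history: List[dict],
                        threshold: float = 0.5) -> Optional[float]:
    # Average the recorded fix time (hours) of historical defects similar to the new one.
    peer_hours = [d["hours"] for d in history
                  if similarity(new_summary, d["summary"]) >= threshold]
    return mean(peer_hours) if peer_hours else None

history = [
    {"summary": "login page crashes on empty password", "hours": 4},
    {"summary": "login crashes when password is empty", "hours": 6},
    {"summary": "report totals rounded incorrectly", "hours": 12},
]
print(predict_time_to_fix("crash on login with empty password field", history))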
It carries a variety of analytics and reports which show the defect density per criticality and root cause, the suggested regrouping and defect correction, and the defect ageing, and it results in insights that drive the actions to be taken for the resolution of what matters. DDI consumes data from flat files, images, XML and other formats. This module interfaces with the process, data and defect intelligence engines.
Automation Intelligence (AUI)
Automation intelligence derives its information from the Analytic Engine, test intelligence and data intelligence. It helps the users automatically execute the necessary test cases over various testing environments in a performant manner, collects the results and allows creation and management of defects within the same repository.
Automation intelligence allows the user to update the
autogenerated test cases with dynamic data and dynamic
associations that will make the test case reusable across
different testing phases and environments with close to zero maintenance needed.
It can also integrate with other defect management systems if needed.
The communication between Automation Intelligence and the Analytic Engine, Test Intelligence and Data Intelligence takes place via XML file exchange.
Analytic Engine (AE)
The Analytic Engine is based on an amalgamation of algorithms that generate recommendations and suggestions on test coverage and defect analytics. Each recommendation has its own support.
The AE collects the data from the application intelligence module and processes it in an auto-learning way that adjusts the results accordingly and dynamically, based on new inputs and changes.
The monitoring and production data along with the knowledge base of the metadata and their associations allow the Analytic engine to auto detect in real-time the configuration changes and the latest user frequent episodes and alert the user on those updates via immediate coverage recommendations and further best actions to be added or removed.
It leverages defect data, processes the information and creates clusters under which the defects are grouped for further processing. The clusters are the result of a continuous auto-learning process, as data can arrive in real time on a daily basis. These clusters create a geographic view of the defect density per business area, criticality and root cause, resulting in defect analytics around business risk, root cause analysis, time-to-fix predictions and insights on what needs to be fixed, reducing the number of defects to be checked, as the engine finds all similar issues and groups them accordingly.
Both test coverage and defect analytics work through a bug hunting algorithm, which is auto-learning as well and ensures that all recommendations and justifications for defect hunting are exposed and taken into consideration, highlighting each time the "value at risk".
Figure 2 shows an overall model of means for carrying out the methods of the disclosure. A program product believed to have possible defects may be referred to as the application under test (AUT) and is represented in Figure 2 by module 12. This is a computer program product which is adapted to run with an objective purpose on a computer system and contains interface elements that interface with an environment outside the computer system, data elements that contain data with which the program product
operates, and process elements that provide a logical process route to achievement of the purpose by operating with the data and interfacing with the environment.
Interface elements may be the environment external to the specific computer system on which the program product runs, or may be the internal system environment. Interfaces with the external
environment may be the typical user input devices such as
keyboard, pointing device (mouse, trackball) and microphone, other input devices with user involvement such as digital camera input, storage medium readers such as removable disk drives, card readers, bar-code scanners, currency note acceptance slots, and dedicated input devices such as transducers and counters of various kinds, as well as network connections to other data processing systems. Interface elements include output devices such as video displays, printers for general and dedicated purposes, audio devices for general and dedicated purposes,
telecommunication signal transmitting devices, as well as control signal connectors for specific machinery. The interfaces with these input and output devices are important sites for controlling the software product and determining its response to stimuli during the testing process. At a simplistic level, an application under test may be provided with a set of inputs that simulate an expected operating condition, a specific response is expected, and the actual outputs are monitored and compared with the expected response in order to detect any performance error. The interfaces may also be software-software interfaces for interaction with other software modules running on the system or on external systems. Examples are interfaces with a graphical user interface, application component interfaces, and business rules and database interfaces.
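At that simplistic level, a check of actual against expected outputs might look like the minimal sketch below; the application-under-test function (a trivial interest calculation) and the expected response are hypothetical placeholders.

def application_under_test(inputs: dict) -> dict:
    # Hypothetical stand-in for the AUT: computes simple interest on a balance.
    return {"interest": round(inputs["balance"] * inputs["rate"], 2)}

def run_check(inputs: dict, expected: dict) -> bool:
    # Apply a set of inputs simulating an expected operating condition and compare the outputs.
    actual = application_under_test(inputs)
    if actual != expected:
        print(f"performance error: expected {expected}, got {actual}")
        return False
    return True

print(run_check({"balance": 1000.0, "rate": 0.05}, {"interest": 50.0}))  # True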
The AUT 12 includes a database submodule 14 containing at least some of the data elements of the application. The remaining elements of Figure 2 constitute a model example of development software 10 for performing the methods of the description
expressed in terms of high level components which are all shown in Figure 2, and engage with the AUT as illustrated and described below.
The database submodule 14 provides information to a data intelligence (DI) module 18 of the development software 10, which analyzes the executing program product to determine the functionality of the data elements of the product.
The process of determining the functionality of the data elements of the product looks at databases maintained in the product and how data is written to the databases, stored in the databases, and accessed from the databases, and may be called data intelligence. The various data types may be analyzed, for example whether integer or floating-point numeric, alphanumeric, date, or logical (yes/no). The various data structures may be analyzed to determine how the data is represented in its storage. Properties of and relations between data fields may be analyzed, for example data field properties, maximum and minimum allowed values, numbers of rows and columns in spreadsheets, and the logical relations and restrictions between data elements. To give a simple example, in a set of data elements relating to credit card information, an expiry date may be entered by a user and then stored in a date field in various formats, or it may not be entered but instead be calculated before storage, or it may not be stored at all but be calculated on the fly whenever required.
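A minimal sketch of inferring such data-element properties (a rough type plus the observed minimum and maximum) from sampled column values follows; the sampled values are hypothetical.

from datetime import datetime

def infer_field_properties(values):
    # Infer a rough type and the observed min/max from sampled values of one field.
    props = {"samples": len(values)}
    try:
        nums = [float(v) for v in values]
        props["type"] = "numeric"
        props["min"], props["max"] = min(nums), max(nums)
    except ValueError:
        try:
            dates = [datetime.strptime(v, "%Y-%m-%d") for v in values]
            props["type"] = "date"
            props["min"], props["max"] = min(dates).date(), max(dates).date()
        except ValueError:
            props["type"] = "alphanumeric"
    return props

print(infer_field_properties(["2023-01-31", "2025-06-30"]))  # expiry dates stored as text
print(infer_field_properties(["120.50", "89.99", "15.00"]))  # numeric amounts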
The DI module 18 also obtains information directly from the non-database elements of the AUT. The AUT 12 also provides information to, and receives information from, an application intelligence (AI) module 16 which analyzes the executing AUT to determine the functionality of the interface elements of the application.
The general process of determining the functionality of the interface elements of the program product comprises monitoring the application components, and any graphical user interface of the product. The process includes an assessment of the available aspects of the front end of the application and may be called application intelligence. These front-end aspects may vary according to the nature of the program product, but may include the user interface generally, user controls, menus, error displays and messages, result or outcome displays and messages, report generation, error logging, error analysis and dynamic defect fixing.
The AUT 12 also provides information to, and receives information from, a process intelligence (PI) module 20 which analyzes the executing AUT to determine the functionality of the process elements of the application. The PI module also receives information from the DI module 18.
The process analyzes the internal logic of the application and may be called application logic or process intelligence. One aspect of this step involves looking at the business processes that the application under test is being called upon to perform. Examples of such business processes may include booking systems, invoice creation, customer assignments and mathematical calculations. An example of this last might include the principles used for calculating rates of interest, the input parameters, the mathematical formulae and, in short, an analysis of all the transactions between the input and the output end of the interest determination process.
Analysis of the functionality of the interface elements, the data elements and the process elements is performed by an analysis engine (AE) 22, which exchanges data with the AI module 16, the DI module 18 and the PI module 20 respectively. The analysis engine uses artificial intelligence techniques, of kinds known in the art, to build a knowledge base of the design and performance of the AUT. These techniques are discussed further below.
The analysis engine 22 also interfaces and exchanges data with a test routine library or test intelligence (TI) module 24. This contains a repository of testing techniques which can be manually configured or automatically selected according to the testing requirements in any given case. Testing techniques are included for unit testing, integration testing, system testing, and performance testing. Such techniques are common in the program testing art. Unit testing is aimed at the smallest program units of the AUT. Unit tests might test boundary values, using
equivalence partitioning to reduce the number of the test data sets required for full code coverage. Integration testing is aimed at the interactions between the modules of the AUT and may trace data integrity. System testing is aimed at the functionality of the whole application, to determine whether it works correctly from end to end; in the example of a ticket booking application, correct behavior is tested from the initial request through reservation, payment and ticket issue. Performance testing is aimed at the application under stress and may cover such
performance indicators as the maximum number of concurrent users (in the case of an internet web page, for example, where the potential number of users may be practically unlimited),
durability (reliability over time) , and efficiency (response times) . A user of the method is provided with a suitable user interface which includes options for selecting the kind of testing required. With its information exchange with the AI, DI, PI and TI modules already discussed, the analysis engine 22 is able to derive test sets and test conditions for effectively testing the application under test (AUT) 12. To do this, it uses techniques selected from fuzzy logic set reduction, cause-effect modelling, black box testing with fuzzy networks, automatic GUI (graphical user interface) regression testing using AI planning, test set
generation using boundary value analysis and equivalence
partitioning techniques, test set generation using a risk based approach, defect prediction techniques, test set reduction using orthogonal array techniques, three-group software quality
classification modelling using automated reasoning, data mining with resampling in software metrics databases, and error adoption using normal and autistic fuzzy sets. Other testing and artificial intelligence techniques may also be applied.
A multilingual intelligence (MI) module 26 also interfaces with the AE 22 and provides for translation between equivalent words and phrases in natural language. For example, while the AUT may have an English language version in which standard terms, such as the names of weekdays or months, or menu options and error messages, or data field labels, are in English, it may also have, say, a French language version in which the corresponding terms are used and expected in the French language. Any required number of languages may be provided for. The data is exchanged with the AE and translated as required. The different language versions may also be needed for interfacing with other software, such as websites with multiple language options. The MI module may also include the appropriate character sets and keyboard tables for the different language standards.
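A minimal sketch of such a term-equivalence lookup between language versions is given below; the two-language table and terms are a hypothetical illustration, not the module's actual dictionary or character-set handling.

# Hypothetical equivalence table between English and French standard terms.
TERM_TABLE = {
    ("en", "fr"): {"Monday": "lundi", "January": "janvier", "Save": "Enregistrer"},
}

def translate(term: str, source: str, target: str) -> str:
    # Return the equivalent term in the target language, or the original term if unknown.
    forward = TERM_TABLE.get((source, target), {})
    if term in forward:
        return forward[term]
    backward = {v: k for k, v in TERM_TABLE.get((target, source), {}).items()}
    return backward.get(term, term)

print(translate("Monday", "en", "fr"))   # lundi
print(translate("janvier", "fr", "en"))  # January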
A further important module is an auto-testing and defect-fixing (AT/DF) module 28, which is closely bound with the AE module for information exchange, receives inputs from the AI module 16, the TI module 24 and the MI module 26, and also exchanges data with the PI module 20. This module both runs the tests and repairs the defects that are found, calling on the available information and the analysis procedures of the analysis engine 22. For defect fixing, a detection agent runs in the background while the AUT 12 is running. The occurrence of a performance error, or the
detection of an impending performance error, triggers a process of identifying the error, storing data representing the error in the computer system memory, and combining information derived from the stored data representing the performance error with information derived from the stored data representing the functionalities of the interface elements, the data elements and the process elements in order to successfully identify a cause of the performance error .
The control then passes to the defect fixing component, which examines the program product code and identifies a program code defect with the cause of the performance error; the defect fixing component generates a repaired program code in which the performance defect is at least mitigated or removed entirely; and the defect fixing component stores the repaired code as code output (CO) 30. This can be applied immediately to the application under test or saved for substitution into the application at a later time. Thus, the defects can be fixed dynamically in the system.
The remaining module shown in Figure 2 is a report generator (RG) 32 which is driven by the AE module 22 and produces test reports and defect fix reports on demand, in whatever format is required, typically through print or electronic text media.
Figure 3 illustrates a typical procedure for carrying out the method of the disclosure in a development environment. At the Start 40, the process is started by installing the application under test on a computer system which meets all the needs of the application. As well as an appropriate operating system, there will typically be a keyboard and pointing device for operator input, and a monitor screen on which a graphical user interface is displayed. The operator issues the appropriate command or commands to start the execution of the AUT .
Previously, the necessary development software 10 for assisting the operator to perform the method of the disclosure has also been installed on the system. This includes all the modules illustrated in Figure 2 and already described. The operator uses keyboard and mouse to issue initial commands, identifying the local or network path to the AUT, which may be in a directory or folder or on an internet web page. Once the location and identity of the AUT have been conveyed to the software, standard system monitoring and information utilities are used to identify the processes which belong to the AUT and the blocks of stored code or files which make it up.
The operator is then able to initiate the process of analyzing the executing program product by means of the AI 16, DI 18, PI 20 and AE 22. In this process, a knowledge base for the AUT is created, or updated if a previous knowledge base already exists. This
knowledge base constitutes a library of information covering program design and implementation down to the smallest component level .
A further element of the Start procedure is for the operator to initiate the preparation of test procedures, by selecting a "test preparation" command. When this is done, the AE 22 and the TI 24 interact and generate the necessary test procedures adapted to the AUT, or update the existing test procedures to reflect changes if the AUT has previously been tested and repaired on the system. The test procedures amount to fixing a set of conditions defining a particular state of the AUT which it is likely to reach, in use; selecting suitable kinds of input to the AUT in that condition; evaluating the appropriate subsequent steps; and identifying an expected outcome. Following this preparation, the operator is in a position to control the development software 10 which assists him in carrying out the method of the disclosure on an application under test 12 which is also running on the system. The operator issues a "start testing" command and is presented with a menu from which the type of testing required can be selected. The selection is made at step 50.
Following selection, an appropriate set of test procedures is loaded and populated with data sets on the basis of information derived from the AUT database 14 by the data intelligence module 18.
The next step 54 is to check the configurations of the test sets and of the application to be tested, following which decision 56 determines whether everything is in order to commence the testing procedure. If the configurations are not correct, corrections will be made to the configuration file. Once these corrections are made, the configuration will be rechecked at 54 to confirm that the changed configuration is correct. If the application is ready for testing, the testing is carried out at step 60, where the tests are run through the required number of data sets.
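A minimal sketch of that check, correct and recheck loop (steps 54, 56 and 60) follows; the required configuration keys, defaults and correction rule are hypothetical.

REQUIRED_KEYS = {"db_url", "test_data_path", "timeout_seconds"}  # hypothetical configuration keys
DEFAULTS = {"timeout_seconds": 30}

def check_configuration(config: dict) -> list:
    # Return the configuration problems found (here simply the missing keys).
    return [key for key in REQUIRED_KEYS if key not in config]

def run_tests_when_ready(config: dict) -> None:
    # Check the configuration, correct it, recheck, and only then run the tests (step 60).
    while True:
        missing = check_configuration(config)
        if not missing:
            break
        for key in missing:
            config[key] = DEFAULTS.get(key, "TO_BE_SET")  # correction written back to the configuration
        print(f"corrected configuration keys {missing}, rechecking")
    print("configuration confirmed, running tests with", config)

run_tests_when_ready({"db_url": "jdbc:prod", "test_data_path": "/data/tests"})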
Decision 62 asks whether a performance error has been detected by the AT/DF module 28. If no errors are reported during this set of tests, the tests are completed and reports are sent to the report generator 32 in step 64, and the process reaches completion and terminates at step 66.
In the case of several sets of test procedures, step 64 in which testing is completed may involve reversion to step 52, where a new set of tests and test data are loaded for the next series of tests to be carried out in steps 54 onward.
If a performance error is reported or anticipated, the defect-fixing component of the AT/DF module 28 is invoked at 70 and the performance error is logged in a defect logging system which records all necessary information about the test run, the test conditions, the expected output, and the actual output, among other relevant information. The immediate next step 72 is to avoid an application or system crash, by taking the necessary remedial action. At the next step 74, the AT/DF module 28 cooperates with the analysis engine 22, as previously described, to identify a program code defect with the cause of the performance error, and to generate repaired program code in which the identified performance defect is mitigated. As previously discussed, the repaired program code is preferably not applied to the AUT until after the current set of tests has been completed.
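The kind of record such a defect logging system might persist is sketched below; the field names, identifiers and file format (JSON lines) are illustrative assumptions.

import json
from datetime import datetime, timezone

def log_performance_error(test_id, conditions, expected, actual, log_path="defect_log.jsonl"):
    # Append one defect record capturing the test run, its conditions and both outputs.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test_id": test_id,
        "test_conditions": conditions,
        "expected_output": expected,
        "actual_output": actual,
        "status": "open",
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

log_performance_error("TC-017", {"balance": 1000.0, "rate": 0.05},
                      {"interest": 50.0}, {"interest": 0.0})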
At step 76, a defect fix report is generated by report generator 32, and at step 78 the AUT can be resumed, optionally with repaired program code applied to the program product to yield a defect-corrected iteration of the AUT for further testing. Applying the repaired program code to the program product is done by replacing the source file which contains the code defect with an updated source file containing an appropriate fix for the code to remove the defect which contributed to the detected performance error. The process of identifying the program code defect involves the AE 22 analyzing the program code of the AUT, looking for specific code errors in the libraries used by the AUT, identifying standard and language-specific errors, while in parallel it analyzes the code for logical defects.
At the resumption of the AUT in step 78, the process moves back to step 54 with repaired program code. The application and test configurations are rechecked, and if everything is in order
(decision 56), the testing is carried out again (60). The cycle of testing and repair can continue for as long as it is necessary, depending on the observed failure rate.
As described, when a set of tests has been passed, the next set of tests will be run. However, if any errors of great severity and criticality are found, the whole test program may be re-run completely to ensure that all the fixes work successfully, and the repaired code passes all tests.
When the whole process is complete, including testing successive defect-corrected iterations in the AUT, a consolidated test report is generated at step 64 listing details of the numbers of the tests, numbers of passes, numbers of failures and the like, together with a traceability report mapping the functional or performance specifications of the AUT to the results, and a defect fix report.
A further example of the disclosure is illustrated in the
flowchart appearing in Figure 4. This represents a method for monitoring the performance of a computer program product in a live environment. In this case, the development testing period has been completed and the application has been issued for use. It will apparently be running satisfactorily, but it is likely that defects do exist in the program code which have not yet been detected and fixed. In this method, all the elements of the development software illustrated in Figure 2 are still present on the computer system while the application is in full use, but no testing procedure is run as was done in the development phase illustrated in Figure 3. Instead, the detection agent component of the AT/DF module 28 continues to run in the background. This is the situation at Start 80. The detection agent continues to monitor the application for the occurrence of a performance error at 82, and while no error is found the monitoring continues.
However, on detection of a performance error, a repair process is initiated and follows the procedure of steps 70 to 78 on Figure 3. In the Figure 4 model, the defect-fixing component of the AT/DF module 28 is invoked at 84; avoidance of a crash occurs at step 86; analysis and repair of the defective code occurs at step 88; the defect fix report is generated at step 90; and the application is resumed at step 92. At 96, the method is terminated. The following optional features are not illustrated in Figure 4. When a performance error is detected, the control of the live application may be handed over to a parallel system running on a failover server, while the original application is halted. If necessary, the AT/DF module 28 accesses a remote server and examines the source code for the application in its master copy in order to assist in the analysis and repair operations.
After repair, it is desirable to invoke test procedures to test the repaired code at the unit, integration, system and performance levels, in order to verify the robustness of the repair. The repaired code is then applied to the original program product to yield the required defect-corrected iteration of it.
The original system on which the application was run is then updated with the new program iteration, which is started running, and takes over the activity of the old version of the application running on the failover server on the parallel system. Then the application on the failover server can be stopped and the new build applied and restarted.
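A minimal sketch of that hand-over sequence between the working and failover systems follows; the start/stop/deploy interface is a hypothetical abstraction of whatever orchestration is actually used.

class AppInstance:
    # Hypothetical handle to one running copy of the application on a host.
    def __init__(self, host: str, version: str):
        self.host = host
        self.version = version
        self.running = False

    def start(self) -> None:
        self.running = True
        print(f"{self.host}: version {self.version} running")

    def stop(self) -> None:
        self.running = False
        print(f"{self.host}: version {self.version} stopped")

    def deploy(self, version: str) -> None:
        self.version = version
        print(f"{self.host}: deployed version {version}")

def swap_in_repaired_build(working: AppInstance, failover: AppInstance, fixed: str) -> None:
    # Hand over to the failover system, update and restart the working system, then update the failover.
    failover.start()        # failover instance takes over the objective purpose
    working.stop()
    working.deploy(fixed)   # defect-corrected iteration installed on the working system
    working.start()         # objective purpose restored to the working system
    failover.stop()         # failover stopped, new build applied and restarted
    failover.deploy(fixed)
    failover.start()

swap_in_repaired_build(AppInstance("working", "1.0"), AppInstance("failover", "1.0"), "1.1")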
The terms used herein are well-known to a person skilled in the art and are used for the purpose of this description. Also, the examples that have been disclosed should not be considered as limiting, and other alternatives, modifications and/or equivalents are possible. Where the steps of a method are described, a person skilled in the art should understand that the mentioning order is not binding, and a step may be performed previously or after its mentioned order. For example, where appropriate, the step (c) may be performed in the described order, or it may be performed before step (b) or after step (d) .

Claims

1. A method of detecting and removing defects from a computer program product which is adapted to run with an objective purpose on a computer system and contains interface elements that
interface with an environment outside the computer program product, data elements that contain data with which the program product operates, and process elements that provide a logical process route to achievement of the said purpose by operating with the said data and interfacing with the said environment; which method comprises the steps of:
a) Providing a computer system of the kind on which the program product is adapted to run
b) Installing the program product on the system
c) Causing the program product to execute on the system
d) Analyzing the executing program product to determine the functionality of the interface elements of the product, the functionality of the data elements of the product and the
functionality of the process elements of the product and storing data in the computer system memory representing said respective functionalities .
e) Detecting that a performance error has occurred or is about to occur
f) Identifying the performance error and storing data
representing the error in the computer system memory
g) Combining information derived from the stored data
representing the performance error with information derived from the stored data representing the functionalities of the interface elements, the data elements and the process elements and thereby identifying a cause of the performance error
h) Examining the program product code and identifying a program code defect with the cause of the performance error
i) Generating repaired program code in which the identified performance defect is mitigated
j) Applying said repaired program code to the program product to yield a defect-corrected iteration of the program product.
2. A method according to claim 1 wherein the process of
determining the functionality of the interface elements of the program product comprises monitoring the application components and any graphical user interface of the product.
3. A method according to claim 1 or claim 2 wherein the process of determining the functionality of the data elements of the program product comprises monitoring the principal data components of the product.
4. A method according to any one of the preceding claims wherein the process of determining the functionality of the process elements of the program product comprises monitoring the business processes of the product.
5. A method according to any one of the preceding claims
including the step of providing a test library containing a repertoire of test routines for stimulating performance errors
6. A method according to claim 5 including the step of selecting from the library test routines appropriate to the functionalities of the interface, data and process elements of the product analyzed in step (d) and generating data sets for use by the selected test routines.
7. A method according to claim 6 including the step of running at least one of the selected test routines with a corresponding data set on the computer program product prior to detecting a
performance error in step (e) .
8. A method according to claim 6 or claim 7 including the step of running at least one of the selected test routines with a
corresponding data set on the repaired program code generated in step (i) or on the defect-corrected iteration of the program product yielded in step (j) .
9. A method according to claim 7 or claim 8 wherein the step of running at least one of the test routines includes running more than one of the test routines; performance errors are detected in step (e) , repaired program code is generated in step (i) , and a defect-corrected iteration of the program product is yielded in step (j); and the test routines are not run on that repaired program code or defect-corrected iteration of the program product until all said more than one test routines are completed on the previous unrepaired code or uncorrected program product.
10. A method according to any of claims 5 to 9 wherein the test routines include routines selected from unit testing routines, integration testing routines, functional testing routines, and performance testing routines.
11. A method according to any one of the preceding claims wherein the step (h) of examining the program product code is performed by accessing the code installed in step (d) .
12. A method according to any one of the preceding claims
including the step of replacing the program product code installed in step (b) with the defect-corrected iteration of the program product yielded in step (j).
13. A method according to claim 12 wherein the step (h) of examining the program product code is performed by accessing the defect-corrected iteration of the program product that has previously replaced the code installed in step (b) .
14. A method of operating a software application, which comprises running parallel instances of a version of the application on a working system and on an auxiliary system; performing the method of claim 1 on an instance of the application on the working system, through detecting a performance error and yielding a defect-corrected iteration of the application; passing the objective purpose of the application to the instance running on the auxiliary system; substituting the defect-corrected iteration of the application for the previous version on the working system; restoring the objective purpose of the application to the working system; and substituting the defect-corrected iteration of the application for the previous version on the auxiliary system.
15. Computer program product characterized in that it is designed to function according to any of the previous claims.
PCT/GR2019/000025 2019-03-28 2019-03-28 Method of detecting and removing defects WO2020194000A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GR2019/000025 WO2020194000A1 (en) 2019-03-28 2019-03-28 Method of detecting and removing defects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/GR2019/000025 WO2020194000A1 (en) 2019-03-28 2019-03-28 Method of detecting and removing defects

Publications (1)

Publication Number Publication Date
WO2020194000A1 true WO2020194000A1 (en) 2020-10-01

Family

ID=67688794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GR2019/000025 WO2020194000A1 (en) 2019-03-28 2019-03-28 Method of detecting and removing defects

Country Status (1)

Country Link
WO (1) WO2020194000A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117984024A (en) * 2024-04-03 2024-05-07 中国水利水电第十工程局有限公司 Welding data management method and system based on automatic production of ship lock lambdoidal doors

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5708774A (en) 1996-07-23 1998-01-13 International Business Machines Corporation Automated testing of software application interfaces, object methods and commands
US5754760A (en) 1996-05-30 1998-05-19 Integrity Qa Software, Inc. Automatic software testing tool
US6760908B2 (en) 2001-07-16 2004-07-06 Namodigit Corporation Embedded software update system
US20130227521A1 (en) * 2012-02-27 2013-08-29 Qualcomm Incorporated Validation of applications for graphics processing unit
US20130339929A1 (en) 2012-06-14 2013-12-19 Microsoft Corporation Program repair
US20150363294A1 (en) 2014-06-13 2015-12-17 The Charles Stark Draper Laboratory Inc. Systems And Methods For Software Analysis
US20170024312A1 (en) * 2012-03-13 2017-01-26 Truemetrics Llc System and methods for automated testing of functionally complex systems
US20170046246A1 (en) * 2015-08-10 2017-02-16 Accenture Global Services Limited Multi-data analysis based proactive defect detection and resolution
US20180314576A1 (en) * 2017-04-29 2018-11-01 Appdynamics Llc Automatic application repair by network device agent
US20190079734A1 (en) * 2017-09-12 2019-03-14 Devfactory Fz-Llc Library Upgrade Method, Apparatus, and System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19756229

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19756229

Country of ref document: EP

Kind code of ref document: A1