GB2458201A - Creating a program problem signature data base during program testing to diagnose problems during program use - Google Patents


Info

Publication number
GB2458201A
GB2458201A
Authority
GB
United Kingdom
Prior art keywords
test case
computer
computer program
program
system information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0902360A
Other versions
GB0902360D0 (en)
Inventor
Andreas Arning
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of GB0902360D0 publication Critical patent/GB0902360D0/en
Publication of GB2458201A publication Critical patent/GB2458201A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/079Root cause analysis, i.e. error or fault diagnosis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Debugging And Monitoring (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

When test cases cause a computer program to fail, system information, such as a stack trace, is collected. The information is stored in a problem signature database along with a description of the test case that caused the failure. If the program fails when being used normally, the equivalent system information is collected and compared with the system information in the database. The entries in the database are ranked depending on how similar they are to the collected data and the test case descriptions are presented to the user.

Description

DESCRIPTION
Method and System for Problem Determination in a Computer Program
FIELD OF THE INVENTION
The invention relates to a method and a system for problem determination in a computer program according to the preambles of the independent claims.
BACKGROUND OF THE INVENTION
Problem determination in software systems, which typically comprise IT hardware as well as software, is a costly area: problem determination (PD) can consume between 25% and 50% of a company's IT budget. One reason for this complexity is that, when a malfunction occurs, the error symptoms in many cases give the user no idea of what the root cause might be. In other words, in those cases the error symptoms appear to the user to be uncorrelated with (i.e. random relative to) the cause of the problem.
A typical scenario is that a user works with a software system, runs into an error situation and can determine that it is an error situation. However, the error symptoms do not include any hint that would help the user get rid of this error situation.
There are other scenarios, in fact the majority, in which users run into an error situation, get a helpful diagnostic message and can easily recover from it. The scenario described above typically occurs in a minority of the error cases, but these cases can nevertheless be very time-consuming and thus expensive ones.
One approach known in the art is to rely upon the computer operating system to collect system and/or application execution information within a user's computer system. Upon detecting a fault condition, collected information is sent to a specified location. A typical implementation of this technique is when an application unexpectedly quits: the user is asked to send information about the fault condition. If the user wishes to send information the fault information is sent as an electronic message to the manufacturer of the operating system.
As the complexity of software-based systems increases, so too does the difficulty of identifying the source of faults, whether "crashes" or other anomalous behavior, within such systems. Often, when a particular software application is used across an entire organization, the same fault within the software application may be experienced by more than one user.
This can lead to a significant amount of wasted time as users cope with "crashing" software applications. The possibility of data loss or corruption also exists. Presently, however, there is no reliable way of correlating software faults across an organization or to diagnose and solve the problem.
US 2007/0038896 A1 discloses call-stack pattern matching for problem resolution within software. For detecting and diagnosing software faults within an organization and/or across multiple organizations, call-stack information is used for matching a fault condition with prior fault conditions. A call stack is a component of the runtime system which stores temporary data. A "snapshot" of the actual state of a call stack is called a stack trace; often, such a call-stack snapshot is itself abbreviated as "call stack". One embodiment in this prior art document is a method of diagnosing a fault condition within software. The method can include, in response to a fault condition within a computing system belonging to an organization, automatically sending call-stack information for the fault condition to a first server within the organization.
Within the first server, the call-stack information for the fault condition can be compared with call-stack information from prior fault conditions that occurred within the organization to determine whether the call-stack information for the fault condition matches call-stack information from one of the prior fault conditions. The method further can include sending the call-stack information to a second server for comparison with call-stack information from prior fault conditions that occurred within at least one different organization if the call-stack information for the fault condition does not match.
SUMMARY OF THE INVENTION
It is an object of the invention to provide an improved method for detecting and diagnosing software error conditions. Another object is to provide a system for detecting and diagnosing software error conditions.
These objects are achieved by the features of the independent claims. The other claims and the specification disclose advantageous embodiments of the invention.
A method is proposed for problem determination in connection with a computer program, said method comprising the following steps during testing of the computer program: creating an annotated problem-signature database encompassing one or more test cases, each test case causing an attempted operation to fail when the computer program is executed with the respective test case; running at least one such test case; collecting first system information produced by the test case which is run; generating an object consisting of a pair containing the first system information and a description of the test case which was run; and storing the object in the annotated problem-signature database for use in an analysis of a failure in an attempted operation which may occur when the computer program is executed.
The annotated problem-signature database consists of annotated problem signatures, i.e. pairs of the format [problem-signature, test case description], wherein a test case description is assigned to the corresponding problem signature.
A preferred embodiment of the annotated problem-signature database is an annotated stack trace database. Other machine-readable descriptions of a fault condition can, of course, be used.
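The pair structure described above can be sketched in a few lines of Python. This is a minimal illustration only; the class and field names (`AnnotatedSignature`, `ProblemSignatureDB`, `signature`, `description`) are hypothetical and do not come from the patent, and a real database would of course persist its entries rather than hold them in memory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnnotatedSignature:
    """One [problem-signature, test case description] pair."""
    signature: str    # e.g. a stack trace captured when the trouble test case failed
    description: str  # human-readable description of the test case that provoked it

class ProblemSignatureDB:
    """Annotated problem-signature database: a collection of annotated pairs."""
    def __init__(self):
        self.entries = []

    def store(self, signature, description):
        # Assign the test case description to its corresponding problem signature.
        self.entries.append(AnnotatedSignature(signature, description))

db = ProblemSignatureDB()
db.store("AuthError at delete_document (auth.py:42)",
         "Delete attempted by a user without administrative privileges")
```

An annotated stack trace database is then simply this structure with stack traces as the signatures.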
Favorably, the proposed method allows for an enumerative cartography of error signatures and error mapping in complex software (i.e. computer program) systems. The user is provided with helpful information for solving the problem which caused the failure of the software when it was executed.
The software error conditions can in particular comprise not only software faults but also user errors as well as erroneous configurations.
According to another aspect of the invention, a program product is proposed, comprising a computer useable medium including a computer readable program, wherein the computer readable program when executed on a computer causes the computer to run at least one such test case for a further computer program; collect first system information produced by the test case which is run; generate an object consisting of a pair containing the first system information and a description of the test case which was run; and store the object in the problem-signature database for use in an analysis of a failure in an attempted operation which may occur when the further computer program is executed.
According to another aspect of the invention a data processing system is proposed for creating an annotated problem-signature database for use in analysis of a failure of a computer program.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention together with the above-mentioned and other objects and advantages may best be understood from the following detailed description of the embodiment, but is not restricted to the embodiment, wherein: Fig. 1 shows a flow chart of a preferred method according to the invention; and Fig. 2 shows a preferred computer system for performing a preferred method according to the invention.
The drawings are merely schematic representations, not intended to portray specific parameters of the invention. Moreover, the drawings are intended to depict only a typical embodiment of the invention and therefore should not be considered as limiting the scope of the invention.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENT
In the following, a preferred proactive method for problem determination in complex software systems is proposed.
According to the preferred embodiment of the invention, the method for problem determination in a computer program is characterized by a two-step procedure: a load phase during testing of the software, and a lookup phase in which a user uses the software, as depicted in Fig. 1.
Typically, the load phase will be performed by the software provider. However, it is also possible that a load phase is done within an organization before the tested software is made accessible to members of the organization. For instance, the IT department of a firm can perform the load phase and then release the software to the employees of the firm.
During testing of the computer program in the load phase, the user, who can in particular be a person testing the software, a programmer and the like, tries to obtain appropriate test case suites for the given computer program (step 10), e.g. by a lookup in a repository or a database, and creates an annotated problem-signature database, by way of example an annotated stack trace database, encompassing one or more test cases which will each cause an attempted operation to fail when the computer program is executed with the test case. Preferably the test case suite contains "good" as well as "bad" cases, together with a description of each case. A "bad" case is herein called a trouble case; a "good" test case is herein called a regular test case.
For instance, a test case verifies that a user without administrative privileges cannot delete a document that was created by a different userid: the delete operation must fail.
It should be understood that if the trouble test case is run and the intended failure is provoked, the trouble test case is tested successfully. If no failure occurs, or a failure other than the expected one occurs, the test of the trouble test case has failed. A regular test case is tested successfully if no failure occurs.
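The delete example above can be sketched as a trouble test case that passes only when the intended failure is provoked. All names here (`delete_document`, the user names) are hypothetical illustrations, not part of the patent; the exception type stands in for whatever failure the tested system actually raises.

```python
def delete_document(doc_owner, requesting_user, is_admin=False):
    # Hypothetical operation under test: only the owner or an
    # administrator may delete the document.
    if requesting_user != doc_owner and not is_admin:
        raise PermissionError("insufficient privileges to delete document")
    return "deleted"

def run_trouble_test_case():
    """A trouble test case is tested successfully only if the
    intended failure actually occurs."""
    try:
        delete_document(doc_owner="alice", requesting_user="bob")
    except PermissionError:
        return "test passed: operation failed as intended"
    return "test FAILED: delete succeeded unexpectedly"

print(run_trouble_test_case())  # → test passed: operation failed as intended
```

A regular test case for the same operation would call `delete_document("alice", "alice")` and pass only if no exception is raised.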
Besides testing successful operations, computer program test suites known in the art usually also cover many tests of enforced error situations, i.e. situations in which attempted operations fail.
However, the available system information generated during those tests is not yet fully exploited. To ensure robustness of a computer program and/or a computer system, it is not sufficient to test its behavior in the regular test case. A regular case means that the configuration was done without errors, the user input is complete and consistent, authorizations are sufficient, and system resources are sufficient, so that the attempted operation is supposed to succeed. But to get evidence that a computer program and/or a computer system is robust, it must also be tested in the "bad" case. Trouble cases can include user errors, for example operation with insufficient authorizations; here, the operation must fail to avoid security risks. In another trouble case test, the system is run with insufficient resources; in this case, it is verified that the operation fails, but without damage, i.e. no data may be accidentally lost.
Usually, trouble case tests are part of the test suite, and it is verified that the behavior of the system neither violates security requirements nor brings down the computer program and/or the computer system. However, the output generated during those trouble test cases is usually not carefully analyzed, or even carefully preserved.
According to the preferred embodiment of the invention, during testing only the trouble test cases are selected in step 12, together with the descriptions of these trouble test cases.
Note that in existing computer programs and/or computer systems, error situations as they are enforced by trouble test cases are often not handled as carefully as regular test cases are. There are several possible reasons for this: first of all, it is much easier to imagine those cases for which a system is designed, as the specification document has to describe these cases.
However, it is by definition much harder to imagine and foresee all kinds of abuses, all kinds of wrong uses, and all kinds of possible misunderstandings and misconceptions by the user, and it takes a lot of creativity to enumerate the possible user errors. Secondly, the programmer cannot afford to neglect the regular cases, because a computer program unable to cover the regular test cases is definitely not working properly. In contrast, a system which does not handle all trouble cases properly may still be considered to be working properly. Finally, for the programmer it is more satisfying to enable the system to run the regular cases first, and to see the system work successfully during the first tests, rather than focusing on the trouble cases first and seeing the system fail in the first tests. This is why diagnostic message dialogues are often not perfectly designed, and the user is often left alone with error diagnostic information that is hard to understand, for example a stack trace.
According to the embodiment of the invention, for each trouble case (step 14), the case is run in step 16 and first detailed system information for the case is collected when an error occurs, i.e. when the attempted operation fails. Such detailed system information can preferably be a stack trace.
A call stack is a dynamic stack data structure which can store information about the active subroutines of a computer program. The call stack is often used for several related purposes, but one main task is to keep track of the point to which each active subroutine, i.e. each subroutine which has been called but has not yet completed execution by returning, should return control when it finishes executing. Since the call stack is organized as a stack, the caller pushes the return address onto the stack and the called subroutine, when it finishes, pops the return address off the call stack and transfers control to that address. If a called subroutine calls yet another subroutine, it will push its return address onto the call stack, and so on, with the information stacking up and unstacking as the program dictates.
The expression "call stack" is often used as a synonym for the actual status of the call stack at a particular moment.
Another synonym for the actual status of the call stack is "stack trace". The well-known stack trace is an example of a detailed description of an error situation: it is a snapshot of the computer's runtime stack, expressed as symbolic names (a nested structure which includes the involved source file names, classes, method calls, exact line numbers, etc.). Such a stack trace is usually very helpful for software developers, who have access to the source code and can understand what the source code is supposed to do, and less helpful for other users. Other examples of detailed descriptions of an error situation are error message identifiers and/or specific return codes. The stack trace is a report of the active stack frames instantiated by the execution of the computer program.
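As one concrete illustration of such a snapshot (not tied to any particular system from the patent), Python's standard `traceback` module captures exactly this kind of nested record of source files, line numbers and call chain at the point of failure:

```python
import traceback

def inner():
    # The enforced error situation: an attempted operation fails here.
    raise ValueError("enforced error situation")

def outer():
    inner()

try:
    outer()
except ValueError:
    # format_exc() renders the stack trace: file names, line numbers,
    # and the nested chain of calls (outer -> inner) at the failure point.
    trace = traceback.format_exc()

print(trace.splitlines()[-1])  # → ValueError: enforced error situation
```

The string `trace` is the kind of first system information that could be stored as a problem signature.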
In step 18 an object consisting of a pair containing the first system information and a description of the case which was run as a trouble case is generated, e.g. with the format
[first system information, test case description]
Each object is then stored in a repository in step 20. The repository can preferably contain an annotated problem-signature database, such as an annotated stack trace database, in which the object is stored. This repository can be used in an analysis of a failure in an attempted operation which may occur when the computer program is executed during the lookup phase.
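Steps 14 to 20 can be sketched as a simple load-phase loop. This is an illustrative sketch only: the function name `load_phase`, the list-of-tuples repository and the example trouble case are all hypothetical, and the patent leaves open what form the repository and the system information actually take.

```python
import traceback

def load_phase(trouble_cases, repository):
    """For each trouble case (step 14): run it (step 16), collect the
    stack trace when the attempted operation fails, pair it with the
    test case description (step 18), and store the pair (step 20)."""
    for description, case_fn in trouble_cases:
        try:
            case_fn()  # step 16: run the trouble case
        except Exception:
            first_system_information = traceback.format_exc()  # collect
            # step 18/20: store [first system information, test case description]
            repository.append((first_system_information, description))

repo = []
load_phase([("division by zero during report generation",
             lambda: 1 / 0)], repo)
```

After the loop, `repo` holds one annotated problem signature per trouble case that failed as intended.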
There is usually exactly one call stack associated with a running computer program, i.e. with each task or thread of a process, although additional stacks may be created for signal handling or cooperative multitasking. Since there is only one in this important context, it can be referred to as the stack, or the stack of the task, respectively.
In high-level programming languages, the specifics of the call stack are usually hidden from the programmer, who is given access only to the list of functions and not to the memory on the stack itself. Most assembly languages, on the other hand, require programmers to be involved with manipulating the stack. The actual details of the stack in a programming language depend upon the compiler, the operating system, and the available instruction set.
Favorably, when e.g. two instances of detailed system information (e.g. stack traces) are given, each describing a problem situation, a sufficient estimate can be computed of whether the user ran into these problem situations because of the same root cause. There are methods known in the art to find out, given two stack traces each representing an error situation, whether these stack traces describe identical situations or not. In addition, those methods are successfully used for computing an estimate of whether the root cause was the same for two given stack traces.
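One very simple heuristic in this spirit, shown purely for illustration, is to compare the sets of stack frames the two traces share. The patent does not prescribe any particular matching method, and real matching engines use considerably more elaborate techniques (e.g. weighting the topmost frames more heavily), so this is an assumption-laden sketch, not the claimed method:

```python
def frame_similarity(trace_a, trace_b):
    """Crude estimate of whether two stack traces stem from the same
    root cause: Jaccard overlap of their frame lines (illustrative only)."""
    frames_a = {ln.strip() for ln in trace_a.splitlines() if ln.strip()}
    frames_b = {ln.strip() for ln in trace_b.splitlines() if ln.strip()}
    if not frames_a or not frames_b:
        return 0.0
    return len(frames_a & frames_b) / len(frames_a | frames_b)

t1 = "at a.f(a.py:10)\nat b.g(b.py:20)"
t2 = "at a.f(a.py:10)\nat c.h(c.py:30)"
print(frame_similarity(t1, t2))  # → 0.3333333333333333
```

Identical traces score 1.0, disjoint traces 0.0, which already yields the ranking needed later in the lookup phase.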
When the computer program is executed under normal use conditions by a user, in particular when an operation is accidentally attempted in a wrong way, the execution of the computer program may fail, and error information, e.g. a stack trace, is generated and presented to the user. For many users, however, the detailed system information (e.g. the stack trace) describing the problem situation is not helpful. The user cannot see any correlation between the error symptoms and a solution to the problem. To the user, the system's behavior looks like random behavior, although in the majority of the cases the behavior is fully deterministic rather than random.
In addition, in many cases the system can provide many details of the error situation, i.e. why the attempted operation failed.
According to the preferred embodiment of the invention, on failure of the attempted operation when executing a case which turns out to be a problem case, second system information is collected to describe the problem or failure occurring in the situation where the attempted operation fails during execution of the computer program (step 30).
In step 32, the collected second system information is compared to the objects stored in the repository, e.g. the annotated stack trace database. In particular, the first system information of the stored objects is compared to the collected second system information. The matching results between the second system information and the objects in the annotated problem-signature database are ranked, preferably with the best matching system information first, thus assembling a sorted list which contains the objects with the best matching results in the comparison (step 34). The results, i.e. the associated test case descriptions, are then rendered to the user doing the lookup. The test case description can be the original description provided by the person who created the test case, or can be otherwise generated from said description, e.g. by an automated summarizing method or by manual editing. According to the invention, the preferred proposed approach allows for improving problem determination for a given software system, as it does not only focus on the symptom (which may be hard to interpret) but rather uses the symptom description as an opaque problem signature, and a matching function then looks for typical usage errors that result in the same or a similar problem signature. Thus, the user is presented with concrete examples of how, in the test phase for this product, an identical or similar error condition was produced by invalid steps. This makes it easier for the user to understand the types of root causes that are known to lead to a similar error situation.
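Steps 30 to 34 can be sketched as a lookup function that scores every stored pair against the newly collected second system information and returns the test case descriptions best match first. Again, all names are hypothetical, and the token-overlap `similarity` stands in for whichever stack-trace matching method is actually used:

```python
def lookup(second_system_information, repository, similarity):
    """Steps 32-34: compare the collected second system information
    against every stored [signature, description] pair, then return
    the test case descriptions ranked best match first."""
    scored = [(similarity(second_system_information, signature), description)
              for signature, description in repository]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [description for score, description in scored]

def similarity(a, b):
    # Placeholder matcher: fraction of shared whitespace-separated tokens.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

repo = [("PermissionError delete_document", "delete without admin rights"),
        ("ZeroDivisionError report totals", "empty input file for report")]
ranked = lookup("PermissionError in delete_document call", repo, similarity)
print(ranked[0])  # → delete without admin rights
```

The user is then shown `ranked`, i.e. concrete test case descriptions whose signatures resemble the failure just observed.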
Other known solutions to this problem often do not provide the information needed by the user. For instance, an error message catalog has the drawback that it focuses on the symptom (e.g. "cannot create object") but does not contain detailed information about the typical and/or most frequent usage errors.
A concrete example of this: due to a suboptimal design of some user interface, that user interface suggests pushing a particular button twice; however, this button causes creation of a particular object, and this creation fails at the second attempt. In addition, the granularity is the error message ID, so the actual context of an error condition is neglected although it may be vital for suggesting an appropriate user action to get rid of an error situation. Attempts to improve error messages have the drawback that a developer has to imagine what cases are causing a certain error, which is often incomplete. A symptom database is usually a manually assembled, hand-crafted description (rather than leveraging test case descriptions); sometimes stack traces are available, but no matching engine or ranking is available. "Technotes" are manually assembled, hand-crafted descriptions (rather than leveraging test case descriptions) which sometimes also provide stack traces, but with no matching engine, offering just a string search. A forum is usually a manually maintained, hand-crafted description (rather than leveraging test case descriptions); sometimes stack traces are available, but there is no matching engine, just a string search. Using a hotline exhibits the problem that the error description is lost and must first be restored during the phone call. So-called S.W.A.T. teams do not scale; moreover, even those teams do not know all of the problems.
The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc. Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read-only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
A preferred data processing system 200 as schematically depicted in Fig. 2 suitable for storing and/or executing program code will include at least one processor 202 coupled directly or indirectly to memory elements 204 through a system bus 206. The memory elements 204 can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices 208, 210 (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system 200 either directly or through intervening I/O controllers 212.
Network adapters 214 may also be coupled to the system 200 to enable the data processing system to become coupled to other data processing systems or to remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.
While the foregoing has been with reference to particular embodiments of the invention, it will be appreciated by those skilled in the art that changes in these embodiments may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.

Claims (10)

  1. A method for problem determination in connection with a computer program, said method comprising the following steps during testing of the computer program: creating an annotated problem-signature database encompassing one or more test cases, each test case causing an attempted operation to fail when the computer program is executed with the respective test case; running at least one such test case; collecting first system information produced by the test case which is run; generating an object consisting of a pair containing the first system information and a description of the test case which was run; and storing the object in the annotated problem-signature database for use in an analysis of a failure in an attempted operation which may occur when the computer program is executed.
  2. The method according to claim 1, comprising, when executing the computer program under normal use conditions, collecting second system information for description of a problem occurring in a situation where an attempted operation fails during execution of the computer program.
  3. The method according to claim 2, comprising making a comparison between the collected second system information and the objects stored in the annotated problem-signature database.
  4. The method according to claim 3, comprising ranking the matching results between the second system information and the objects in the annotated problem-signature database.
  5. The method according to claim 4, comprising assembling a sorted list which contains objects with best matching results in the comparison.
  6. The method according to claim 5, comprising presenting to the user at least one of the following for the test cases on the sorted list: the test case descriptions, and information derived from said test case descriptions.
  7. A program product comprising a computer useable medium having a computer readable program, wherein the computer readable program when executed on a computer causes the computer to perform a method according to any one of claims 1 to 6.
  8. A program product comprising a computer useable medium including a computer readable program, wherein the computer readable program when executed on a computer causes the computer to run at least one such test case for a further computer program; collect first system information produced by the test case which is run; generate an object consisting of a pair containing the first system information and a description of the test case which was run; and store the object in an annotated problem-signature database for use in an analysis of a failure in an attempted operation which may occur when the further computer program is executed.
  9. A data processing system for performing a method according to any one of claims 1 to 6 when said program is run on said computer.
  10. A data processing system configured to carry out the following steps during testing of a computer program: creating an annotated problem-signature database encompassing one or more test cases, each test case causing an attempted operation to fail when the computer program is executed with the respective test case; running at least one such test case; collecting first system information produced by the test case which is run; generating an object consisting of a pair containing the first system information and a description of the test case which was run; and storing the object in said annotated problem-signature database for use in an analysis of a failure in an attempted operation which may occur when the computer program is executed.
GB0902360A 2008-03-12 2009-02-13 Creating a program problem signature data base during program testing to diagnose problems during program use Withdrawn GB2458201A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP08152625 2008-03-12

Publications (2)

Publication Number Publication Date
GB0902360D0 GB0902360D0 (en) 2009-04-01
GB2458201A true GB2458201A (en) 2009-09-16

Family

ID=40548105

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0902360A Withdrawn GB2458201A (en) 2008-03-12 2009-02-13 Creating a program problem signature data base during program testing to diagnose problems during program use

Country Status (1)

Country Link
GB (1) GB2458201A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107346283A (en) * 2016-05-05 2017-11-14 ZTE Corporation Script processing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5928369A (en) * 1996-06-28 1999-07-27 Synopsys, Inc. Automatic support system and method based on user submitted stack trace
US20030005414A1 (en) * 2001-05-24 2003-01-02 Elliott Scott Clementson Program execution stack signatures
US6629267B1 (en) * 2000-05-15 2003-09-30 Microsoft Corporation Method and system for reporting a program failure
US20040078689A1 (en) * 2002-10-21 2004-04-22 I2 Technologies Us, Inc. Automatically identifying a program error in a computer program
US20070038896A1 (en) * 2005-08-12 2007-02-15 International Business Machines Corporation Call-stack pattern matching for problem resolution within software
US20070256114A1 (en) * 2006-04-28 2007-11-01 Johnson Lee R Automated analysis of collected field data for error detection
US20070283338A1 (en) * 2006-06-02 2007-12-06 Rajeev Gupta System and method for matching a plurality of ordered sequences with applications to call stack analysis to identify known software problems

Also Published As

Publication number Publication date
GB0902360D0 (en) 2009-04-01

Similar Documents

Publication Publication Date Title
Xu et al. Early detection of configuration errors to reduce failure damage
Choudhary et al. Automated test input generation for android: Are we there yet?(e)
US7882495B2 (en) Bounded program failure analysis and correction
US8839201B2 (en) Capturing test data associated with error conditions in software item testing
KR102268355B1 (en) Cloud deployment infrastructure validation engine
Hu et al. Automating GUI testing for Android applications
US9292416B2 (en) Software development kit testing
US10067858B2 (en) Cloud-based software testing
US8756460B2 (en) Test selection based on an N-wise combinations coverage
US8839202B2 (en) Test environment managed within tests
US9684587B2 (en) Test creation with execution
US9069902B2 (en) Software test automation
US8949794B2 (en) Binding a software item to a plain english control name
US20070220370A1 (en) Mechanism to generate functional test cases for service oriented architecture (SOA) applications from errors encountered in development and runtime
US20040194063A1 (en) System and method for automated testing of a software module
US10387294B2 (en) Altering a test
US9183122B2 (en) Automated program testing to facilitate recreation of test failure
Zheng et al. Towards understanding bugs in an open source cloud management stack: An empirical study of OpenStack software bugs
US20080141225A1 (en) Method for migrating files
US9292422B2 (en) Scheduled software item testing
Carzaniga et al. Self-healing by means of automatic workarounds
US8201151B2 (en) Method and system for providing post-mortem service level debugging
GB2458201A (en) Creating a program problem signature data base during program testing to diagnose problems during program use
US8739130B2 (en) Quality assurance testing
Saha et al. Finding resource-release omission faults in linux

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)