EP3616066B1 - Human-readable, language-independent stack trace summary generation - Google Patents


Info

Publication number
EP3616066B1
EP3616066B1 (application EP18766472.7A)
Authority
EP
European Patent Office
Prior art keywords
stack trace
language
frames
independent
summaries
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP18766472.7A
Other languages
German (de)
English (en)
Other versions
EP3616066A1 (fr)
Inventor
Dominic HAMON
Ruixue LI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Publication of EP3616066A1
Application granted
Publication of EP3616066B1
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/362Software debugging
    • G06F11/3636Software debugging by tracing the execution of the program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0766Error or fault reporting or storing
    • G06F11/0769Readable error formats, e.g. cross-platform generic formats, human understandable formats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations

Definitions

  • This invention relates to testing.
  • In software development and release environments, detecting and fixing software test failures (often called "bugs") as early as possible helps ensure that the software being integrated or released will be error-free, production-quality code. In these environments, it can be difficult to determine the root cause of a particular test failure, as well as the change in code that may have introduced it. This problem is exacerbated in larger software development organizations where many individuals and/or teams of individuals contribute code changes for integration into software builds for testing and product release.
  • Test failure data may include metadata identifying a specific priority, the rate of failure, and/or the configuration of the integration or build environment in which the test failure was discovered.
  • Stack traces are generated and output when a program's runtime cannot interpret an active stack frame at a certain point during execution. Stack traces may also be displayed to users as part of an error message. Stack traces identify the active stack frames that are associated with the failure. Stack traces are difficult to read and interpret because their format is language-dependent and typically includes a sequenced list of the functions that were called up to the point at which the error occurred and the stack trace was generated. Stack traces can also be retrieved via language-specific built-in support, such as system calls that return the current stack trace. The generated stack traces are commonly used in software development during interactive testing and debugging activities to determine the portions of code that are specifically associated with a particular test failure.
  • US 2017/185467 A1 discloses a method and an apparatus for a computing device.
  • the computing device may generate stacks for crash dump in response to failures, each of the stacks may include a plurality of stack frames from bottom to top, and each of the stack frames may include function information associated with a corresponding failure.
  • the method may include: extracting corresponding function name information from the stack frames in the stacks; generating simplified stack frames based on the corresponding function name information to obtain simplified stacks for the stacks; and determining a similarity between the failures based on a similarity between the simplified stacks of the failures.
  • US 9 009 539 B1 discloses a method for identifying and grouping program run time errors, where a stack trace associated with an application program is received and at least one recognizable term is searched for in the stack trace. A digital signature is generated from at least a portion of the stack trace that includes the at least one recognizable term. If the digital signature matches a known digital signature among a plurality of known digital signatures, the stack trace is grouped with other stack traces associated with the known digital signature. Method call graphs in grouped stack traces may be analyzed to determine common pathways leading to an error.
  • US 9 213 622 B1 discloses a method of receiving a stack trace, where the stack trace refers to executed code that crashed; identifying one or more lines of the executed code that caused the executed code to crash; identifying, from a code repository, contact information of a developer from a plurality of developers that are responsible for the executed code, where the developer is responsible for a code commit that refers to the one or more lines of the executed code; and notifying, through the contact information, the developer that the one or more lines caused the executed code to crash.
  • Ghafoor Maryam Abdul et al. propose a method to identify cross-platform bug correlation to detect faulty functions, using the function call stack information given in bug reports.
  • Ghafoor Maryam Abdul et al. collect and process bug reports from multiple platforms to compute a similarity, based on their similarity metric, between the occurrence sequences of function calls within different bug reports.
  • For a solved bug, information about the faulty function is extracted by analyzing its bug report, in order to propose a fix for a similar bug involving the same function within a correlated bug report. If a fix for one bug exists in one application, it can be used to resolve a similar bug in another application.
  • the description provided in the background section should not be assumed to be prior art merely because it is mentioned in or associated with the background section.
  • the background section may include information that describes one or more aspects of the subject technology.
  • the disclosure relates to a computer-implemented method for generating a human-readable, language-independent stack trace summary.
  • the method includes: receiving a plurality of error reports, each error report including a language-dependent stack trace generated in response to a software test failure of a code implemented in a specific programming language and a plurality of metadata, wherein the generated language-dependent stack trace includes one or more frames; generating a language-independent stack trace summary by processing each frame of the language-dependent stack trace regardless of the specific programming language, wherein the processing includes at least two of: removing line number values from each of the one or more frames, removing column number values from each of the one or more frames, collapsing one or more file names identified in each of the one or more frames, removing spaces from each of the one or more frames, and removing special characters from each of the one or more frames; outputting the generated language-independent stack trace summary; generating a cluster of language-independent stack trace summaries, wherein the cluster of stack trace summaries …
  • the method includes processing each frame of the language-dependent stack trace by at least three of: removing line number values from each of the one or more frames, removing column number values from each of the one or more frames, collapsing one or more file names identified in each of the one or more frames, and removing spaces from each of the one or more frames. In some implementations, the method includes processing each frame of the language-dependent stack trace by all of: removing line number values from each of the one or more frames, removing column number values from each of the one or more frames, collapsing one or more file names identified in each of the one or more frames, and removing spaces from each of the one or more frames.
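The frame-processing steps enumerated in the claims above can be sketched as a small normalization routine. The regular expressions, function names, and frame format below are hypothetical illustrations; the claims only require that some subset of the listed removals be applied.

```python
import re

def normalize_frame(frame: str) -> str:
    """Strip language-specific detail from one stack frame (a sketch of
    the claimed processing; the frame format shown is hypothetical)."""
    # Remove line and column number values such as ":42:13".
    frame = re.sub(r":\d+(?::\d+)?", "", frame)
    frame = re.sub(r"\bline \d+\b", "", frame)
    # Collapse a file name: keep the base name, drop directories and extension.
    frame = re.sub(r"(?:[\w.-]+/)+([\w-]+)\.\w+", r"\1", frame)
    # Remove spaces, then remaining special characters.
    frame = re.sub(r"\s+", "", frame)
    frame = re.sub(r"[^\w.]", "", frame)
    return frame

def summarize(stack_trace: list[str]) -> str:
    """Concatenate the normalized frames into one language-independent summary."""
    frames = [normalize_frame(f) for f in stack_trace]
    return ".".join(f for f in frames if f)
```

For example, a JavaScript-style frame such as `at render (src/ui/widget.js:42:13)` normalizes to the compact token `atrenderwidget`, and the same routine applies unchanged to frames from other languages.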
  • the method includes applying a hash function to the generated language-independent stack trace summary and outputting the hashed language-independent stack trace summary.
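Applying a hash function to the generated summary can be sketched as follows. SHA-256 is an assumed choice; the claim does not name a specific hash function.

```python
import hashlib

def hash_summary(summary: str) -> str:
    # Hash the language-independent summary so that equal summaries map to
    # equal, fixed-length keys suitable for lookup-table comparison.
    return hashlib.sha256(summary.encode("utf-8")).hexdigest()
```

Because equal summaries produce equal hashes, the hashed value can serve as a compact key when comparing a new summary against previously stored ones.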
  • the plurality of metadata included in the error report identifies one or more of: a unique identifier, a build version, a client name, an HTTP referrer, an error type, or a test configuration description.
  • the cluster is associated with a unique software test failure based on comparing a value in the language-independent stack trace summary to values stored in one or more lookup tables including lists of previously determined unique software test failures.
  • the unique software test failure is assigned to an owner-like individual based on determining change histories associated with one or more file names identified in one or more frames included in the language-independent stack trace summary.
  • the unique software test failure is assigned to a team of individuals based on determining the ownership of code paths associated with one or more file names identified in one or more frames included in the language-independent stack trace summary and assigning the unique software test failure based on the determined code path ownership.
  • a system for generating a human-readable, language-independent stack trace summary includes a memory storing computer-readable instructions and one or more lookup tables.
  • the system also includes a processor configured to execute the computer-readable instructions, which when executed carry out the method comprising: receiving a plurality of error reports, each error report including a language-dependent stack trace generated in response to a software test failure of a code implemented in a specific programming language and a plurality of metadata, wherein the generated language-dependent stack trace includes one or more frames; generating a language-independent stack trace summary by processing each frame of the language-dependent stack trace regardless of the specific programming language, wherein the processing includes at least two of: removing line number values from each of the one or more frames, removing column number values from each of the one or more frames, collapsing one or more file names identified in each of the one or more frames, removing spaces from each of the one or more frames, and removing special characters from each of the one or more frames;
  • the processors are further configured to process each frame of the language-dependent stack trace by at least three of: removing line number values from each of the one or more frames, removing column number values from each of the one or more frames, collapsing one or more file names identified in each of the one or more frames, and removing spaces from each of the one or more frames.
  • the processors are further configured to process each frame of the language-dependent stack trace by all of: removing line number values from each of the one or more frames, removing column number values from each of the one or more frames, collapsing one or more file names identified in each of the one or more frames, and removing spaces from each of the one or more frames.
  • the processors are configured to apply a hash function to the generated language-independent stack trace summary and output the hashed language-independent stack trace summary.
  • the plurality of metadata included in the error report identifies one or more of: a unique identifier, a build version, a client name, an HTTP referrer, an error type, or a test configuration description.
  • the cluster is associated with a unique software test failure based on comparing a value in the language-independent stack trace summary to values stored in one or more lookup tables including lists of previously determined unique software test failures.
  • the unique software test failure is assigned to an owner-like individual based on determining change histories associated with one or more file names identified in one or more frames included in the language-independent stack trace summary.
  • the unique software test failure is assigned to a team of individuals based on determining the ownership of code paths associated with one or more file names identified in one or more frames included in the language-independent stack trace summary and assigning the unique software test failure based on the determined code path ownership.
  • not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure.
  • the disclosed system and method relate to processing stack traces associated with test failures and generating a stack trace summary.
  • the stack trace summary includes a stack trace format that is language-independent and more easily read and interpreted by humans, such as the individuals or team of individuals who may be assigned to fix the code which corresponds to the stack trace generated as a result of a test failure.
  • the stack traces that are generated as a result of a test failure are processed by the system and method to create a transformed stack trace as a string that includes concatenated file names of the files that include the code that was being called at the time the failure occurred.
  • the transformed stack trace may exclude other data that is commonly included in a stack trace such as line numbers, column numbers, spaces, and special characters as well as data pertaining to the specific build environment in which the test failure occurred.
  • the transformed stack trace may be included in a stack trace summary which may include a unique identifier and other metadata associated with a specific test failure.
  • the stack trace summaries are aggregated into clusters including similar stack trace summaries and may be stored in a database. Tables may be used to determine which cluster a particular stack trace summary should be included in. Each stack trace summary cluster may represent a single canonical issue such that the stack trace summaries represented by the stack trace summary cluster possess similar error types, similar error messages and similar stack frame data. Tables may also be used to compare data associated with each stack trace summary to data that is associated with known issues in order to determine whether or not a particular stack trace summary corresponds to a known unique software test failure.
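Aggregating summaries into clusters can be sketched as a dictionary grouping keyed on the summary content. The report structure below is a hypothetical shape; the disclosure clusters on summary similarity, which exact-match grouping approximates.

```python
from collections import defaultdict

def cluster_summaries(reports):
    """Group error reports whose stack trace summaries are identical.

    Each report is assumed to be a dict with 'id' and 'summary' keys
    (hypothetical shape). Reports sharing a summary land in one cluster,
    which can then represent a single canonical issue.
    """
    clusters = defaultdict(list)
    for report in reports:
        clusters[report["summary"]].append(report["id"])
    return dict(clusters)
```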
  • a unique software test failure record may be created for each unique stack trace summary cluster and assigned to an individual or team of individuals (e.g., a developer or team of developers) possessing the best knowledge of the error and the failing code that is associated with the error.
  • Data sources such as a source code control system and/or an assignment database may be used to assign a particular cluster of stack trace summaries (and the associated unique software test failure record associated with the cluster) to an individual or a team of individuals.
  • the assignment database may be used, in conjunction with other data sources, to identify the code path ownership for the files identified in the cluster of stack trace summaries in order to identify the team of individuals who own the particular files which may have caused the test failure.
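Assigning a cluster to a team based on code-path ownership can be sketched as below. The ownership table shape (path prefix to team name) and the majority-vote rule are assumptions for illustration; the disclosure only requires that code-path ownership determine the assignment.

```python
from collections import Counter

def assign_team(summary_files, path_ownership):
    """Assign a failure to the team owning the code paths of the files
    named in the summary. `path_ownership` maps path prefixes to team
    names (a hypothetical ownership table); the team owning the most
    files in the summary is chosen."""
    counts = Counter()
    for file_path in summary_files:
        for prefix, team in path_ownership.items():
            if file_path.startswith(prefix):
                counts[team] += 1
    return counts.most_common(1)[0][0] if counts else None
```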
  • the unique test failure record can be assigned to that team to address and fix the code which produced the test failure.
  • data from a source code control system may be leveraged to determine the specific person most likely to have introduced the code change associated with the test failure so that the test failure record can be assigned to that person.
  • the ownership for the test failure and corresponding test failure record can be assigned to an individual who may have most recently modified one or more of the files (causing the test failure).
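Assigning the failure to the individual who most recently modified an implicated file can be sketched as below. The `change_history` structure (file name to a list of timestamped authors) is a hypothetical stand-in for data exported from a source code control system.

```python
def assign_owner(summary_files, change_history):
    """Assign a failure to the individual who most recently changed any
    file named in the summary. `change_history` maps a file name to a
    list of (timestamp, author) pairs (hypothetical source-control data)."""
    latest = None
    for file_name in summary_files:
        for timestamp, author in change_history.get(file_name, []):
            if latest is None or timestamp > latest[0]:
                latest = (timestamp, author)
    return latest[1] if latest else None
```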
  • Providing a system and method to efficiently identify, characterize and assign test failures as new or duplicate issues, in near real-time to the appropriate individual or team of individuals responsible for fixing the failing code, would enable a software development organization to produce error-free, high-quality code more quickly while utilizing development resources more effectively.
  • Utilizing the stack trace information that is routinely generated by software test failures, and processing it into a format that offers better readability and comparison, may enable a software development organization to more accurately identify, cluster, and assign software test failures to be fixed.
  • It can be difficult to determine whether test failures are new test failures or previously known test failures.
  • testing the software can generate large numbers of test failure records that need to be evaluated and assigned to resources to fix the code which may have produced the test failure.
  • the task of classifying and identifying test failure ownership is a manual task where a test engineer reviews the conditions under which the test failed and the code that was associated with the failure. This manual review process can be very time consuming and require specific knowledge of the test environment, test configurations and the files or code paths which may have caused the test failure.
  • the evaluation and assignment process may be performed automatically but can generate many false positive test failures and duplicate test failure records because these systems typically lack any reference data or processing capabilities to compare newly generated test failures records to known test failure records. While the automated test system may automatically generate test failures and test failure records, the identification and verification of unique, non-duplicative test failures or test failure records still requires manual intervention.
  • a solution to this problem may be achieved by the disclosed system and method whereby error reports associated with test failures generated in a testing environment may be processed to automatically determine, in real-time, the uniqueness of a particular software test failure as well as the assignment of the test failure record to an individual or team of individuals responsible for fixing the code which produced the test failure.
  • stack traces generated as a result of a test failure may be transformed into language-independent formats that are more easily interpreted compared to their originally generated format. Transforming the stack trace not only aids readability, but further enables more efficient processing for comparison with other stack traces.
  • Stack traces and stack trace summaries with similar content are clustered together and associated with a unique software test failure. The cluster of software stack traces and the corresponding unique software test failure record can then be more easily assigned to the development resources best suited to review the code which produced the test failure.
  • Figure 1 illustrates an example architecture 100 for generating a human-readable, language-independent stack trace summary.
  • the architecture 100 includes a computing device 105 and a test environment 110.
  • the architecture 100 also includes a plurality of error reports 115 and a stack trace processing server 120.
  • the architecture 100 also includes stack trace summaries 125, such as stack trace summary 125a and stack trace summary 125b.
  • a computing device 105 such as a desktop computer, laptop computer, tablet, or any other similar network-accessible computing device may exchange data with test environment 110.
  • a developer or software engineer may develop code in a development environment, such as a developer sandbox, that may be configured on computing device 105.
  • the developer may transmit or submit software code or changes to portions of software code to test environment 110 for testing.
  • the software code may be new code that the developer seeks to integrate into a new build of a product codebase, such as a code to be incorporated into a production environment for subsequent release in a software product.
  • the software code may be changes to portions of existing code that the developer seeks to integrate into an existing build for testing purposes, such as a code change necessary to fix an error found through testing the software code.
  • the computing device 105 may be configured with, or with access to, a variety of software development resources such as integrated development environments (IDEs), unit test environments, sandbox or integration test environments, source code control systems, version control systems, or any other such environments or tools to provide the developer with a broad range of capabilities for creating, storing, managing, testing, integrating and distributing software code.
  • the computing device 105 may also be utilized by software test engineers.
  • Software test engineers or quality assurance engineers develop software tests configured to evaluate or test specific behaviors of software code.
  • a test engineer may develop tests on computing device 105 and may transmit those tests to test environment 110.
  • a test engineer may interact with test facilities or resources that are configured on the test environment 110, such as an automated test environment, an interactive test environment, a distributed test environment, as well as integration, build and/or production test environments.
  • the computing device 105 may be used to write tests for submission to or integration with a specific test module or testing resource that is included in the test environment 110.
  • the computing device 105 may be used to manage or configure existing testing resources or testing modules that are included in the test environment 110.
  • the test environment 110 receives data from the computing device 105.
  • the test environment 110 may include one or more servers storing instructions and data associated with various resources or tools that may be utilized in a software development organization for testing software.
  • resources can include, but are not limited to, integration testing environments, build testing environments, production testing environments, distributed test environments, automated and interactive test environments, and any similar environment that may be configured for the purpose of testing software.
  • the test environment 110 may be configured to receive and/or collect test failures found in a canary release environment.
  • the test environment 110 may be configured to receive and/or collect test failures found in a production release environment. Additionally, or alternatively, the test environment 110 may also include software configuration management systems such as version control systems or source code control systems.
  • a software development organization may configure the test environment 110 to test code changes associated with a specific product, module, or implemented by a specific group of developers in the organization.
  • the test environment 110 may be configured with a variety of software testing architectures, resources, and functionality depending on the differing needs of the software development organizations operating the test environment 110.
  • the test environment 110 may be configured to record errors found during testing (also known as bugs or defects) and output lists of testing errors and the associated error data to other testing components for subsequent processing or to teams or individuals who are in the best position to fix the software code that is associated with the errors found during testing.
  • the test environment 110 exchanges error data, such as error reports 115, with the stack trace processing server 120.
  • the errors and error data generated by testing software code using the test environment 110 may be collected into a list of error reports, such as the error reports 115 which includes multiple individual error reports.
  • An individual error report may be identified by a unique identifier, such as an error report ID.
  • the error reports 115 include four unique error reports.
  • One specific error report is identified as error report 1022 which is associated with a buffer overflow error that was discovered as a result of testing performed in the test environment 110.
  • the error reports 115 also include error report 1023, which is associated with a memory allocation error.
  • an individual error report 115 may also include a language-dependent stack trace including one or more frames (not shown) which is generated in response to a software test failure associated with a test performed in or managed by the test environment 110.
  • An individual error report 115 may also include a plurality of metadata about the test failure.
  • the metadata may include, but is not limited to, an error report identifier and the build version of the code that generated the error.
  • the error report metadata may also include a client name identifying a specific computing device 105 on which the error was initially discovered or can be reproduced.
  • the error report metadata may also include an HTTP referrer, an error type description, as well as an error message and a test configuration description that are each associated with the specific test for which the software code failed during testing in the test environment 110. Additional details of the error reports 115 will be discussed in more detail later in relation to Figure 4 .
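Putting the metadata fields above together, an individual error report might be represented as the following record. Every concrete value, field name, and the frame format are hypothetical illustrations of the fields the disclosure names.

```python
# A hypothetical error report record with the metadata fields the
# disclosure describes; all values are illustrative, not from the patent.
error_report = {
    "error_report_id": 1022,                         # unique identifier
    "build_version": "build-2018.03.1",              # build of the failing code
    "client_name": "test-host-17",                   # device where the error was found
    "http_referrer": "https://example.test/suite",   # HTTP referrer (hypothetical)
    "error_type": "buffer overflow",
    "error_message": "heap buffer overflow in render()",
    "test_configuration": "integration, linux-x86_64",
    "stack_trace": [                                 # language-dependent frames
        "at render (src/ui/widget.js:42:13)",
        "at main (src/app.js:7:1)",
    ],
}
```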
  • the architecture 100 also includes a stack trace processing server 120.
  • the stack trace processing server 120 receives error reports 115 from the test environment 110 and processes the received error reports 115 to generate human-readable, language-independent stack trace summaries.
  • the stack trace processing server 120 may then output the generated language-independent stack trace summaries, for example, the output language-independent stack trace summaries 125, shown as stack trace summaries 125a and 125b.
  • the stack trace processing server 120 may be configured as a server in a network which includes a memory storing error data as well as computer-readable instructions, and one or more processors to execute the computer-readable instructions.
  • the stack trace processing server 120 may include data and/or components to receive error reports and process the error reports to generate the language-independent stack trace summaries 125.
  • the stack trace processing server 120 also includes functionality and/or components to generate a cluster of language-independent stack trace summaries 125 that are similar.
  • the stack trace processing server 120 may include a database storing lookup tables and other components that are used to associate a generated cluster of similar language-independent stack trace summaries 125 to a unique software test failure.
  • the stack trace processing server 120 may include functionality and/or components to assign the cluster of similar language-independent stack trace summaries 125 that have been associated with a unique software test failure to a specific developer or development team who is best suited to fix the code corresponding to the software test failure.
  • the stack trace processing server 120 and the functionality that is configured within the server enables a software development organization to more rapidly process and diagnose errors found during testing in order to provide individual developers and development teams with more precise error data about the software code being developed.
  • the stack trace processing server 120 may generate a language-independent stack trace summary 125 that is easier for a human to interpret because the processing performed by the stack trace processing server 120 has removed or reformatted data that is typically included in a stack trace such as spaces, line number values, column number values, special characters, and file names from the contents of each frame that is included in the language-independent stack trace summary 125.
  • the stack trace processing server 120 may generate the language-independent stack trace summary 125 regardless of the specific language that the software code being tested was implemented in or regardless of the specific build that the software code being tested was included in.
  • the stack trace processing server 120 generates a cluster of similar language-independent stack trace summaries 125, which, when associated with a unique software test failure, better enable members of a software development organization to determine whether the error report and corresponding language-independent stack trace summaries 125 are related to an existing or previously-identified software test failure or whether the error report and corresponding language-independent stack trace summaries 125 are related to a new, canonical software test failure.
  • the processing performed by the stack trace processing server 120 enables more efficient identification and de-duplication of unique software test failures which can save time and resources in software development organizations. Additional details of the functionality and components included in the stack trace processing server 120 will be discussed in more detail later in relation to Figure 2 .
  • the architecture 100 also includes stack trace summaries 125.
  • the stack trace summaries 125 include human-readable, language-independent stack traces and correspond to one or more error reports 115 that are received by the stack trace processing server 120.
  • the stack trace processing server 120 has generated language-independent stack trace summaries 125a and 125b that respectively correspond to the two received error reports (e.g., error report ID 1022 and error report ID 1023).
  • the stack trace processing server 120 generates the stack trace summaries 125 by methods that will be described in relation to Figures 3A and 3B .
  • the stack trace that is included in the stack trace summaries 125 provides developers and/or test engineers with a more readable stack trace format than is normally generated by a compiler when there is an error in software code or portions of software code.
  • the stack trace summaries 125 also include metadata (not shown) describing details that are associated with the specific test or test configuration for which the software code failed during testing. Additional details of the stack trace summaries 125 will be discussed in more detail later in relation to Figures 3B and 4 .
  • FIG. 2 is an example block diagram of a system for generating a human-readable, language-independent stack trace summary according to some implementations.
  • System 200 includes a stack trace processing server 120, such as the stack trace processing server 120 shown in Figure 1 .
  • the stack trace processing server 120 includes a communications module 205 and a processor 210.
  • the stack trace processing server 120 also includes a memory 215 and a database 220.
  • the stack trace processing server 120 also includes a stack trace processing module 225 which includes a summarizer 230 and a clusterizer 235.
  • the clusterizer 235 is shown in dashed lines to indicate that, in some implementations, the clusterizer 235 may be included in the stack trace processing module 225 and, in some implementations, the clusterizer 235 may not be included in the stack trace processing module 225.
  • the stack trace processing server 120 includes a source code control system 240 and an assignment module 245.
  • the source code control system 240 and the assignment module 245 are also shown in dashed lines to indicate that one or both components may be located outside of the stack trace processing server 120, for example the source code control system 240 and the assignment module 245 could be located in the test environment 110 or on a different network accessible server that is located remotely from the test environment 110 or the stack trace processing server 120.
  • the system 200 includes a stack trace processing server 120.
  • the stack trace processing server 120 operates to receive, store and process the error reports 115 received from the test environment 110.
  • the stack trace processing server 120 can be any device having an appropriate processor, memory, and communications capability for generating a human-readable, language-independent stack trace summary.
  • the stack trace processing server 120 also generates a cluster of language-independent stack trace summaries 125.
  • the stack trace processing server 120 associates each generated cluster of similar stack trace summaries 125 to a unique software test failure.
  • the stack trace processing server 120 may perform operations to assign the unique software test failure that is associated with the cluster of similar stack trace summaries 125 to an individual developer or a team of individual developers, for example by assignment module 245.
  • one or more stack trace processing servers 120 can be co-located with the test environment 110 and/or the computing device 105.
  • the stack trace processing server 120 may be located remotely from the test environment 110, for example in a cloud computing testing facility or in a remote data center that is connected to the test environment 110 via a network, such as the internet or a private network operated by the software development organization.
  • the stack trace processing server 120 includes a communications module 205.
  • the communications module 205 receives and transmits data and/or computer-executable instructions that are associated with the processing and functionality corresponding to the stack trace processing server 120. For example, the communications module 205 receives the error reports 115 that are transmitted from the test environment 110 and outputs the generated language-independent stack trace summaries 125. In some implementations, the communications module 205 may output a unique software test failure that is associated with a cluster of similar language-independent stack trace summaries 125. In some implementations, the communications module 205 may output a hashed language-independent stack trace summary 125.
  • the communications module 205 receives and transmits data and/or computer-executable instructions that are associated with the processing and functionality corresponding to the stack trace processing server 120 via a network (not shown).
  • the network can include, for example, any one or more of a personal area network (PAN), a local area network (LAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a broadband network (BBN), the Internet, and the like.
  • the network can include, but is not limited to, any one or more of the following network topologies, including a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, and the like.
  • the stack trace processing server 120 also includes one or more processors 210 configured to execute instructions that, when executed, cause the processors to generate a human-readable, language-independent stack trace summary 125 as well as other processing and functionality corresponding to the stack trace processing server 120 that will be described herein.
  • the processor 210 operates to execute computer-readable instructions and/or data stored in memory 215, as well as the database 220, and transmit the computer-readable instructions and/or data via the communications module 205.
  • the stack trace processing server 120 also includes a memory 215 configured to store the data and/or computer-executable instructions that are associated with the processing and functionality corresponding to the stack trace processing server 120.
  • memory 215 may store one or more error reports 115 that include language-dependent stack traces and metadata to be processed in order to generate one or more human-readable, language-independent stack trace summaries 125, as shown in Figure 1 , which correspond to the one or more received error reports 115.
  • the memory 215 may store one or more lookup tables that are used to associate a cluster of similar language-independent stack trace summaries 125 to a unique software test failure.
  • the memory 215 may store hash algorithms or functions that are used to generate a hashed version of the language-independent stack trace summary 125.
  • the stack trace processing server 120 includes a database 220.
  • the database 220 stores data used to generate the language-independent stack trace summaries 125.
  • the database 220 may also store data such as the one or more lookup tables that are used to associate a cluster of similar language-independent stack trace summaries 125 to a unique software test failure.
  • the database 220 may include one or more tables identifying all of the currently known issues that are associated with a particular code base or build version.
  • the database 220 may include a table of known test failures that are associated with a particular code language, test suite, error message, and/or error type.
  • the database 220 may include tables associating a known test failure to an individual developer, or team of individual developers.
  • the database 220 may store data that is used to assign a particular software test failure to an individual developer or a team of individual developers, such as data associated with source code control systems including file change histories, code path and file ownership data.
  • the database 220 may store a variety of data that can be used to generate human-readable, language-independent stack trace summaries 125 and map, assign, or otherwise associate the stack trace summaries 125 to unique software test failures for assignment to an individual developer or team of developers.
  • the stack trace processing server 120 includes stack trace processing module 225.
  • the stack trace processing module 225 functions to process the received error reports 115 and generate stack trace summaries 125 that include human-readable, language-independent stack traces generated as a result of software code failing a test.
  • the stack trace processing module 225 may receive a stack trace independent of an error report 115 (e.g., a stack trace not included in an error report) and may generate a language-independent stack trace based on the received stack trace.
  • the stack trace processing module 225 includes a summarizer 230.
  • the summarizer 230 processes the data that is included in one or more of the received error reports, such as error reports 115 and generates a human-readable stack trace summary, such as the stack trace summaries 125 shown in Figure 1 .
  • the summarizer 230 generates a stack trace summary 125 by processing each frame of the language-dependent stack trace and the associated metadata included in the received error report 115.
  • the summarizer 230 processes each frame of the language-dependent stack trace by performing a combination of processing steps that can include one or more of: removing line number values from each frame, removing column number values from each frame, collapsing file names identified in each frame, removing spaces from each frame, and/or removing special characters from each frame.
  • the language-independent stack trace summary 125 is generated by processing each frame of the language-dependent stack trace included in the error report 115 by any two of the aforementioned processing steps.
  • the language-independent stack trace summary 125 is generated by processing each frame of the language-dependent stack trace included in the error report 115 by any three of the aforementioned processing steps.
  • the language-independent stack trace summary 125 is generated by processing each frame of the language-dependent stack trace included in the error report 115 by all of the aforementioned processing steps. The resulting stack trace that is generated is more easily interpreted by a developer or team of developers due to the removal of these specific data elements or values from the received stack trace.
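The per-frame processing steps described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the names `normalize_frame` and `summarize` are hypothetical, and the assumption that each frame arrives as a plain string ending in `:line:column` is for demonstration only.

```python
import re

def normalize_frame(frame: str) -> str:
    """Reduce one stack trace frame to a compact, comparable token by
    applying the processing steps described above: remove line and
    column number values, remove spaces, and remove special characters,
    collapsing the file name. The ':line:column' suffix is an assumption."""
    # Remove trailing line number and optional column number values.
    frame = re.sub(r":\d+(?::\d+)?$", "", frame)
    # Remove spaces from the frame.
    frame = frame.replace(" ", "")
    # Remove special (non-alphanumeric) characters, collapsing the file name.
    frame = re.sub(r"[^0-9A-Za-z]", "", frame)
    return frame.lower()

def summarize(stack_trace: list[str]) -> list[str]:
    """Apply the per-frame normalization to every frame in a stack trace."""
    return [normalize_frame(f) for f in stack_trace]
```

For example, `summarize(["my module.py:42:7", "Net$Handler.java:101"])` yields `["mymodulepy", "nethandlerjava"]`, frames stripped of build- and language-specific detail.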
  • the summarizer 230 also includes a hash algorithm or function that can be applied to the generated stack trace summary 125.
  • the summarizer 230 may apply the hash algorithm to the generated stack trace summary 125 and generate a hashed stack trace summary that can be compared to other hashed stack trace summaries 125 to determine whether or not the stack trace summaries are similar, which may indicate the test failures are associated with similar portions of code or files.
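One way to realize such a hash is sketched below using Python's standard `hashlib`; the choice of SHA-256, the frame format, and the digit-stripping normalization are assumptions for illustration, not details from the patent. Two traces that differ only in build-specific line numbers hash identically once normalized:

```python
import hashlib
import re

def hash_summary(frames: list[str]) -> str:
    """Hash a normalized stack trace summary so that summaries can be
    compared cheaply for similarity. Normalization here (an assumption)
    drops digits and special characters, removing build-specific detail
    such as line number values before hashing."""
    normalized = [re.sub(r"[^A-Za-z]", "", f).lower() for f in frames]
    return hashlib.sha256("|".join(normalized).encode()).hexdigest()

# The same failure reported from two builds, at shifted line numbers:
build_a = ["handler.py:120", "parser.py:44"]
build_b = ["handler.py:131", "parser.py:52"]
assert hash_summary(build_a) == hash_summary(build_b)
```

A hash with this property satisfies the consistency requirement discussed later: equal hashes regardless of the build in which the failing code appeared.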
  • the stack trace processing module 225 includes a clusterizer 235.
  • the clusterizer 235 is shown in dashed lines to indicate that, in some implementations, the clusterizer 235 may be included in the stack trace processing module 225, while in some implementations, the clusterizer 235 may be located outside of the stack trace processing module 225 or excluded altogether.
  • the clusterizer 235 receives the stack trace summaries 125 generated by the summarizer 230 and generates a cluster of similar stack trace summaries.
  • the clusterizer 235 determines whether or not the stack trace summaries 125 are similar based on comparing the error types, error messages, hashed stack traces or hashed stack trace summaries, and/or the file names identified in each of the frames included in each of the stack trace summaries 125.
  • the clusterizer 235 generates the cluster of similar stack trace summaries so that a unique software test failure record is associated with each generated cluster.
  • the clusterizer 235 may associate a cluster with a unique software test failure record based on comparing a value in the language-independent stack trace summary 125 to values stored in lookup tables which identify unique software test failures that have been previously determined.
  • the clusterizer can determine whether or not the similar stack trace summaries 125 are associated with known software test failures (and are thereby duplicates) or whether the similar stack trace summaries 125 are associated with software test failures that may represent a new, canonical issue or test failure.
  • the stack trace processing server 120 includes a source code control system 240.
  • the source code control system 240 is shown in dashed lines in Figure 2 to indicate that in some implementations, the source code control system 240 may be included in the stack trace processing server 120, while in some implementations, the source code control system 240 may be located outside of the stack trace processing server 120.
  • the source code control system 240 provides data to the assignment module 245 that is used to assign a unique software test failure record (and the associated cluster of similar language-independent stack trace summaries) to an individual developer or a team of individual developers.
  • the unique software test failure record (and the associated cluster of similar language-independent stack trace summaries) may first be assigned to a team of individual developers and subsequently assigned to an individual who is a member of the team of individual developers.
  • the source code control system 240 may receive or store file change histories associated with the files identified in each frame of the stack trace summaries. The file change histories may identify an owner, such as the individual or team of individuals who made the most recent change to the file.
  • the assignment module 245 may assign the software test failure based on identifying the individual developer or team of developers who recently changed the file or files included in the stack trace summary for which the test failure is associated. Often the individual developer or team of developers who most recently changed the file is the best resource to evaluate the test failure record and implement a fix for the test failure.
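The recency-based assignment described above could be sketched as follows; the `change_history` mapping is a hypothetical stand-in for the data provided by the source code control system, and the function name is illustrative:

```python
def assign_failure(summary_files, change_history):
    """Pick an assignee for a test failure: the developer who most
    recently changed any file named in the summary's frames.

    `change_history` is an assumed mapping of
    file name -> (developer, change timestamp).
    """
    candidates = [
        change_history[f] for f in summary_files if f in change_history
    ]
    if not candidates:
        return None
    # The most recent change wins the assignment.
    developer, _ = max(candidates, key=lambda c: c[1])
    return developer

history = {
    "parser.py": ("alice", 20240110),
    "handler.py": ("bob", 20240215),
}
assert assign_failure(["handler.py", "parser.py"], history) == "bob"
```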
  • the stack trace processing server 120 includes an assignment module 245.
  • the assignment module 245 is also shown in dashed lines in Figure 2 to indicate that in some implementations, the assignment module 245 may be included in the stack trace processing server 120, while in some implementations, the assignment module 245 may be located outside of the stack trace processing server 120.
  • the assignment module 245 receives data from the clusterizer 235 that is used to assign a unique software test failure record (and the associated cluster of similar language-independent stack trace summaries) to an individual or to a team of developers.
  • the unique software test failure record (and the associated cluster of similar language-independent stack trace summaries) may first be assigned to a team of individual developers and subsequently assigned to an individual who is a member of the team of individual developers.
  • the assignment module 245 may receive or determine the file ownership or code path ownership of the files identified in each frame of the stack trace summaries 125.
  • the assignment module 245 may assign the software test failure record based on identifying the individual or team of individuals who own the files or code paths identified in each frame of the stack trace summaries 125.
  • the individual developers or teams of developers who own the files or code paths that are associated with the software test failure are generally the best resources to evaluate the test failure record and implement a fix for the test failure.
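Ownership lookup over code paths could be sketched as a longest-prefix match; the prefix-to-team table below is an assumption (in the style of an OWNERS file), not a structure the patent specifies:

```python
def owner_of(path, ownership):
    """Resolve the owning team for a file by the longest matching
    code-path prefix, as a source code control system's ownership data
    might be queried. `ownership` maps path prefixes to teams."""
    best = None
    for prefix, team in ownership.items():
        if path.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, team)
    return best[1] if best else None

ownership = {
    "net/": "networking-team",
    "net/http/": "http-team",
}
assert owner_of("net/http/server.py", ownership) == "http-team"
assert owner_of("net/tcp/socket.py", ownership) == "networking-team"
```

The longest-prefix rule ensures that a more specific code path (`net/http/`) takes precedence over a broader one (`net/`) when both match.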
  • FIG 3A illustrates an example method 300a for processing error reports performed by the stack trace processing server 120 shown in Figures 1 and 2 .
  • the method 300a includes receiving error reports (stage 310).
  • the method further includes generating language-independent stack trace summaries (stage 320) and outputting generated language-independent stack trace summaries (stage 330).
  • the method also includes applying a hash function to the generated language-independent stack trace summary (stage 340) and outputting a hashed language-independent stack trace (stage 350).
  • the process 300a begins by receiving error reports, such as error reports 115 shown in Figure 1 .
  • the stack trace processing server 120 may receive error reports from a test environment, such as the test environment 110 shown in Figure 1 .
  • the error reports may be previously generated error reports that resulted from testing performed in the past and are stored in memory located in the test environment 110.
  • the error reports 115 may also be dynamically generated in real-time based on testing that is being executed in the test environment 110.
  • the stack trace processing server 120 may poll the test environment periodically, such as every minute, hour, or day, to receive the error reports 115 that have been generated as a result of the testing being performed in the test environment 110.
  • the stack trace processing server 120 may receive the error reports 115 that have been output by the test environment 110 on a scheduled or ad-hoc basis.
  • the test environment 110 may be configured to transmit error reports 115 that have accumulated as a result of testing to the stack trace processing server 120 on a periodic basis, such as every minute, hour, or day.
  • the test environment may be configured to transmit error reports 115 to be received by the stack trace processing server 120 as a result of instructions or commands provided by a user, such as a test engineer.
  • the stack trace processing server 120 may receive error reports 115 that are individually transmitted.
  • the stack trace processing server 120 may receive error reports 115 in batches of multiple individual error reports 115, for example as a list of error reports 115 as shown in Figure 1 .
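The periodic, batched receipt of error reports described above can be sketched as a simple polling loop; `fetch_batch` and `handle_report` are hypothetical callables standing in for the test environment interface and the summarization pipeline, not names from the patent:

```python
import time

def poll_error_reports(fetch_batch, handle_report,
                       interval_seconds=60, max_polls=3):
    """Periodically pull accumulated error reports from the test
    environment and hand each one to the summarization pipeline.

    `fetch_batch` returns a (possibly empty) batch of error reports;
    `handle_report` processes one report. Both are assumed interfaces.
    """
    for _ in range(max_polls):
        for report in fetch_batch():
            handle_report(report)
        time.sleep(interval_seconds)
```

The same loop covers both individually transmitted reports (batches of one) and lists of accumulated reports.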
  • the stack trace processing server 120 generates language-independent stack trace summaries. Based on receiving the error reports 115, the stack trace processing server 120 implements methods to process the received error reports and generate a language-independent stack trace summary, such as the language-independent stack trace summaries 125 shown in Figure 1 .
  • the received error reports may be processed by the summarizer 230 included in the stack trace processing module 225 as shown in Figure 2 to generate the language-independent stack trace summaries 125.
  • the stack trace summaries 125 are generated using language-dependent methods which correspond to the software language used to implement the failing code for which the error report was generated.
  • the stack trace processing server 120 may utilize Java processing methods to process error reports containing stack traces associated with test failures of code that was written in the Java programming language in order to generate a language-independent stack trace summary 125.
  • for code that was written in the Python programming language, the stack trace processing server 120 would process the error reports using methods that are specific to the Python language in order to generate a human-readable, language-independent stack trace summary. Additional detail regarding the specific methods of processing the received error reports 115 to generate language-independent stack trace summaries 125 will be discussed in more detail in relation to Figure 3B .
  • the stack trace processing server 120 outputs the generated language-independent stack trace summaries. For example, based on processing the received error reports, the summarizer 230 included in the stack trace processing module 225 may output the generated language-independent stack trace summaries 125. In some implementations, the stack trace processing server 120 may output the generated language-independent stack trace summaries 125 to a database, such as database 220 shown in Figure 2 . In some implementations, the stack trace processing server may output the generated language-independent stack trace summaries 125 to a computing device, such as computing device 105 shown in Figure 1 . Additionally, or alternatively, the stack trace processing server 120 may store the generated language-independent stack trace summaries 125 that are output in memory 215. In some implementations, the output of the generated language-independent stack trace summaries 125 is transmitted to a clustering component, such as the clusterizer 235 shown in Figure 2 .
  • the process 300a applies a hash function to the generated language-independent stack trace summary.
  • a hash algorithm or function may be applied to the stack trace included in the generated language-independent stack trace summary 125.
  • the hash algorithm or function may be applied to the generated language-independent stack trace summary 125a as a whole.
  • a variety of hash functions or algorithms may be selected to generate a hashed version of the language-independent stack trace summary. The selected hash algorithm or function is best suited for use if it generates consistent hashes regardless of the build of software code (or the locations of the code in the build) with which the test failure that generated the stack trace is associated.
  • a hash function or algorithm is also best suited for use if it generates consistent hashes regardless of the programming language in which the code associated with the test failure that generated the stack trace was implemented.
  • the hash function or algorithm may be stored in memory 215 of the stack trace processing server 120.
  • the hash function or algorithm may be stored in the database 220 of the stack trace processing server 120.
  • the process 300a outputs the hashed language-independent stack trace.
  • the summarizer 230 of the stack trace processing module 225 may output the hashed language-independent stack trace to clusterizer 235 for use in determining stack traces that are similar based on the hashed stack trace so that the hashed language-independent stack traces may be assigned together in a cluster and further associated with a unique software test failure.
  • the hashed language-independent stack traces may be output to the test environment 110 and/or the computing device 105.
  • Figure 3B illustrates an example method 320 for processing language-dependent stack traces performed by the stack trace processing server 120 shown in Figures 1 and 2 .
  • the method 320 generates a language-independent stack trace summary (e.g., stage 320 of Figure 3A ) by performing the processing steps which include removing line number values (stage 305), removing column number values (stage 315), and collapsing file names (stage 325).
  • the steps of method 320 are performed by the summarizer 230 that is included in the stack trace processing module 225 of the stack trace processing server 120 shown in Figure 2 .
  • all of the method steps 305 through 325 are performed to generate the language-independent stack trace summary 125.
  • two of the method steps 305 through 325 are performed to generate the language-independent stack trace summary 125.
  • three of the method steps 305 through 325 are performed to generate the language-independent stack trace summary 125.
  • the process 320 includes removing line number values from the language-dependent stack trace that is included in the error report received by the stack trace processing server 120.
  • stack traces for certain programming languages commonly include line number values in each frame of the stack trace indicating the specific line of code at which an error was generated during testing.
  • the line number values may provide some benefit for diagnosing the specific location in the code that is associated with a test failure; however, for the purposes of comparing stack traces or assisting human readability, the line number values provide less benefit and often make the stack trace more difficult to read.
  • the summarizer 230, shown in Figure 2 , uses language-dependent methods to parse each frame included in the stack trace and removes the line number values from the stack trace frame.
  • the process 320 includes removing column number values from the language-dependent stack trace that is included in the error report received by the stack trace processing server 120.
  • stack traces for some programming languages may include column number values in each frame of the stack trace indicating the column in the code at which an error was generated during testing.
  • the column number values may provide some benefit for diagnosing the specific location in the code that is associated with a test failure; however, for the purposes of comparing stack traces or assisting human readability, the column number values provide less benefit and often make the stack trace more difficult to read.
  • the summarizer 230, shown in Figure 2 , uses language-dependent methods to parse each frame included in the stack trace and removes the column number values from the stack trace frame.
  • the process 320 includes collapsing the file names from the language-dependent stack trace that is included in the error report received by the stack trace processing server 120.
  • stack traces commonly include file names in each frame of the stack trace indicating the specific file that contained the code for which an error was generated during testing.
  • the file names may provide some benefit for diagnosing the specific file that is associated with a test failure; however, for the purposes of comparing stack traces or assisting human readability, the full file name provides less benefit and often makes the stack trace more difficult to read.
  • the summarizer 230, shown in Figure 2 , uses language-dependent methods to parse each frame included in the stack trace and to collapse the file name into a reformatted file name representation that is easier to read and process for stack trace comparison. For example, the summarizer 230 may collapse the file names by removing spaces from the file names or by removing special characters from the file names.
  • the summarizer 230 may collapse the file names by removing spaces from the file names of the language-dependent stack trace that is included in the error report received by the stack trace processing server 120.
  • stack traces may include file names that have spaces in the file name identified in each frame of the stack trace for which an error was generated during testing.
  • spaces included in file names represent the most accurate syntax of the file name that is associated with a test failure; however, for the purposes of comparing stack traces or assisting human readability, the spaces are not required and may make the stack trace more difficult to read.
  • the summarizer 230, shown in Figure 2 , uses language-dependent methods to parse each frame included in the stack trace and removes the spaces from the file name identified in the stack trace frame.
  • the summarizer 230 may collapse the file names by removing special characters from the language-dependent stack trace that is included in the error report received by the stack trace processing server 120.
  • stack traces may include special characters in the frames of the stack trace such as underscores (_), percent signs (%), asterisks (*), dollar signs ($), ampersands (&), left or right floor characters, left or right ceiling characters, back quotes, and/or any non-letter, non-numeral symbol that may be included in a stack trace.
  • the special characters may represent the most accurate syntax of the compiler's output that corresponds to the code associated with a test failure; however, for the purposes of comparing stack traces or assisting human readability, the special characters are not required and may make the stack trace more difficult to read.
  • the summarizer 230, shown in Figure 2 , uses language-dependent methods to parse each frame included in the stack trace and removes the special characters from the stack trace frame.
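Taken together, the steps of method 320 might transform a single frame as follows, with each intermediate result shown. The frame format (a Java-like name followed by `:line:column`) is purely illustrative:

```python
import re

frame = "My Module$Handler.java:42:13"  # hypothetical language-dependent frame

# Stage 305: remove the line number value.
no_line = re.sub(r":\d+", "", frame, count=1)      # "My Module$Handler.java:13"
# Stage 315: remove the column number value.
no_col = re.sub(r":\d+", "", no_line, count=1)     # "My Module$Handler.java"
# Stage 325: collapse the file name by removing spaces...
no_spaces = no_col.replace(" ", "")                # "MyModule$Handler.java"
# ...and removing special characters.
collapsed = re.sub(r"[^0-9A-Za-z]", "", no_spaces)

print(collapsed)  # -> MyModuleHandlerjava
```

The collapsed frame is shorter, easier to scan, and directly comparable across builds and languages.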
  • Figure 4 is an example diagram of generating a human-readable, language-independent stack trace summary according to the methods of Figures 3A and 3B .
  • an error report 405 includes a unique identifier 410 (e.g., error report 1378) and a plurality of metadata 415.
  • the error report 405 also includes a language-dependent stack trace 420 that includes a number of frames each identifying a file name associated with the particular software test failure for which the error report corresponds.
  • the error report 405 is a more detailed version of the error report 115 that was generated from the test environment 110 and received by the stack trace processing server 120 as shown in Figure 1 .
  • the stack trace processing server 120 may process the received error report 405 and generate a language-independent stack trace summary 425.
  • the language-independent stack trace summary 425 also includes a unique identifier 430 (e.g., STS ID: 012802) and a plurality of metadata 435.
  • the language-independent stack trace summary 425 also includes a human-readable, language-independent stack trace 440.
  • an error report 405 may be generated as a result of a test failure.
  • the error report 405 includes a variety of data that may be useful in characterizing the failure and fixing the code that failed the test and generated the error report.
  • the error report 405 includes the date on which the error report was created as well as the particular test suite that includes the specific test which failed, resulting in generated error report 405.
  • the error report 405 also includes a plurality of metadata 415 which includes, but is not limited to, information describing configuration details of the test environment, the error message and error type, the build of software that was being tested, as well as the computing device 105 on which the test failure was discovered (e.g., Client: Dev_glinuxl2).
  • the HTTP referrer metadata identifies an HTTP header field corresponding to the address of a web page that links to a resource being requested.
  • a test environment may utilize this metadata for statistical purposes to identify the web pages that were being requested when the test failure occurred.
  • Individual metadata elements may be used in conjunction with the stack trace processing described earlier to determine whether or not the language-independent stack trace summaries generated by the aforementioned processing are similar and thus are associated with the same unique software test failure. For example, language-independent stack trace summaries which include similar error messages or error types are indicative of similar software test failures.
  • the error report 405 includes a language-dependent stack trace 420.
  • the language-dependent stack trace 420 includes frames each identifying a specific file name, line number and component or code module that was associated with the software test failure.
  • the summarizer 230 reformats the contents of the language-dependent stack trace 420 into a human-readable, language-independent format by a combination of processing steps described in relation to Figure 3B .
  • the error report 405 is processed by the summarizer 230 to generate the language-independent stack trace summary 425.
  • the language-independent stack trace summary 425 includes a unique identifier 430, a plurality of metadata 435 and a language-independent stack trace 440.
  • the language-independent stack trace summary 425 may include more or less metadata than the error report 405 to which it corresponds.
  • the exact format of the error report 405 and the generated language-independent stack trace summary 425 may vary based on the informational needs or processing requirements of the software development organization.
  • the language-independent stack trace summary 425 includes similar metadata 435 as the metadata 415 that is included in the error report 405.
  • the generated language-independent stack trace 440 has been reformatted by the processing performed by the summarizer 230 into a more readable format.
  • the file names included in the frames of language-independent stack trace 440 have been processed to remove the line number values and the column number values.
  • the file names have been collapsed by removing the file name spaces and any special characters from the format of the file names in the language-dependent stack trace 420 that was included in the error report 405.
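The reformatting steps described above — removing line number and column number values, and collapsing file names by removing spaces and special characters — can be sketched as a small per-frame pipeline. The function name, the regular expressions, and the frame format below are assumptions for illustration only, not the summarizer 230's actual implementation:

```python
import re

def summarize_frame(frame: str) -> str:
    """Reformat one language-dependent frame into a human-readable,
    language-independent form (hypothetical sketch)."""
    # Remove line number and column number values, e.g. ":120:15".
    frame = re.sub(r":\d+", "", frame)
    # Drop the file extension so the name is language-independent.
    frame = re.sub(r"\.[A-Za-z0-9]+$", "", frame)
    # Remove special characters other than path separators.
    frame = re.sub(r"[^A-Za-z0-9/ ]", "", frame)
    # Collapse the file name: drop path separators and spaces.
    return frame.replace("/", "").replace(" ", "")
```

Under these assumptions, a frame like `third_party/py/spitfire/runtime/template.py:120:15` collapses to `thirdpartypyspitfireruntimetemplate`.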
  • Figure 5 illustrates an example of generating a cluster of language-independent stack trace summaries by stack trace processing server 120.
  • the stack trace processing module 225 includes a clusterizer 235 which generates a cluster of language-independent stack trace summaries which are similar.
  • three language-independent stack trace summaries (e.g., stack trace summaries 505, 510, and 515) are clustered into a cluster of language-independent stack trace summaries 520.
  • the clusterizer 235, included in the stack trace processing module 225 shown in Figure 2, receives the generated language-independent stack trace summaries from the summarizer 230 and generates a cluster of language-independent stack trace summaries 520 which includes language-independent stack trace summaries deemed to be similar to one another. The clusterizer 235 determines whether or not the language-independent stack trace summaries are similar based on comparing the error types, error messages, and/or file names identified in each of the frames included in the language-independent stack trace summary to one another.
  • the clusterizer 235 determines whether or not the language-independent stack trace summaries are similar based on comparing the hashed language-independent stack trace summary that was generated by applying a hash function or algorithm to the generated language-independent stack trace summary. In some implementations, the clusterizer 235 determines whether or not the language-independent stack trace summaries are similar based on comparing the hashed language-independent stack trace that is included in the hashed language-independent stack trace summary. For the stack traces (hashed and non-hashed), stack trace summaries (hashed and non-hashed), error types, error messages, and/or the file names, the similarity between one or more of these items may be determined based on identical or partial matching of one item to another.
  • two items may be determined to be similar based on identical or partial matching of one or more items included in the language-independent stack trace summary.
  • two language-independent stack trace summaries may be determined to be similar based on partially matching file names that are identified in each frame of the language-independent stack trace that is included in each stack trace summary, even though the two stack trace summaries may not share a similar error message or error type.
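The similarity tests described above — an identical match via a hashed summary, or a partial match on the file names in the frames — can be sketched as follows. The hash algorithm, the 0.5 overlap threshold, and the function names are hypothetical choices for illustration:

```python
import hashlib

def hash_summary(summary_frames) -> str:
    """Hash a language-independent stack trace summary so that identical
    summaries can be matched with a single comparison."""
    joined = "\n".join(summary_frames)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def are_similar(summary_a, summary_b, threshold=0.5) -> bool:
    """Identical match via hashes; otherwise a partial match on the file
    names identified in the frames (threshold is an assumed parameter)."""
    if hash_summary(summary_a) == hash_summary(summary_b):
        return True
    frames_a, frames_b = set(summary_a), set(summary_b)
    smaller = min(len(frames_a), len(frames_b))
    return smaller > 0 and len(frames_a & frames_b) / smaller >= threshold
```

Comparing hashes first makes the common case (repeated identical failures) cheap, while the partial-match fallback still groups stack traces that differ only in a few frames.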
  • the three language-independent stack trace summaries (e.g., stack trace summaries 505, 510, and 515) have been clustered into a cluster of language-independent stack trace summaries 520.
  • the cluster of language-independent stack trace summaries 520 includes metadata such as a cluster summary ID, the date the cluster was created, as well as other metadata that is similar between the three language-independent stack trace summaries for which the cluster is associated.
  • the cluster metadata identifies the error type, error message, and the stack trace file names that were identified as similar in each of the language-independent stack trace summaries for which the cluster corresponds.
  • the cluster may be created based on comparing similar stack trace summaries generated over a specific period of time, such as during the last day, week, or month. In some implementations, the cluster may be created based on comparing similar stack traces generated during a specific time period, such as stack traces summaries generated between two calendar dates.
  • the cluster of language-independent stack trace summaries 520 includes the generated language-independent stack trace identifying, in human-readable format, the file names that are common to or similar among the three language-independent stack trace summaries 505, 510, and 515 (e.g., file names thirdpartypyspitfireruntimeinit and thirdpartypyspitfireruntimetemplate).
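The clustering step described above can be sketched as a greedy single pass that groups summaries sharing file names and records the common frames as cluster metadata. The dictionary shapes and the "any shared file name" rule are assumptions for illustration, not the clusterizer 235's actual algorithm:

```python
def cluster_summaries(summaries):
    """Group language-independent stack trace summaries whose frame file
    names overlap (hypothetical greedy clustering sketch)."""
    clusters = []
    for summary in summaries:
        frames = set(summary["frames"])
        for cluster in clusters:
            if frames & cluster["frames"]:  # any shared file name
                cluster["members"].append(summary["id"])
                cluster["frames"] &= frames  # keep only the common frames
                break
        else:
            clusters.append({"members": [summary["id"]], "frames": frames})
    return clusters
```

The retained intersection plays the role of the cluster metadata above: the file names identified as similar among all member summaries.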
  • Figure 6 illustrates an example of associating a cluster of language-independent stack trace summaries with a unique software test failure and assigning the software test failure to an individual and/or team according to some implementations.
  • a cluster of language-independent stack trace summaries 605 has been created based on determining similarities among the language-independent stack trace summaries 505, 510, and 515 described in Figure 5 .
  • the cluster of language-independent stack trace summaries 605 are associated with a unique software test failure record, such as software test failure record 610.
  • the clusterizer 235 associates the cluster of language-independent stack trace summaries 605 with a unique software test failure record based on comparing a value in the language-independent stack trace summaries included in the cluster to values stored in one or more lookup tables identifying records of previously determined unique software test failures. For example, the clusterizer 235 may associate cluster 605 with software test failure record 610 based on comparing the file names identified as similar in the cluster 605 with the file names that were previously found to be associated with software test failure record 610. In some implementations, the clusterizer 235 may compare values corresponding to other metadata included in the cluster 605 to values in a lookup table of test failure records identifying previously determined software test failures.
  • if no previously determined software test failure record is identified based on the comparing, the clusterizer 235 may perform operations to create a new, canonical software test failure record and add the record to the lookup tables. If a previously identified software test failure record is identified based on the comparing, the clusterizer 235 may perform operations to mark the cluster as a duplicate of the previously determined software test failure record.
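The lookup-table association and duplicate handling described above can be sketched as follows; the record shapes, the overlap-based match rule, and the integer record identifiers are hypothetical:

```python
def associate_with_failure(cluster, failure_lookup):
    """Match a cluster against a lookup table of previously determined
    unique software test failures. Marks the cluster as a duplicate of a
    matching record, or creates a new canonical record when no match
    exists (hypothetical sketch)."""
    # failure_lookup: record id -> set of file names tied to that failure.
    for failure_id, known_frames in failure_lookup.items():
        if cluster["frames"] & known_frames:
            cluster["duplicate_of"] = failure_id
            return failure_id
    # No match: create a new canonical failure record in the lookup table.
    new_id = max(failure_lookup, default=0) + 1
    failure_lookup[new_id] = set(cluster["frames"])
    return new_id
```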
  • the unique software test failure record 610 may be assigned to an owner-like individual 615.
  • the assignment module 245 may assign the unique software test failure record 610 to individual 615 based on determining the change histories associated with the file names identified in the frames included in the generated language independent stack trace summary (from which the cluster 605 and unique software test failure record 610 were created and may also include). By ascertaining the change histories of the file names associated with the test failure, an individual who most recently changed the code included in the named files may be assigned to fix the code. Often the best resource to address a test failure may be the individual who has most recently introduced changes to the code for which the test failure corresponds.
  • the clusterizer 235 may utilize revision or version control data, such as data included in the source code control system 240 shown in Figure 2 , to determine the file change histories in order to identify the individual who has most recently changed the files in order to assign the unique software test failure record 610 to that individual.
  • the assignment module 245 may utilize code path or file ownership data, such as data included in the assignment module 245 itself or data received from the source code control system 240 shown in Figure 2 , in order to determine the individuals who are owners of the code paths or files that were associated with the test failure so that the unique software test failure record 610 may be assigned to the owner-like individual.
  • the unique software test failure 610 may be assigned to a team of individuals 620.
  • the assignment module 245 may assign the unique software test failure 610 to team of individuals 620 based on determining the ownership of code paths that are associated with one or more files names identified in the frames included in the generated language independent stack trace summary (from which the cluster 605 and unique software test failure 610 were created and may also include). By determining the code path ownership corresponding to the file names associated with the test failure, a team of individuals who are responsible for developing and maintaining the code included in the files may be assigned to fix the code which produced the test failure.
  • the assignment module 245 may utilize code path ownership data for files or code modules, such as data included in the source code control system 240 or data within the assignment module 245 itself, shown in Figure 2 , to determine the ownership of the code paths for the related files so that the unique software test failure 610 may be assigned to that team of individuals.
  • the assignment module 245 may utilize file revision data or version control data, such as data included in the source code control system 240 shown in Figure 2 , to determine the file change histories in order to identify the team of individuals who are associated with the most recently changed files so that the unique software test failure record 610 may be assigned to that team of individuals.
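The two assignment strategies above — most-recent change history for an owner-like individual, and code-path ownership for a team — can be sketched together. The `change_history` and `path_owners` structures stand in for version-control and ownership data; their shapes are assumptions, not the source code control system 240's actual interface:

```python
def assign_failure(failure_frames, change_history, path_owners):
    """Pick an assignee for a unique software test failure: prefer the
    individual who most recently changed a file named in the frames,
    falling back to the team owning the code path (hypothetical sketch)."""
    # change_history: file name -> list of (timestamp, author) pairs.
    latest = None
    for name in failure_frames:
        for timestamp, author in change_history.get(name, []):
            if latest is None or timestamp > latest[0]:
                latest = (timestamp, author)
    if latest:
        return latest[1]
    # Fall back to code-path ownership for a team assignment.
    for name in failure_frames:
        if name in path_owners:
            return path_owners[name]
    return None
```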
  • Figure 7 is a block diagram 700 illustrating an example computer system 710 with which the computing device 105, the test environment 110, and the stack trace processing server 120 of Figure 1 can be implemented.
  • the computer system 710 may be implemented using hardware or a combination of software and hardware, either in a dedicated server, or integrated into another entity, or distributed across multiple entities.
  • the computing system 710 includes at least one processor 750 for performing actions in accordance with instructions and one or more memory devices 770 or 775 for storing instructions and data.
  • the illustrated example computing system 710 includes one or more processors 750 in communication, via a bus 715, with at least one network interface driver controller 720 with one or more network interface cards 722 connecting to one or more network devices 724, memory 770, and any other devices 780, e.g., an I/O interface.
  • the network interface card 722 may have one or more network interface driver ports to communicate with the connected devices or components.
  • a processor 750 executes instructions received from memory.
  • the processor 750 illustrated incorporates, or is directly connected to, cache memory 775.
  • the processor 750 may be any logic circuitry that processes instructions, e.g., instructions fetched from the memory 770 or cache 775.
  • the processor 750 is a microprocessor unit or special purpose processor.
  • the computing device 710 may be based on any processor, or set of processors, capable of operating as described herein.
  • the processor 750 may be a single core or multi-core processor.
  • the processor 750 may be multiple processors.
  • the processor 750 can be configured to run multi-threaded operations.
  • the processor 750 may host one or more virtual machines or containers, along with a hypervisor or container manager for managing the operation of the virtual machines or containers. In such implementations, the method shown in Figure 3 can be implemented within the virtualized or containerized environments provided on the processor 750.
  • the memory 770 may be any device suitable for storing computer readable data.
  • the memory 770 may be a device with fixed storage or a device for reading removable storage media. Examples include all forms of non-volatile memory, media and memory devices, semiconductor memory devices (e.g., EPROM, EEPROM, SDRAM, and flash memory devices), magnetic disks, magneto optical disks, and optical discs (e.g., CD ROM, DVD-ROM, and Blu-ray ® discs).
  • a computing system 710 may have any number of memory devices 770.
  • the memory 770 supports virtualized or containerized memory accessible by virtual machine or container execution environments provided by the computing system 710.
  • the cache memory 775 is generally a form of computer memory placed in close proximity to the processor 750 for fast read times. In some implementations, the cache memory 775 is part of, or on the same chip as, the processor 750. In some implementations, there are multiple levels of cache 775, e.g., L2 and L3 cache layers.
  • the network interface driver controller 720 manages data exchanges via the network interface driver 722 (also referred to as network interface driver ports).
  • the network interface driver controller 720 handles the physical and data link layers of the OSI model for network communication. In some implementations, some of the network interface driver controller's tasks are handled by the processor 750. In some implementations, the network interface driver controller 720 is part of the processor 750.
  • a computing system 710 has multiple network interface driver controllers 720.
  • the network interface driver ports configured in the network interface card 722 are connection points for physical network links.
  • the network interface controller 720 supports wireless network connections and an interface port associated with the network interface card 722 is a wireless receiver/transmitter.
  • a computing device 710 exchanges data with other network devices 724 via physical or wireless links that interface with network interface driver ports configured in the network interface card 722.
  • the network interface controller 720 implements a network protocol such as Ethernet.
  • the other network devices 724 are connected to the computing device 710 via a network interface driver port included in the network interface card 722.
  • the other network devices 724 may be peer computing devices, network devices, or any other computing device with network functionality.
  • a first network device 724 may be a network device such as a hub, a bridge, a switch, or a router, connecting the computing device 710 to a data network such as the Internet.
  • the other devices 780 may include an I/O interface, external serial device ports, and any additional co-processors.
  • a computing system 710 may include an interface (e.g., a universal serial bus (USB) interface) for connecting input devices (e.g., a keyboard, microphone, mouse, or other pointing device), output devices (e.g., video display, speaker, or printer), or additional memory devices (e.g., portable flash drive or external media drive).
  • a computing device 700 may include an additional device 780 such as a co-processor; e.g., a math co-processor can assist the processor 750 with high precision or complex calculations.
  • Implementations of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software embodied on a tangible medium, firmware, or hardware. Implementations of the subject matter described in this specification can be implemented as one or more computer programs embodied on a tangible medium, i.e., one or more modules of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, a data processing apparatus.
  • a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them.
  • the computer storage medium can also be, or be included in, one or more separate components or media (e.g., multiple CDs, disks, or other storage devices).
  • the computer storage medium may be tangible and non-transitory.
  • the operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
  • the operations may be executed within the native environment of the data processing apparatus or within one or more virtual machines or containers hosted by the data processing apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers or one or more virtual machines or containers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • Examples of communication networks include a local area network ("LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • references to "or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms.
  • the labels “first,” “second,” “third,” and so forth are not necessarily meant to indicate an ordering and are generally used merely to distinguish between like or similar items or elements.


Claims (13)

  1. Un procédé mis en oeuvre par ordinateur pour générer un résumé (125, 425) de suivi de pile indépendant du langage et lisible par l'homme, le procédé comprenant :
    le fait (310) de recevoir une pluralité de rapports d'erreurs (115, 405), chaque rapport d'erreurs (115, 405) comprenant une trace de pile dépendante du langage générée en réponse à un échec de test logiciel d'un code mis en oeuvre dans un langage de programmation spécifique et une pluralité de métadonnées (415), le suivi de pile dépendant du langage générée comprenant une ou plusieurs trames ;
    le fait (320) de générer un résumé (125, 425) de suivi de pile indépendant du langage en traitant chaque trame du suivi de pile dépendant du langage quel que soit le langage de programmation spécifique, le traitement comprenant au moins deux parmi ce qui suit :
    le fait (305) de supprimer des valeurs de numéro de ligne de chaque trame parmi lesdites une ou plusieurs trames,
    le fait (315) de supprimer des valeurs de numéro de colonne de chaque trame parmi lesdites une ou plusieurs trames,
    le fait (325) de réduire un ou plusieurs noms de fichiers identifiés dans chaque trame parmi lesdites une ou plusieurs trames,
    le fait de supprimer des espaces de chaque trame parmi lesdites une ou plusieurs trames, et
    le fait de supprimer des caractères spéciaux de chaque trame parmi lesdites une ou plusieurs trames ;
    le fait (330) d'émettre le résumé (125, 425) de suivi de pile indépendant du langage généré ;
    le fait de générer un cluster de résumés de suivi de pile indépendants du langage, le cluster de résumés de suivi de pile comprenant des résumés similaires de suivi de pile indépendants du langage, la génération du cluster comprenant le fait de déterminer que les résumés de suivi de pile indépendants du langage sont similaires en comparant les uns aux autres des types d'erreur, des messages d'erreur et des noms de fichier identifiés dans chaque trame parmi une ou plusieurs trames incluses dans chacun des résumés de suivi de pile indépendants du langage ;
    le fait d'associer le cluster à un échec de test logiciel unique ; et
    le fait d'attribuer le cluster et l'échec de test logiciel unique à des ressources de développement pour la révision du code.
  2. Le procédé selon la revendication 1, dans lequel le traitement de chaque trame du suivi de pile dépendant du langage comprend au moins trois parmi ce qui suit :
    le fait (305) de supprimer des valeurs de numéro de ligne de chaque trame parmi lesdites une ou plusieurs trames,
    le fait (315) de supprimer des valeurs de numéro de colonne de chaque trame parmi lesdites une ou plusieurs trames
    le fait (325) de réduire un ou plusieurs noms de fichiers identifiés dans chaque trame parmi lesdites une ou plusieurs trames,
    le fait de supprimer des espaces de chaque trame parmi lesdites une ou plusieurs trames, et
    le fait de supprimer des caractères spéciaux de chaque trame parmi lesdites une ou plusieurs trames ; ou
    le traitement de chaque trame du suivi de pile dépendant du langage comprenant :
    le fait de supprimer des valeurs de numéro de ligne de chaque trame parmi lesdites une ou plusieurs trames
    le fait de supprimer des valeurs de numéro de colonne de chaque trame parmi lesdites une ou plusieurs trames,
    le fait de réduire un ou plusieurs noms de fichiers identifiés dans chaque trame parmi lesdites une ou plusieurs trames,
    le fait de supprimer des espaces de chaque trame parmi lesdites une ou plusieurs trames, et
    le fait de supprimer des caractères spéciaux de chaque trame parmi lesdites une ou plusieurs trames.
  3. Le procédé selon la revendication 1, comprenant en outre le fait (340) d'appliquer une fonction de hachage au résumé (125, 425) de suivi de pile indépendant du langage généré et le fait (350) d'émettre le résumé haché de suivi de pile indépendant du langage.
  4. Le procédé selon la revendication 1, dans lequel la pluralité de métadonnées (415) incluses dans le rapport d'erreurs (115, 405) identifie un ou plusieurs parmi : un identifiant unique (410), une version de construction, un nom de client, un référent HTTP, un type d'erreur ou une description de configuration de test.
  5. Le procédé selon la revendication 1, dans lequel l'association du cluster à un échec de test logiciel unique est basée sur la comparaison d'une valeur dans le résumé (125, 425) de suivi de pile indépendant du langage à des valeurs stockées dans une ou plusieurs tables de recherche, lesdites une ou plusieurs tables de recherche incluant des listes d'échecs de test logiciel uniques déterminés précédemment ;
    l'échec de test logiciel unique est attribué à un individu semblable à un propriétaire sur la base de la détermination d'historiques de modification associés à un ou plusieurs noms de fichiers identifiés dans une ou plusieurs trames incluses dans le résumé (125, 425) de suivi de pile indépendant du langage ; ou
    l'échec de test logiciel unique est attribué à une équipe d'individus sur la base de la détermination de la propriété de chemins de code associés à un ou plusieurs noms de fichiers identifiés dans une ou plusieurs trames incluses dans le résumé (125, 425) de suivi de pile indépendant du langage et de l'attribution de l'échec du test logiciel unique en fonction de la propriété du chemin de code déterminé.
  6. Un système pour générer un résumé (125, 425) de suivi de pile indépendant du langage, lisible par l'homme, le système comprenant :
    une mémoire stockant des instructions lisibles par ordinateur et une ou plusieurs tables de recherche ; et
    un processeur, le processeur étant configuré pour exécuter les instructions lisibles par ordinateur, qui, lorsqu'elles sont exécutées, mettent en oeuvre le procédé comprenant :
    le fait (310) de recevoir une pluralité de rapports d'erreurs (115, 405), chaque rapport d'erreurs (115, 405) comprenant un suivi de pile dépendant du langage généré en réponse à un échec de test logiciel d'un code mis en oeuvre dans un langage de programmation spécifique et une pluralité de métadonnées (415), le suivi de pile dépendant du langage générée comprenant une ou plusieurs trames ;
    le fait (320) de générer un résumé (125, 425) de suivi de pile indépendant du langage en traitant chaque trame du suivi de pile dépendant du langage quel que soit le langage de programmation spécifique, le traitement comprenant au moins deux parmi ce qui suit :
    le fait (305) de supprimer des valeurs de numéro de ligne de chaque trame parmi lesdites une ou plusieurs trames,
    le fait (315) de supprimer des valeurs de numéro de colonne de chaque trame parmi lesdites une ou plusieurs trames,
    le fait (325) de réduire d'un ou plusieurs noms de fichiers identifiés dans chaque trame parmi lesdites une ou plusieurs trames,
    le fait de supprimer des espaces de chaque trame parmi lesdites une ou plusieurs trames,
    le fait de supprimer des caractères spéciaux de chaque trame parmi lesdites une ou plusieurs trames ;
    le fait (330) d'émettre le résumé (125,425) de suivi de pile indépendant du langage généré ;
    le fait de générer un cluster de résumés de suivi de pile indépendants du langage, le cluster de résumés de suivi de pile comprenant des résumés similaires de suivi de pile indépendants du langage, la génération du cluster comprenant le fait de déterminer que les résumés de suivi de pile indépendants du langage sont similaires en comparant les uns aux autres des types d'erreur, des messages d'erreur et des noms de fichier identifiés dans chaque trame parmi une ou plusieurs trames incluses dans chacun des résumés de suivi de pile indépendants du langage ;
    le fait d'associer le cluster à un échec de test logiciel unique ; et
    le fait d'attribuer le cluster et l'échec du test logiciel unique à des ressources de développement pour la révision du code.
  7. The system of claim 6, wherein processing each frame of the language-dependent stack trace comprises at least three of the following:
    removing (305) line number values from each of the one or more frames,
    removing (315) column number values from each of the one or more frames,
    reducing (325) one or more file names identified in each of the one or more frames,
    removing whitespace from each of the one or more frames, and
    removing special characters from each of the one or more frames.
  8. The system of claim 6, wherein processing each frame of the language-dependent stack trace comprises:
    removing line number values from each of the one or more frames,
    removing column number values from each of the one or more frames,
    reducing one or more file names identified in each of the one or more frames,
    removing whitespace from each of the one or more frames, and
    removing special characters from each of the one or more frames.
  9. The system of claim 6, wherein the memory further stores computer-readable instructions that, when executed, cause the processor to apply (340) a hash function to the generated language-independent stack trace summary (125, 425) and to output (350) the hashed language-independent stack trace summary.
  10. The system of claim 6, wherein the plurality of metadata (415) included in the error report (115, 405) identifies one or more of: a unique identifier (410), a build version, a client name, an HTTP referrer, an error type, or a test configuration description.
  11. The system of claim 6, wherein associating the cluster with a unique software test failure is based on comparing a value in the language-independent stack trace summary (125, 425) to values stored in one or more lookup tables, the one or more lookup tables comprising lists of previously determined unique software test failures.
  12. The system of claim 6, wherein the unique software test failure is assigned to an individual as an owner based on determining change histories associated with one or more file names identified in each of the one or more frames included in the language-independent stack trace summary (125, 425).
  13. The system of claim 6, wherein the unique software test failure is assigned to a team of individuals based on determining ownership of code paths associated with one or more file names identified in each of the one or more frames included in the language-independent stack trace summary (125, 425), and assigning the unique software test failure based on the determined code path ownership.
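Claims 7 through 11 above describe a pipeline: normalize each stack trace frame into a language-independent form, join the frames into a summary, hash the summary, and use the hash as a key into a lookup table of previously determined unique test failures. The following Python sketch illustrates that flow; all function names, regular expressions, the choice of SHA-256, and the failure-ID scheme are illustrative assumptions, not taken from the patent.

```python
import hashlib
import re

def normalize_frame(frame: str) -> str:
    """Rewrite one language-dependent frame into a language-independent form."""
    # Remove line and column number values such as ":42:17" (steps 305, 315).
    frame = re.sub(r":\d+", "", frame)
    # Reduce file names by dropping directory components (step 325),
    # e.g. "src/app/main.js" -> "main.js".
    frame = re.sub(r"(?:[\w.-]+/)+([\w.-]+)", r"\1", frame)
    # Remove whitespace, then remaining special characters.
    frame = re.sub(r"\s+", "", frame)
    frame = re.sub(r"[^0-9A-Za-z.]", "", frame)
    return frame

def summarize(stack_trace: list[str]) -> str:
    """Build the language-independent stack trace summary from raw frames."""
    return "\n".join(normalize_frame(f) for f in stack_trace)

def hash_summary(summary: str) -> str:
    """Apply a hash function to the summary (step 340); SHA-256 is an
    illustrative choice, the claims do not name a specific function."""
    return hashlib.sha256(summary.encode("utf-8")).hexdigest()

# Lookup table mapping hashed summaries to previously determined unique
# software test failures (claim 11).
known_failures: dict[str, str] = {}

def cluster(stack_trace: list[str]) -> str:
    """Associate an error report's stack trace with a unique test failure."""
    digest = hash_summary(summarize(stack_trace))
    return known_failures.setdefault(digest, f"failure-{digest[:8]}")
```

With this sketch, two traces that differ only in line/column numbers and directory prefixes normalize to the same summary, hash to the same digest, and therefore land in the same failure cluster, which is the clustering behavior the claims describe.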
EP18766472.7A 2017-10-20 2018-08-27 Human-readable, language-independent stack trace summary generation Active EP3616066B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/789,755 US10552296B2 (en) 2017-10-20 2017-10-20 Human-readable, language-independent stack trace summary generation
PCT/US2018/048142 WO2019078954A1 (fr) Human-readable, language-independent stack trace summary generation

Publications (2)

Publication Number Publication Date
EP3616066A1 (fr) 2020-03-04
EP3616066B1 (fr) 2023-08-02

Family

ID=63528938

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18766472.7A Active EP3616066B1 (fr) 2017-10-20 2018-08-27 Human-readable, language-independent stack trace summary generation

Country Status (3)

Country Link
US (1) US10552296B2 (fr)
EP (1) EP3616066B1 (fr)
WO (1) WO2019078954A1 (fr)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10783053B1 (en) * 2017-06-16 2020-09-22 Palantir Technologies Inc. Contextualized notifications for verbose application errors
US11488081B2 (en) * 2018-08-31 2022-11-01 Orthogonal Networks, Inc. Systems and methods for optimizing automated modelling of resource allocation
IL281921B1 (en) * 2018-10-02 2024-08-01 Functionize Inc software testing
US11347624B1 (en) * 2019-06-28 2022-05-31 Meta Platforms, Inc. Systems and methods for application exception handling
US11200152B2 (en) * 2019-07-02 2021-12-14 International Business Machines Corporation Identifying diagnosis commands from comments in an issue tracking system
US11422925B2 (en) * 2020-09-22 2022-08-23 Sap Se Vendor assisted customer individualized testing
US11599342B2 (en) * 2020-09-28 2023-03-07 Red Hat, Inc. Pathname independent probing of binaries
US11231986B1 (en) * 2020-10-30 2022-01-25 Virtuozzo International Gmbh Systems and methods for collecting optimal set of log files for error reports
US11947439B2 (en) * 2020-11-30 2024-04-02 International Business Machines Corporation Learning from distributed traces for anomaly detection and root cause analysis
US11720471B2 (en) * 2021-08-09 2023-08-08 International Business Machines Corporation Monitoring stack memory usage to optimize programs

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7509538B2 (en) 2004-04-21 2009-03-24 Microsoft Corporation Systems and methods for automated classification and analysis of large volumes of test result data
US8291381B2 (en) * 2007-09-27 2012-10-16 Microsoft Corporation Call stack parsing in multiple runtime environments
US8589880B2 (en) * 2009-02-17 2013-11-19 International Business Machines Corporation Identifying a software developer based on debugging information
US8719791B1 (en) 2012-05-31 2014-05-06 Google Inc. Display of aggregated stack traces in a source code viewer
US9535818B2 (en) 2012-10-16 2017-01-03 Microsoft Technology Licensing, Llc Identifying high impact bugs
US9098627B2 (en) * 2013-03-06 2015-08-04 Red Hat, Inc. Providing a core dump-level stack trace
US9213622B1 (en) 2013-03-14 2015-12-15 Square, Inc. System for exception notification and analysis
US20150106663A1 (en) * 2013-10-15 2015-04-16 Sas Institute Inc. Hash labeling of logging messages
US9009539B1 (en) 2014-03-18 2015-04-14 Splunk Inc Identifying and grouping program run time errors
US9619375B2 (en) 2014-05-23 2017-04-11 Carnegie Mellon University Methods and systems for automatically testing software
US9710371B2 (en) * 2015-10-27 2017-07-18 Microsoft Technology Licensing, Llc Test failure bucketing
US20170132545A1 (en) 2015-11-11 2017-05-11 Microsoft Technology Licensing, Llc Recency-based identification of area paths for target components
CN106933689B (zh) 2015-12-29 2020-05-19 EMC IP Holding Company LLC Method and apparatus for a computing device
US10223238B1 (en) * 2016-11-08 2019-03-05 Amazon Technologies, Inc. Multiple-stage crash reporting
US10540258B2 (en) * 2017-07-17 2020-01-21 Sap Se Providing additional stack trace information for time-based sampling in asynchronous execution environments

Also Published As

Publication number Publication date
EP3616066A1 (fr) 2020-03-04
US10552296B2 (en) 2020-02-04
US20190121719A1 (en) 2019-04-25
WO2019078954A1 (fr) 2019-04-25

Similar Documents

Publication Publication Date Title
EP3616066B1 (fr) Human-readable, language-independent stack trace summary generation
US11449379B2 (en) Root cause and predictive analyses for technical issues of a computing environment
US10318412B1 (en) Systems, methods, and apparatus for dynamic software generation and testing
US11042476B2 (en) Variability system and analytics for continuous reliability in cloud-based workflows
US10922164B2 (en) Fault analysis and prediction using empirical architecture analytics
US8453027B2 (en) Similarity detection for error reports
US8661125B2 (en) System comprising probe runner, monitor, and responder with associated databases for multi-level monitoring of a cloud service
US11366713B2 (en) System and method for automatically identifying and resolving computing errors
CN110928772A (zh) Test method and apparatus
US20150067652A1 (en) Module Specific Tracing in a Shared Module Environment
US20150067654A1 (en) Tracing System for Application and Module Tracing
CN107533504A (zh) Anomaly analysis for software distribution
US9471594B1 (en) Defect remediation within a system
CN111108481B (zh) Fault analysis method and related device
JP2019515403A (ja) Graph database and system health monitoring for diagnostics
US20150106663A1 (en) Hash labeling of logging messages
US20130111473A1 (en) Passive monitoring of virtual systems using extensible indexing
JP2022100301A (ja) Method, computer program, and update-recommendation computer server for determining the potential impact of a software upgrade on a computing device (software upgrade stability recommendation)
US10180872B2 (en) Methods and systems that identify problems in applications
US11962456B2 (en) Automated cross-service diagnostics for large scale infrastructure cloud service providers
US20130111018A1 (en) Passive monitoring of virtual systems using agent-less, offline indexing
US11422880B1 (en) Methods and systems for determining crash similarity based on stack traces and user action sequence information
US9354962B1 (en) Memory dump file collection and analysis using analysis server and cloud knowledge base
US11868236B2 (en) Methods and systems for classifying application-specific crash reports using application-agnostic machine learning models
Basin et al. Monitoring the internet computer

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20191126

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20201007

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref document number: 602018054568

Country of ref document: DE

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: G06F0011360000

Ipc: G06F0011070000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06Q 10/0639 20230101ALI20230130BHEP

Ipc: G06Q 10/0631 20230101ALI20230130BHEP

Ipc: G06F 11/36 20060101ALI20230130BHEP

Ipc: G06F 11/07 20060101AFI20230130BHEP

INTG Intention to grant announced

Effective date: 20230301

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602018054568

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1595576

Country of ref document: AT

Kind code of ref document: T

Effective date: 20230802

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231204

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231102

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230831

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602018054568

Country of ref document: DE

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20230831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20240503

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20231002

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230802

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230831

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20240826

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IE

Payment date: 20240827

Year of fee payment: 7

Ref country code: DE

Payment date: 20240828

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240827

Year of fee payment: 7