US20150046909A1 - System, Method, and Apparatus for Automatic Recording and Replaying of Application Executions - Google Patents
- Publication number: US20150046909A1
- Authority: United States (US)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Definitions
- the exemplary embodiments of this invention relate generally to testing and replay systems for applications on computing devices such as mobile devices and, more particularly, to systems, methods, and apparatuses for recording user actions and producing executable tests in the form of human-readable scripts that can be modified to create new tests.
- a method comprises recording an interaction between a test device operating system and an application, the interaction being based on a user input; and sending the recorded interaction between the test device operating system and the application to a server.
- a method comprises receiving, on a developer electronic device, data from a server, the data comprising human-readable action-description language from a script compiler of the server; modifying the human-readable action-description language from the script compiler on the developer electronic device; and returning the modified compiled script from the developer electronic device to the server.
- the data pertains to a test created by recording an interaction on a remotely-located testing electronic device.
- a method comprises receiving data pertaining to a recorded interaction between a test device operating system and an application on a test device, the recorded interaction being based on a user input; compiling the data pertaining to the recorded interaction in a script compiler, the data comprising human-readable action-description language; sending the compiled data comprising human-readable action-description language from the script compiler of the server to a developer device; receiving modified data from the developer device; and sending the modified data from the developer device to the test device.
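The round trip described in the three aspects above can be sketched in Python. This is only an illustration of the data flow (record on the test device, compile on the server, modify on the developer device); every function and field name here is a hypothetical stand-in, not drawn from the patent:

```python
import json

def record_interaction(log, method, params):
    """Test device: log one OS<->application interaction from a user input."""
    log.append({"method": method, "params": params})
    return log

def compile_to_script(log):
    """Server: compile logged interactions into human-readable action lines."""
    return [f"{e['method']} {json.dumps(e['params'])}" for e in log]

def modify_script(script, old, new):
    """Developer device: edit the human-readable script to create a new test."""
    return [line.replace(old, new) for line in script]

# Round trip: record on the test device, compile on the server,
# modify on the developer device, and return the result for replay.
log = []
record_interaction(log, "tap", {"target": "loginButton"})
record_interaction(log, "typeText", {"target": "userField", "text": "alice"})
script = compile_to_script(log)
new_test = modify_script(script, "alice", "bob")
```

The modified script (here, swapping a credential) is what would travel back through the server to the test device to simulate the original user input under new conditions.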
- FIG. 1 is a schematic illustration of one exemplary embodiment of a system architecture illustrating interactions between various devices
- FIG. 2 is a schematic illustration of one exemplary embodiment of an implementation of instrumentation in the system architecture of FIG. 1 on an iOS platform;
- FIG. 3 is a schematic illustration of one exemplary embodiment of an implementation of instrumentation in the system architecture of FIG. 1 on an Android platform;
- FIG. 4 is a flow chart illustrating one exemplary embodiment of a method of general operation of the system of FIG. 1 ;
- FIG. 5 is a flow chart illustrating one exemplary embodiment of an implementation step, a logging/scripting step, and a playback step of the method of FIG. 4 ;
- FIG. 6 is a flow chart illustrating one exemplary embodiment of the playback step of the method of FIG. 4 ;
- FIG. 7 is a flow chart illustrating one exemplary system for performing the exemplary embodiments described herein.
- the exemplary embodiments disclosed herein are directed to systems, methods, and apparatuses for the automatic recording and replaying of application executions.
- Such systems, methods, and apparatuses involve the generation and replay of test scripts, which can then be deployed on devices (e.g., phones, tablets, computers, robotic surgery equipment, and the like, which are hereinafter collectively referred to as “devices”) for field testing.
- high-fidelity replay is less of a concern than is the ease of test creation and scalability.
- the systems, methods, and apparatuses disclosed herein also capture many types of non-deterministic inputs and the contexts around them such as timer events (e.g., time of day), location, and network data (e.g., connectivity status, state of device, ambient conditions, battery level, and the like) in support of deterministic replay scenarios where debugging may be the goal.
- the level of complexity and overhead that would be added to fully support logging and replaying environmental, nondeterministic events goes beyond the criteria for an industrial mobile application testing tool.
- the systems, methods, and apparatuses disclosed herein are based on the principle that lower-fidelity replay is already sufficient to discover most bugs in mobile applications, and higher-fidelity replay can be enabled incrementally when desired.
- the architecture of the systems, methods, and apparatuses disclosed herein is based on various design goals, namely, transparency, ease of use, and diverse replay fidelities. In efforts to meet these goals, the tests are replayed on unmodified devices (e.g., devices that are neither rooted nor jailbroken). With regard to transparency, the systems, methods, and apparatuses can be injected into applications with minimal or no source code modifications. Developers do not need to manually annotate their code or issue special system commands to enable logging and replay. In one exemplary implementation of the systems, methods, and apparatuses into an Android device, access to source code of the application is not required. In another exemplary implementation into an iOS device, a one-line change to the main function of the code of the application is made. The systems, methods, and apparatuses as disclosed herein can also be injected into hybrid applications.
- the systems disclosed herein overlay a record-and-replay user interface (UI) on the application.
- This UI can then be used to drive the generation of trace logs and replay them on the device.
- Tests are then created by recording an interaction of a user (e.g., a tester) with the device.
- the tests are encoded in a human-readable action-description language from an open-source JavaScript compiler (e.g., CLEARSCRIPT) that can be modified to create new tests.
- suitable fidelity of replay depends on the testing goals. Higher fidelity replays may incur additional costs but are more effective at determining causes for bugs, while lower fidelity replays may be cheaper to generate and still appropriate for testing code coverage.
- the underlying logging component of the exemplary systems, methods, and apparatuses disclosed herein instruments arbitrary application code and can be configured to create different levels of replay fidelity.
- the system architecture is configured to allow for the transparent, easy, and diverse field testing of the test scripts.
- testers can add the system to any native mobile application. Enablement of the field testing is achieved by encapsulating client-side components of the system into a single library, which is then injected as a subsystem into an existing application. Injecting the subsystem is achieved differently on different systems.
- one exemplary embodiment of a system illustrating interactions between various devices is designated generally by the reference number 100 and is hereinafter referred to as “system 100.”
- an apparatus such as a test device 110 is linked to an apparatus such as a server 120 .
- the test device 110 includes a system library 112 , a testable application 114 , and a test device operating system 116 .
- the system library 112 operates as an intermediary between the testable application 114 and the test device operating system 116 and includes component definitions of one or more mobile framework classes, which serve to wrapper instantiated objects from a call stack when the testable application 114 is run.
- User input is entered into the test device 110 to access the testable application 114.
- method invocations on the test device operating system 116 are intercepted by the system library 112 and are processed by wrappered objects during execution in real time. These intercepted invocations contain all the associated parameters that are utilized for method execution.
- the invocations are logged as interactions into a memory of the test device 110 as the invocation is passed to the testable application 114 for processing.
- any returnable response is logged into the memory of the test device 110 using a logging function, and the response is returned back to the calling object (e.g., the test device operating system 116 ), thereby preserving the call stack.
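The interception pattern described above — log the invocation and its parameters, pass it through to the application, log the returnable response, and hand the response back so the call stack is preserved — can be sketched as a Python decorator. This is a language-neutral analogue of the system library's wrappering, with hypothetical names:

```python
import functools

interaction_log = []  # stands in for the memory of the test device

def logged(method):
    """Wrap a method so each invocation and response is logged in real time."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        entry = {"method": method.__name__, "args": args, "kwargs": kwargs}
        result = method(*args, **kwargs)   # pass the invocation through
        entry["response"] = result         # log any returnable response
        interaction_log.append(entry)
        return result                      # return to the caller: call stack preserved
    return wrapper

@logged
def on_button_tap(x, y):
    """A hypothetical application handler for a UI invocation."""
    return f"tapped at ({x}, {y})"

response = on_button_tap(120, 40)
```

Because the wrapper forwards all parameters and returns the original result, the caller cannot distinguish the instrumented method from the original one.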
- the system 100 also includes an apparatus such as a developer device 130 remotely-located from the test device 110 .
- the developer device 130 is similar to the test device 110 and includes a system library 132 , a developer application 134 , and a developer device operating system 136 .
- the system library 132 , the developer application 134 , and the developer device operating system 136 are configured to operate similar to the relevant corresponding elements of the test device 110 and are further configured to monitor the events (e.g., the invocations) during execution of the testable application 114 .
- data is transferred from the system library 112 of the test device 110 to the server 120 and logged (e.g., using recording log 125 ).
- the data is encoded by the compiler 115 and transferred to the system library 132 of the developer device 130.
- the data transferred to the developer device 130 is encoded in the human-readable action-description language from the compiler 115 and modified by the developer on the developer device 130 to create new tests that can be returned back through the server 120 to the test device 110 to simulate the user input to the test device 110 .
- Modification of the human-readable action-description language includes, but is not limited to, modification of credentials in the script by a tester (e.g., to test if a username/password dialog screen is working correctly).
- the data transferred from the test device 110 to the server 120 and subsequently to the developer device 130 are logged in the recording log 125 as recorded interactions in the form of the human-readable action-description language from the compiler.
- This logging is line-by-line encoding of the human-readable action-description language, which thus allows the compiler 115 in the server 120 to act as a validator of the recording log 125 .
- the data transferred may be a compressed form of a text-based standard (e.g., JSON (JAVASCRIPT Object Notation)) that outlines captured events as objects, each object being converted to the human-readable action-description language by the compiler 115 at the server 120 .
- the system 100 could be enabled to send the data to and from the test device 110 , the server 120 , and the developer device 130 in a format that can be used to obfuscate the recorded transactions in order to inhibit or at least minimize the chances of interception.
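A minimal sketch of the transfer format described above: captured events as JSON objects, compressed for transfer, then converted line-by-line into a human-readable action language by the server-side compiler, which thereby also validates the recording log. The event schema and function names are assumptions for illustration:

```python
import json
import zlib

events = [
    {"type": "tap", "target": "loginButton"},
    {"type": "typeText", "target": "userField", "text": "alice"},
]

def pack(events):
    """Test device: compress JSON-encoded events for transfer to the server."""
    return zlib.compress(json.dumps(events).encode("utf-8"))

def compile_log(blob):
    """Server compiler: decode events and emit human-readable action lines."""
    decoded = json.loads(zlib.decompress(blob).decode("utf-8"))
    lines = []
    for event in decoded:
        if "type" not in event:  # line-by-line check: compiler doubles as validator
            raise ValueError(f"malformed event: {event}")
        rest = {k: v for k, v in event.items() if k != "type"}
        lines.append(event["type"] + " " + json.dumps(rest))
    return lines

script = compile_log(pack(events))
```

Real obfuscation against interception would use an actual encryption library rather than compression alone; compression is shown here only to mirror the compressed text-based transfer the text describes.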
- one exemplary embodiment of implementing instrumentation in the architecture of the system 100 on an iOS platform is designated generally by the reference number 200 and is hereinafter referred to as “iOS system 200.”
- native applications created for an iOS platform can be written using Objective-C, which is a dynamic language that allows code adaptation at run time.
- the system 100 achieves instrumentation of application code by using Objective-C categories and method swizzling (i.e., exchanging method implementations at run time).
- a given Objective-C class consists of both an interface definition and associated implementation.
- an object is created based on these definitions.
- a class interface 210 comprises properties (e.g., Property 1 and Property 2) and methods (e.g., Method 1 and Method 2).
- a class implementation 220 is used to provide a template using the properties and methods, thereby defining what data representations represent which attributes.
- Objective-C allows a developer to modify a class definition via a category. Categories are much like normal class definitions with an interface and implementation but serve to modify an existing class. A category can be used for any class in Objective-C and does not require access to source code to be recompiled.
- a category interface 250 is declared to comprise a method (e.g., Method 3). This method is implemented in a category implementation 260 .
- a category is defined for the sample class definition where Method 3 overrides Method 2 in the original class definition.
- When the class is instantiated (in a class instantiation 270) at run time, the category augments the class with Method 3.
- For the system library 112, Method 3 is a wrapper over Method 2. However, when Method 3 is called, Method 3 creates a log item capturing all collected data. Furthermore, Method 3 uses method swizzling in Objective-C to call the original Method 2. Additionally, Method 3 can return the log result to the caller.
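The category/swizzling pattern has a rough analogue in Python's run-time attribute replacement, shown here as a sketch (this is not Objective-C, and the class and method names are illustrative): the wrapper ("Method 3") logs the call, invokes the saved original implementation ("Method 2"), logs the result, and returns it to the caller.

```python
log_items = []

class SampleClass:
    def method2(self, value):
        """The original implementation ("Method 2")."""
        return value * 2

# Keep a reference to the original implementation ...
_original_method2 = SampleClass.method2

# ... and install the wrapper ("Method 3") in its place at run time.
def method3(self, value):
    log_items.append({"method": "method2", "value": value})  # capture collected data
    result = _original_method2(self, value)                  # call the original
    log_items.append({"response": result})                   # log the result
    return result                                            # return to the caller

SampleClass.method2 = method3  # augment the class, as a category would

obj = SampleClass()
result = obj.method2(21)
```

In Objective-C proper, the swap would be done with the runtime's implementation-exchange facility rather than plain attribute assignment, but the observable effect — transparent logging around an unmodified call — is the same.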
- an Android activity 310 allows for creating a custom class loader that can be used in place of a system class loader when loading classes needed by an Android application 320 .
- the system 100 makes use of this by overriding the system class loader with a specialized version that is capable of loading instrumented classes 340 that provide logging and replay operations. If the application is already compiled, the code to override the class loader and the instrumentation library can be bytecode-injected into the application post compile time.
- an instrumented class is a specialization of an Android class, which is shown at 350 .
- Objects created from instrumented classes inherit application logic from their parent object for most methods.
- One exemplary exception is that the instrumented object will have the ability to proxy UI events, network data, and application state, and to log this data as part of a data collection that will be sent to a server for translation by the compiler 115 .
- the instrumented object also has the ability to receive commands that allow it to present customized behavior at run time, for example replaying the recorded translation from the compiler 115 on the test device 110 .
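The instrumented-class idea on Android can be sketched as a subclass that inherits its parent's application logic for most methods, proxies events into a data collection, and accepts replay commands. This Python sketch is an analogue only — the real mechanism is a custom class loader supplying instrumented Java classes — and all names are hypothetical:

```python
class Activity:
    """Stands in for an ordinary Android framework class."""
    def on_click(self, view_id):
        return f"clicked {view_id}"

class InstrumentedActivity(Activity):
    """Specialization that logs events and can present replay behavior."""
    def __init__(self):
        self.collected = []  # data collection destined for the server

    def on_click(self, view_id):
        self.collected.append({"event": "on_click", "view_id": view_id})
        return super().on_click(view_id)  # inherit the parent's logic

    def replay(self, command):
        """Receive a command (e.g., translated by the server-side compiler)."""
        if command["event"] == "on_click":
            return self.on_click(command["view_id"])

activity = InstrumentedActivity()
live = activity.on_click("btn_ok")             # recorded during live use
replayed = activity.replay(activity.collected[0])  # customized run-time behavior
```

Replaying a collected command exercises the same inherited logic as the live interaction, which is what lets playback reproduce the application's state.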
- one exemplary embodiment of a method of general operation of the system 100 is illustrated at 400 and is hereinafter referred to as “method 400.”
- an initiation step 410 is executed, followed by an implementation step 420 in which the method invocations are processed as described above.
- a routine 425 comprising a two-part logging/scripting step 430 and a playback step 440 is carried out.
- an application recording phase 432 is carried out in which the recording of loggable data pertaining to the interaction of the tester with the system 100 is initiated, followed by a disablement step 434 in which an intentional disablement of the recording phase 432 is carried out.
- In the playback step 440, execution/playback of recorded data is carried out by the test device 110.
- In the implementation step 420, the method/operation is invoked by user (e.g., tester) interaction (at an invocation step 510), and an override step 520 is carried out (e.g., where Method 3 overrides Method 2 in the original class definition, as with the iOS system 200, or where a custom class loader can be used in place of the system class loader, as with the Android system 300).
- the recording of loggable data is initiated via a predetermined subset of application functions from the system library 112 which call the original method and operation.
- This recording is tested by first initiating the recording phase 432 of the system 100 , which is enabled via a visible overlaid control set.
- the recording phase 432 allows for a series of method invocations to be executed within the testable application 114 , as triggered by interaction with the application.
- a log can be made of the invocation/operation result in an optional result logging step 550 . Control may then be passed back to the implementation step 420 as desired.
- Upon completion of the recording phase, the developer or tester disables recording via the visible control, which aggregates all of the logged method invocations and relevant responses, in the disablement step 434. All logged data is aggregated in an aggregation step 560. Optionally, the logged data can be compressed and/or encrypted in a compression/encryption step 570.
- the system library 112 then sends (in a sending step 580 ) the logged data to the server 120 , which catalogs all the logged data (e.g., in the recording log 125 ) and generates script compiler templates (based on the data encoded in the human-readable action-description language from the compiler 115 and modified by the developer) from the aggregated contents.
- the testable application 114 is then re-executed in a playback phase based on the recorded interactions of the tester with the system.
- Playback consists of the system 100 creating a network connection from the test device 110 to the server 120 that stores the test script earmarked for playback.
- the system 100 parses the script template into executable commands in a parsing step 620 .
- Each entry is then executed in sequence by the system library 132 in an execute command step 630, simulating the tester input from the monitored events so that the testable application 114 runs without tester input and replicates a state of interest, such as an operation failure or the presence of a bug.
- a troubleshooting log 635 may be created to facilitate a troubleshooting application (for example, to address issues with regard to performing playback operations), the troubleshooting log 635 being uploadable to a help center 637 for analysis.
- executed playback entries are classified in two categories: UI events and non-UI events.
- the UI events consist of invocations that result from the tester's direct interaction with the interface of the testable application 114 .
- Playback of UI events is accomplished by mapping the details of an event interaction to the object instance targeted as the recipient of the event. For example, the coordinates of a selection event are associated with a subview or widget upon which the event is targeted to interact with.
- playback of non-UI events is accomplished by mapping parameterized values parsed from the script compiler entry information (from the developer) to a targeted instance of an application object that can handle the execution and response of the targeted invocation.
- Examples consist of initialized network requests for accessing data outside of the running testable application 114 as well as the returned responses.
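The playback loop above — parse the script template into executable commands, then dispatch UI events to the widget at the recorded coordinates and non-UI events to a parameterized application object — can be sketched as follows. The script syntax, widget table, and all names are assumptions for illustration:

```python
import json

widgets = {  # widget id -> bounding box (x, y, width, height)
    "loginButton": (100, 30, 80, 40),
}

def widget_at(x, y):
    """Map event coordinates to the subview/widget the event targets."""
    for name, (wx, wy, ww, wh) in widgets.items():
        if wx <= x < wx + ww and wy <= y < wy + wh:
            return name
    return None

def execute(line):
    """Parse one script entry into an executable command and run it."""
    action, payload = line.split(" ", 1)
    params = json.loads(payload)
    if action == "tap":                    # UI event: map coordinates to widget
        return ("tap", widget_at(params["x"], params["y"]))
    if action == "httpRequest":            # non-UI event: parameterized replay
        return ("request", params["url"])
    raise ValueError(f"unknown action: {action}")

script = [
    'tap {"x": 120, "y": 40}',
    'httpRequest {"url": "https://example.test/login"}',
]
results = [execute(line) for line in script]  # executed in sequence
```

Keeping the script human-readable means a tester can, for example, edit the recorded URL or coordinates in place before replay, which is the tractability advantage the text describes.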
- Recorded script compiler templates have the advantage of being easily tractable by a human reader, allowing for simple modifications and testing of various conditions upon replay of the testable application 114 .
- the system 100 provides the ability to visually capture various stages of recording/replay during testing in a verification step 640 .
- the option to selectively capture a screenshot is presented via a selection on the overlaid control set.
- a static capture of the visual screen, including any inputted values, is created and stored in memory.
- all captured screen images can optionally be stored on the test device 110 itself or sent to the server 120 . Accordingly, during the playback phase of testing, “verification points” can be constructed by capturing screen images of the playback results of the testing.
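One simple way to realize such verification points — not specified by the text, but a common approach — is to fingerprint each captured screen image and compare the fingerprint taken during recording with the one taken at the corresponding point of playback. Here the image bytes are stand-ins:

```python
import hashlib

def verification_point(capture):
    """Fingerprint a static capture of the visual screen."""
    return hashlib.sha256(capture).hexdigest()

# Real captures would be screenshot bytes; short byte strings stand in here.
recorded = verification_point(b"screen-after-login")
replayed = verification_point(b"screen-after-login")
changed = verification_point(b"screen-with-error-dialog")

match = recorded == replayed       # playback reproduced the recorded state
regression = recorded == changed   # a differing capture flags a deviation
```

Exact-hash comparison is deliberately strict; a production tool would likely tolerate minor rendering differences, but the pass/fail structure is the same.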
- control may be returned to the execute command step 630 .
- One advantage of the system 100 is the ability to introspect and test not only native applications, but hybrid applications as well. Hybrid applications run natively but host Web-based content using an embedded Web view to render the Web-based components.
- the system 100 provides an intermediary layer that wrappers the embedded Web view. Specifically, the system 100 inserts specialized JavaScript testing library code into the wrappered Web view, which executes upon initiation of any subsequent Web-rendering operation.
- the JavaScript testing library is responsible for capturing the state of all objects within the Web view and passing the state of any specified Web components to the higher-level native system library 112 . This allows for an entry to be created and logged in an identical manner as with previously described UI and non-UI native events.
- the system 100 translates Web events into loggable entries, and subsequent playback of the script compiler templates results in the wrappered Web container conveying translated Web events to the injected testing library for execution on targeted Web objects.
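The native side of the hybrid-application bridge can be sketched as a handler that receives serialized events from the injected JavaScript testing library and logs them exactly like native UI and non-UI events. The message format and function names are assumptions:

```python
import json

interaction_log = []

def on_webview_message(raw_message):
    """Receive a serialized event from the injected JS testing library."""
    event = json.loads(raw_message)
    event["source"] = "webview"  # tagged, but logged like any other event
    interaction_log.append(event)
    return event

# A message the injected library might emit for a click on a web component:
entry = on_webview_message('{"type": "click", "selector": "#submit"}')
```

On playback, the flow reverses: the native layer would deliver the translated event back through the bridge for the injected library to execute on the targeted web object.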
- the computing system 700 comprises one or more memories 745 , one or more processors 710 , one or more I/O interfaces 720 , and one or more wired or wireless network interfaces 730 .
- This example includes the testable application 114 , the system library 112 , and the test device operating system 116 .
- the server 120 may be connected to the computing system 700 through the network interfaces 730 .
- the computing system 700 may comprise circuitry 715 .
- the computing system 700 may be coupled to or include one or more displays 776 and one or more external device(s) 790 .
- the operations may also be performed, in part or completely, by the circuitry 715 that implements logic to carry out the operations.
- the circuitry 715 may be implemented as part of the one or more processors 710 or may be separate from the one or more processors 710 .
- the processors 710 may be any processing unit, such as a digital signal processor and/or single-core or multi-core general purpose processors.
- the circuitry 715 may be any electronic circuit such as an application specific integrated circuit or programmable logic.
- the memories 745 may comprise non-volatile and/or volatile RAM, cache memory, NAND-based flash memory, long term storage (e.g., hard drive), and/or read only memory.
- the one or more I/O interfaces 720 may include interfaces through which a user may interact with the computing system 700 .
- the display(s) 776 may be a touchscreen, flatscreen, monitor, television, projector, as examples.
- a user interacts with the computing system 700 through a UI 780 in an exemplary embodiment or through the network interface(s) 730 in another non-limiting embodiment.
- the external device(s) 790 enable a user to interact in one exemplary embodiment with the computing system 700 and may include a mouse, trackball, keyboard, or the like.
- the network interfaces 730 may be wired or wireless and may implement a number of protocols, such as cellular or local area network protocols.
- the elements in computing system 700 may be interconnected through any technology, such as buses, traces on a board, interconnects on semiconductors, or the like.
- aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer readable storage medium does not include a propagating wave.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Description
- Application development has proliferated in recent years due to an increase in the number and capabilities of handheld and tablet devices. The number of applications created and downloaded has resulted in a robust global ecosystem of software creation, application deployment, and revenue generation. The adoption of such applications has not been limited to consumers; enterprise deployment of mobile applications targeted at customers, clients, and internal employees has become widespread as companies compete for advantage despite economic challenges.
- With an exponential proliferation of applications created for computing devices such as smartphones and tablets, application testing by developers has become widespread. Before an application is deployed and made available to the public, it generally undergoes a period of testing. Without such a period of testing, risks that may compromise the marketability of the application may be incurred. For example, the application may receive negative reviews and be bypassed by alternative competitive offerings immediately available to potential end users. Such an outcome motivates the desire for a testing process that gives application developers the flexibility to record all the states of an application execution and to provide playback capabilities to replicate application changes and outcomes up to any given state of interest.
- Applications for computing devices pose an interesting challenge to developers. They are designed and developed in non-mobile environments, and tested either on special emulators that replicate a computing system, a mobile operating system, or on an actual mobile device. When attempting to resolve an error or bug, a developer typically uses a proprietary development kit to set breakpoints and debug code step by step. However, this approach is limited, as it does not account for field testing, which allows for testing an application in the actual context in which it will be used. For mobile applications, field testing is useful since emulators cannot always replicate the exact conditions under which an application will execute on a real device.
- The foregoing and other aspects of exemplary embodiments are made more evident in the following Detailed Description, when read in conjunction with the attached Drawing Figures, wherein:
-
FIG. 1 is a schematic illustration of one exemplary embodiment of a system architecture illustrating interactions between various devices; -
FIG. 2 is a schematic illustration of one exemplary embodiment of an implementation of instrumentation in the system architecture of FIG. 1 on an iOS platform; -
FIG. 3 is a schematic illustration of one exemplary embodiment of an implementation of instrumentation in the system architecture of FIG. 1 on an Android platform; -
FIG. 4 is a flow chart illustrating one exemplary embodiment of a method of general operation of the system of FIG. 1; -
FIG. 5 is a flow chart illustrating one exemplary embodiment of an implementation step, a logging/scripting step, and a playback step of the method of FIG. 4; -
FIG. 6 is a flow chart illustrating one exemplary embodiment of the playback step of the method of FIG. 4; and -
FIG. 7 is a flow chart illustrating one exemplary system for performing the exemplary embodiments described herein. - The exemplary embodiments disclosed herein are directed to systems, methods, and apparatuses for the automatic recording and replaying of application executions. Such systems, methods, and apparatuses involve the generation and replay of test scripts, which can then be deployed on devices (e.g., phones, tablets, computers, robotic surgery equipment, and the like, which are hereinafter collectively referred to as “devices”) for field testing. In field testing the test scripts, user actions (e.g., deterministic input such as user input) can be replayed on a variety of devices operating in different environmental contexts to verify behaviors of the applications in the real world. In the field testing, high-fidelity replay is less of a concern than is the ease of test creation and scalability. From the perspective of an industrial tool designed for mobile-application testing, it should be noted that most mobile applications are driven by multiple tactile user interactions. Thus, capturing and replaying those interactions within an application is sufficient, in most cases, to replicate a state of interest, such as an operation failure or the presence of a bug.
- The systems, methods, and apparatuses disclosed herein also capture many types of non-deterministic inputs and the contexts around them such as timer events (e.g., time of day), location, and network data (e.g., connectivity status, state of device, ambient conditions, battery level, and the like) in support of deterministic replay scenarios where debugging may be the goal. However, the level of complexity and overhead that would be added to fully support logging and replaying environmental, nondeterministic events goes beyond the criteria for an industrial mobile application testing tool. In fact, the systems, methods, and apparatuses disclosed herein are based on the principle that lower-fidelity replay is already sufficient to discover most bugs in mobile applications, and higher-fidelity replay can be enabled incrementally when desired.
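- The capture of such contextual, non-deterministic inputs can be pictured as logged event records; the document also describes transferring captured events in a compressed text-based form such as JSON. The sketch below is an illustration only: the field names (`ts`, `loc`, `net`, `battery`) are invented here, and gzip stands in for whatever compression an implementation might choose.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ContextCapture {
    // Hypothetical JSON encoding of one captured context snapshot
    // (time of day, location, connectivity status, battery level).
    public static String encode(long timeOfDay, String location,
                                String connectivity, int batteryPercent) {
        return String.format(
            "{\"ts\":%d,\"loc\":\"%s\",\"net\":\"%s\",\"battery\":%d}",
            timeOfDay, location, connectivity, batteryPercent);
    }

    // Gzip the logged records before transmission to the server.
    public static byte[] compress(String json) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(json.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out.toByteArray();
    }

    // Decompress server-side before handing the records to the script compiler.
    public static String decompress(byte[] data) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A round trip through `compress` and `decompress` leaves the record intact, which is the property a transfer format like this would rely on.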
- The architecture of the systems, methods, and apparatuses disclosed herein is based on various design goals, namely, transparency, ease of use, and diverse replay fidelities. In efforts to meet these goals, the tests are replayed on unmodified devices (e.g., devices that are neither rooted nor jailbroken). With regard to transparency, the systems, methods, and apparatuses can be injected into applications with minimal or no source code modifications. Developers do not need to manually annotate their code or issue special system commands to enable logging and replay. In one exemplary implementation of the systems, methods, and apparatuses into an Android device, access to the source code of the application is not required. In another exemplary implementation into an iOS device, a one-line change to the main function of the code of the application is made. The systems, methods, and apparatuses as disclosed herein can also be injected into hybrid applications.
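- One way such source-free injection can work on Android (the document describes overriding the system class loader with a specialized version that loads instrumented classes) is a delegating loader that prefers an instrumented variant of a class when one exists. The `Instrumented<Name>` naming convention below is an assumption made purely for illustration:

```java
// Sketch of a class loader that loads "Instrumented" + name in place of name
// when such a class is available, falling back to the ordinary class otherwise.
public class InstrumentingClassLoader extends ClassLoader {
    private final String instrumentedPrefix;

    public InstrumentingClassLoader(ClassLoader parent, String instrumentedPrefix) {
        super(parent);
        this.instrumentedPrefix = instrumentedPrefix;
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        try {
            // Prefer the instrumented specialization if it is on the classpath.
            return super.loadClass(instrumentedNameFor(name));
        } catch (ClassNotFoundException e) {
            // Otherwise load the ordinary system class.
            return super.loadClass(name);
        }
    }

    // e.g. "com.example.LoginActivity" -> "com.example.InstrumentedLoginActivity"
    public String instrumentedNameFor(String name) {
        int dot = name.lastIndexOf('.');
        return dot < 0
            ? instrumentedPrefix + name
            : name.substring(0, dot + 1) + instrumentedPrefix + name.substring(dot + 1);
    }
}
```

Because the loader only overrides the single-argument `loadClass`, the standard parent-delegation model still applies to each lookup it makes.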
- With regard to ease of use, once injected into an application, the systems disclosed herein overlay a record-and-replay user interface (UI) on the application. This UI can then be used to drive the generation of trace logs and to replay them on the device. Tests are then created by recording an interaction of a user (e.g., a tester) with the device. The tests are encoded in a human-readable action-description language from an open-source JAVASCRIPT compiler (e.g., CLEARSCRIPT) that can be modified to create new tests.
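- The action-description language itself is not reproduced in this excerpt, so the line format below (`ui …` for UI events, anything else treated as a non-UI event) is an invented stand-in. The sketch only illustrates how recorded entries might be classified and replayed in sequence:

```java
import java.util.ArrayList;
import java.util.List;

public class PlaybackDispatcher {
    // Two categories of playback entries: UI events and non-UI events.
    public enum Category { UI_EVENT, NON_UI_EVENT }

    // Classify one parsed script entry by its leading keyword.
    public static Category classify(String entry) {
        return entry.startsWith("ui ") ? Category.UI_EVENT : Category.NON_UI_EVENT;
    }

    // Execute each entry in sequence, dispatching by category and
    // returning a trace of what was simulated.
    public static List<String> replay(List<String> entries) {
        List<String> trace = new ArrayList<>();
        for (String entry : entries) {
            if (classify(entry) == Category.UI_EVENT) {
                // Map the event's details onto the targeted recipient object.
                trace.add("simulate " + entry.substring(3));
            } else {
                // Replay a parameterized non-UI invocation, e.g. a network request.
                trace.add("invoke " + entry);
            }
        }
        return trace;
    }
}
```

Because the entries are plain text, a tester could edit a line (say, change the coordinates or the target) and replay the modified script to create a new test.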
- With regard to diverse replay fidelities, the suitable fidelity of replay depends on the testing goals. Higher-fidelity replays may incur additional costs but are more effective at determining the causes of bugs, while lower-fidelity replays may be cheaper to generate and still be appropriate for testing code coverage. The underlying logging component of the exemplary systems, methods, and apparatuses disclosed herein instruments arbitrary application code and can be configured to create different levels of replay fidelity.
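- Configurable replay fidelity can be sketched as a mapping from a fidelity level to the categories of events the logging component records. The level names and event kinds below are assumptions for illustration, not terminology from this document:

```java
import java.util.EnumSet;
import java.util.Set;

public class FidelityConfig {
    // Categories of loggable input, following the kinds of data discussed above.
    public enum EventKind { UI_EVENT, NETWORK, TIMER, LOCATION, DEVICE_STATE }

    // Hypothetical fidelity levels: higher levels record more
    // non-deterministic, environmental context.
    public enum Fidelity { LOW, MEDIUM, HIGH }

    // Map a fidelity level to the set of event kinds that get logged.
    public static Set<EventKind> loggedKinds(Fidelity f) {
        switch (f) {
            case LOW:    // UI interactions only: cheap, suited to coverage testing
                return EnumSet.of(EventKind.UI_EVENT);
            case MEDIUM: // add network data for more repeatable replays
                return EnumSet.of(EventKind.UI_EVENT, EventKind.NETWORK);
            default:     // HIGH: record environmental, non-deterministic context too
                return EnumSet.allOf(EventKind.class);
        }
    }
}
```

Raising the level enables higher-fidelity replay incrementally, at the price of logging more kinds of events.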
- Thus, the system architecture is configured to allow for the transparent, easy, and diverse field testing of the test scripts. From an architectural perspective, testers can add the system to any native mobile application. Enablement of the field testing is achieved by encapsulating client-side components of the system into a single library, which is then injected as a subsystem into an existing application. Injecting the subsystem is achieved differently on different systems.
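- Once injected, such a library intermediates between the application and the operating system, wrapping objects so that each invocation and its response are logged before control returns to the caller. That wrap-log-delegate pattern can be sketched in plain Java with a dynamic proxy; the `OsService` interface here is an invented stand-in, not an API from this document:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

public class InterceptDemo {
    // Stand-in for an operating-system facility that the application invokes.
    public interface OsService {
        String fetch(String resource);
    }

    // Wrap a service so every invocation and its response are logged before
    // the response is handed back, preserving the caller's view of the call.
    public static OsService wrap(OsService target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.add("call " + method.getName() + Arrays.toString(args));
            Object result = method.invoke(target, args); // delegate to the real object
            log.add("return " + result);                 // log the returnable response
            return result;                               // hand the response back
        };
        return (OsService) Proxy.newProxyInstance(
                OsService.class.getClassLoader(),
                new Class<?>[] { OsService.class },
                handler);
    }
}
```

The log produced this way contains both the invocation with its parameters and the returned response, which is the raw material a record-and-replay tool needs.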
- Referring to
FIG. 1, one exemplary embodiment of a system illustrating interactions between various devices is designated generally by the reference number 100 and is hereinafter referred to as “system 100.” In system 100, an apparatus such as a test device 110 is linked to an apparatus such as a server 120. The test device 110 includes a system library 112, a testable application 114, and a test device operating system 116. The system library 112 operates as an intermediary between the testable application 114 and the test device operating system 116 and includes component definitions of one or more mobile framework classes, which serve to wrapper instantiated objects from a call stack when the testable application 114 is run. - User input is entered into the
test device 110 to access the testable application 114. As the testable application 114 is run, method invocations on the test device operating system 116 are intercepted by the system library 112 and are processed by wrappered objects during execution in real time. These intercepted invocations contain all the associated parameters that are utilized for method execution. The invocations are logged as interactions into a memory of the test device 110 as the invocation is passed to the testable application 114 for processing. Upon completion of the processing, any returnable response is logged into the memory of the test device 110 using a logging function, and the response is returned back to the calling object (e.g., the test device operating system 116), thereby preserving the call stack. - The
system 100 also includes an apparatus such as a developer device 130 remotely located from the test device 110. The developer device 130 is similar to the test device 110 and includes a system library 132, a developer application 134, and a developer device operating system 136. The system library 132, the developer application 134, and the developer device operating system 136 are configured to operate similarly to the relevant corresponding elements of the test device 110 and are further configured to monitor the events (e.g., the invocations) during execution of the testable application 114. During testing, data is transferred from the system library 112 of the test device 110 to the server 120 and logged (e.g., using recording log 125). In the server 120, the data is encoded in a compiler 115 and transferred to the system library 132 of the developer device 130. The data transferred to the developer device 130 is encoded in the human-readable action-description language from the compiler 115 and modified by the developer on the developer device 130 to create new tests that can be returned back through the server 120 to the test device 110 to simulate the user input to the test device 110. Modification of the human-readable action-description language includes, but is not limited to, modification of credentials in the script by a tester (e.g., to test if a username/password dialog screen is working correctly). - In one exemplary embodiment, the data transferred from the
test device 110 to the server 120 and subsequently to the developer device 130 is logged in the recording log 125 as recorded interactions in the form of the human-readable action-description language from the compiler. This logging is a line-by-line encoding of the human-readable action-description language, which thus allows the compiler 115 in the server 120 to act as a validator of the recording log 125. - In another exemplary embodiment, the data transferred may be a compressed form of a text-based standard (e.g., JSON (JAVASCRIPT Object Notation)) that outlines captured events as objects, each object being converted to the human-readable action-description language by the
compiler 115 at the server 120. However, for security purposes, the system 100 could be enabled to send the data to and from the test device 110, the server 120, and the developer device 130 in a format that obfuscates the recorded transactions in order to inhibit or at least minimize the chances of interception. - Referring now to
FIG. 2, one exemplary embodiment of implementing instrumentation in the architecture of the system 100 on an iOS platform is designated generally by the reference number 200 and is hereinafter referred to as “iOS system 200.” In the iOS system 200, native applications created for an iOS platform can be written in Objective-C, which is a dynamic language that allows code adaptation at run time. The system 100 achieves instrumentation of application code by using Objective-C categories and method swizzling (i.e., exchanging method implementations at run time). - In the
iOS system 200, a given Objective-C class consists of both an interface definition and an associated implementation. Typically, an object is created based on these definitions. In particular, a class interface 210 comprises properties (e.g., Property 1 and Property 2) and methods (e.g., Method 1 and Method 2). A class implementation 220 is used to provide a template using the properties and methods, thereby defining which data representations correspond to which attributes. - Additionally, Objective-C allows a developer to modify a class definition via a category. Categories are much like normal class definitions, with an interface and implementation, but serve to modify an existing class. A category can be used for any class in Objective-C and does not require access to source code or recompilation. A
category interface 250 is declared to comprise a method (e.g., Method 3). This method is implemented in a category implementation 260. - A category is defined for the sample class definition where
Method 3 overrides Method 2 in the original class definition. When the class is instantiated (in a class instantiation 270) at run time, the category augments the class with Method 3. For the system library 112, Method 3 is a wrapper over Method 2. However, when Method 3 is called, Method 3 creates a log item capturing all collected data. Furthermore, Method 3 uses method swizzling in Objective-C to call the original Method 2. Additionally, Method 3 can return the log result to the caller. - Referring now to
FIG. 3, one exemplary embodiment of implementing instrumentation in the architecture of the system 100 on an Android platform is designated generally by the reference number 300 and is hereinafter referred to as “Android system 300.” In the Android system 300, an Android activity 310 allows for creating a custom class loader that can be used in place of a system class loader when loading classes needed by an Android application 320. The system 100 makes use of this by overriding the system class loader with a specialized version that is capable of loading instrumented classes 340 that provide logging and replay operations. If the application is already compiled, the code to override the class loader and the instrumentation library can be bytecode-injected into the application post compile time. At run time, when the specialized class loader is invoked, it checks to see if an instrumented class 340 exists. If it does, the specialized class loader first loads the instrumented class, rather than the system class, and instantiates an instrumented object in its place. An instrumented class is a specialization of an Android class, which is shown at 350. Objects created from instrumented classes inherit application logic from their parent object for most methods. One exemplary exception is that the instrumented object will have the ability to proxy UI events, network data, and application state, and to log this data as part of a data collection that will be sent to a server for translation by the compiler 115. The instrumented object also has the ability to receive commands that allow it to present customized behavior at run time, for example replaying the recorded translation from the compiler 115 on the test device 110. - Referring to
FIG. 4, one exemplary embodiment of a method of general operation of the system 100 is illustrated at 400 and is hereinafter referred to as “method 400.” In method 400, an initiation step 410 is executed, followed by an implementation step 420 in which the method invocations are processed as described above. Subsequent to the implementation step 420, a routine 425 comprising a two-part logging/scripting step 430 and a playback step 440 is carried out. In the logging/scripting step 430, an application recording phase 432 is carried out in which the recording of loggable data pertaining to the interaction of the tester with the system 100 is initiated, followed by a disablement step 434 in which an intentional disablement of the recording phase 432 is carried out. In the playback step 440, execution/playback of recorded data is carried out by the test device 110. - Referring to
FIG. 5, the implementation step 420, the logging/scripting step 430, and the playback step 440 are described in more detail. In the implementation step 420, the method/operation is invoked by user (e.g., tester) interaction (at an invocation step 510), and an override step 520 is carried out (e.g., where Method 3 overrides Method 2 in the original class definition, as with the iOS system 200, or where a custom class loader can be used in place of the system class loader, as with the Android system 300). - In the logging/
scripting step 430, the recording of loggable data is initiated via a predetermined subset of application functions from the system library 112 which call the original method and operation. This recording is tested by first initiating the recording phase 432 of the system 100, which is enabled via a visible overlaid control set. The recording phase 432 allows for a series of method invocations to be executed within the testable application 114, as triggered by interaction with the application. Optionally, a log can be made of the invocation/operation result in an optional result logging step 550. Control may then be passed back to the implementation step 420 as desired. - Upon completion of the recording phase, the developer or tester disables recording via the visible control, which aggregates all of the logged method invocations and relevant responses, in the
disablement step 434. All logged data is aggregated in an aggregation step 560. Optionally, the logged data can be compressed and/or encrypted in a compression/encryption step 570. The system library 112 then sends (in a sending step 580) the logged data to the server 120, which catalogs all the logged data (e.g., in the recording log 125) and generates script compiler templates (based on the data encoded in the human-readable action-description language from the compiler 115 and modified by the developer) from the aggregated contents. The testable application 114 is then re-executed in a playback phase based on the recorded interactions of the tester with the system. - Referring now to
FIG. 6, one exemplary embodiment of the playback step 440 is shown. Playback consists of the system 100 creating a network connection from the test device 110 to the server 120 that stores the test script earmarked for playback. Upon retrieval of a template from the server 120 in a retrieval step 610, the system 100 parses the script template into executable commands in a parsing step 620. - Each entry is then executed in sequence by the
system library 132 in an execute command step 630 in order to simulate the tester input based on the monitored events, thereby running the application without live tester input and causing the testable application 114 to replicate a state of interest, such as an operation failure or the presence of a bug. In simulating the tester input, a troubleshooting log 635 may be created to facilitate a troubleshooting application (for example, to address issues with regard to performing playback operations), the troubleshooting log 635 being uploadable to a help center 637 for analysis. - In the execute
command step 630, executed playback entries are classified into two categories: UI events and non-UI events. The UI events consist of invocations that result from the tester's direct interaction with the interface of the testable application 114. Playback of UI events is accomplished by mapping the details of an event interaction to the object instance targeted as the recipient of the event. For example, the coordinates of a selection event are mapped to the subview or widget with which the event is targeted to interact. Similarly, playback of non-UI events is accomplished by mapping parameterized values parsed from the script compiler entry information (from the developer) to a targeted instance of an application object that can handle the execution and response of the targeted invocation. Examples include initiated network requests for accessing data outside of the running testable application 114 as well as the returned responses. Recorded script compiler templates have the advantage of being easily readable by a human, allowing for simple modifications and testing of various conditions upon replay of the testable application 114. - The
system 100 provides the ability to visually capture various stages of recording/replay during testing in a verification step 640. As a developer or tester initiates recording of a template, the option to selectively capture a screenshot is presented via a selection on the overlaid control set. By invoking the screen capture selection, a static capture of the visual screen, including any inputted values, is created and stored in memory. Upon completion of the recording template, all captured screen images can optionally be stored on the test device 110 itself or sent to the server 120. Accordingly, during the playback phase of testing, “verification points” can be constructed by capturing screen images of the playback results of the testing. These captured images can be compared to those stored during the initial recording of the testing template, thereby providing an accurate visual representation of both the recorded and playback phases of the application and allowing for an accurate visual comparison. At the close of the verification step 640, control may be returned to the execute command step 630. - One advantage of the
system 100 is the ability to introspect and test not only native applications, but hybrid applications as well. Hybrid applications run natively but host Web-based content using an embedded Web view to render the Web-based components. The system 100 provides an intermediary layer that wrappers the embedded Web view. Specifically, the system 100 inserts specialized JavaScript testing library code into the wrappered Web view, which executes upon initiation of any subsequent Web-rendering operation. The JavaScript testing library is responsible for capturing the state of all objects within the Web view and passing the state of any specified Web components to the higher-level native system library 112. This allows for an entry to be created and logged in an identical manner as with previously described UI and non-UI native events. The system 100 translates Web events into loggable entries, and subsequent playback of the script compiler templates results in the wrappered Web container conveying translated Web events to the injected testing library for execution on targeted Web objects. - Referring now to
FIG. 7, an overview of a computing system incorporating the test device 110 and being suitable for use with the exemplary embodiments described herein is designated generally by the reference number 700 and is hereinafter referred to as “computing system 700.” The computing system 700 comprises one or more memories 745, one or more processors 710, one or more I/O interfaces 720, and one or more wired or wireless network interfaces 730. This example includes the testable application 114, the system library 112, and the test device operating system 116. The server 120 may be connected to the computing system 700 through the network interfaces 730. Alternatively or in addition to the one or more processors 710, the computing system 700 may comprise circuitry 715. The computing system 700 may be coupled to or include one or more displays 776 and one or more external device(s) 790. The operations may also be performed, in part or completely, by the circuitry 715 that implements logic to carry out the operations. The circuitry 715 may be implemented as part of the one or more processors 710 or may be separate from the one or more processors 710. The processors 710 may be any processing unit, such as a digital signal processor and/or single-core or multi-core general purpose processors. The circuitry 715 may be any electronic circuit such as an application specific integrated circuit or programmable logic. The memories 745 may comprise non-volatile and/or volatile RAM, cache memory, NAND-based flash memory, long term storage (e.g., hard drive), and/or read only memory. The one or more I/O interfaces 720 may include interfaces through which a user may interact with the computing system 700. The display(s) 776 may be a touchscreen, flatscreen, monitor, television, or projector, as examples. - A user interacts with the
computing system 700 through a UI 780 in an exemplary embodiment or through the network interface(s) 730 in another non-limiting embodiment. The external device(s) 790 enable a user to interact in one exemplary embodiment with the computing system 700 and may include a mouse, trackball, keyboard, or the like. The network interfaces 730 may be wired or wireless and may implement a number of protocols, such as cellular or local area network protocols. The elements in computing system 700 may be interconnected through any technology, such as buses, traces on a board, interconnects on semiconductors, or the like. - As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium does not include a propagating wave.
- A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/964,296 US9697108B2 (en) | 2013-08-12 | 2013-08-12 | System, method, and apparatus for automatic recording and replaying of application executions |
CN201410380426.4A CN104375819A (en) | 2013-08-12 | 2014-08-05 | System, Method, and Apparatus for Automatic Recording and Replaying of Application Executions |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150046909A1 true US20150046909A1 (en) | 2015-02-12 |
US9697108B2 US9697108B2 (en) | 2017-07-04 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012118509A1 (en) * | 2011-03-03 | 2012-09-07 | Hewlett-Packard Development Company, L.P. | Testing integrated business systems |
CN106033368B (en) * | 2015-03-09 | 2019-02-22 | 北京大学 | Method for deterministic replay of a multi-core virtual machine |
US9535820B2 (en) * | 2015-03-27 | 2017-01-03 | Intel Corporation | Technologies for application validation in persistent memory systems |
CN105956478B (en) * | 2016-04-25 | 2020-01-03 | 北京珊瑚灵御科技有限公司 | Data isolation system and method based on iOS Method Swizzling technology |
WO2018010054A1 (en) * | 2016-07-11 | 2018-01-18 | SZ DJI Technology Co., Ltd. | System and method for movable object tracking and analysis |
US10860461B2 (en) | 2017-01-24 | 2020-12-08 | Transform Sr Brands Llc | Performance utilities for mobile applications |
CN107608877B (en) * | 2017-08-11 | 2021-04-09 | 上海巍擎信息技术有限责任公司 | Automatic application program interface testing method and system based on machine learning |
CN111611148B (en) * | 2019-02-22 | 2023-08-29 | 上海哔哩哔哩科技有限公司 | Compatibility concurrent test method and system for iOS system application software |
US10915426B2 (en) | 2019-06-06 | 2021-02-09 | International Business Machines Corporation | Intercepting and recording calls to a module in real-time |
US10929126B2 (en) * | 2019-06-06 | 2021-02-23 | International Business Machines Corporation | Intercepting and replaying interactions with transactional and database environments |
US11016762B2 (en) | 2019-06-06 | 2021-05-25 | International Business Machines Corporation | Determining caller of a module in real-time |
US11074069B2 (en) | 2019-06-06 | 2021-07-27 | International Business Machines Corporation | Replaying interactions with transactional and database environments with re-arrangement |
US11036619B2 (en) | 2019-06-06 | 2021-06-15 | International Business Machines Corporation | Bypassing execution of a module in real-time |
CN112416310A (en) * | 2019-08-22 | 2021-02-26 | 杭州萤石软件有限公司 | Function extension method of extended object-oriented software development kit |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5596714A (en) * | 1994-07-11 | 1997-01-21 | Pure Atria Corporation | Method for simultaneously testing multiple graphic user interface programs |
US5657438A (en) * | 1990-11-27 | 1997-08-12 | Mercury Interactive (Israel) Ltd. | Interactive system for developing tests of system under test allowing independent positioning of execution start and stop markers to execute subportion of test script |
US6029257A (en) * | 1996-12-06 | 2000-02-22 | Intergraph Corporation | Apparatus and method for testing computer systems |
US20010047510A1 (en) * | 1996-08-27 | 2001-11-29 | David J. Angel | Byte code instrumentation |
US6857120B1 (en) * | 2000-11-01 | 2005-02-15 | International Business Machines Corporation | Method for characterizing program execution by periodic call stack inspection |
US20070047924A1 (en) * | 2005-08-29 | 2007-03-01 | Eklund Don | Templatized commands in disc authoring |
US20070169055A1 (en) * | 2005-12-12 | 2007-07-19 | Bernd Greifeneder | Method and system for automated analysis of the performance of remote method invocations in multi-tier applications using bytecode instrumentation |
US20090133000A1 (en) * | 2006-10-17 | 2009-05-21 | Artoftest, Inc. | System, program product, and methods to enable visual recording and editing of test automation scenarios for web application |
US20090228970A1 (en) * | 2008-03-07 | 2009-09-10 | Nec Corporation | Gateway device having socket library for monitoring, communication method of gateway device having socket library for monitoring, and communication program of gateway device having socket library for monitoring |
US20100031196A1 (en) * | 2008-07-30 | 2010-02-04 | Autodesk, Inc. | Method and apparatus for selecting and highlighting objects in a client browser |
US20100095208A1 (en) * | 2008-04-15 | 2010-04-15 | White Alexei R | Systems and Methods for Remote Tracking and Replay of User Interaction with a Webpage |
US7721264B2 (en) * | 1994-09-30 | 2010-05-18 | Apple Inc. | Method and apparatus for storing and replaying creation history of multimedia software or other software content |
US8024706B1 (en) * | 2005-09-27 | 2011-09-20 | Teradata Us, Inc. | Techniques for embedding testing or debugging features within a service |
US20120131473A1 (en) * | 2010-11-23 | 2012-05-24 | Axeda Corporation | Scripting web services |
US20120136921A1 (en) * | 2010-11-30 | 2012-05-31 | Google Inc. | Event management for hosted applications |
US20120174076A1 (en) * | 2011-01-04 | 2012-07-05 | Zoran Rajic | Systems and methods for profiling servers |
US20120174069A1 (en) * | 2010-12-31 | 2012-07-05 | Verizon Patent And Licensing, Inc. | Graphical user interface testing systems and methods |
US20120198476A1 (en) * | 2011-01-31 | 2012-08-02 | Dmitry Markuza | Evaluating performance of an application using event-driven transactions |
US20120243745A1 (en) * | 2009-12-01 | 2012-09-27 | Cinnober Financial Technology Ab | Methods and Apparatus for Automatic Testing of a Graphical User Interface |
US9213625B1 (en) * | 2010-07-27 | 2015-12-15 | Intuit Inc. | Method and apparatus for performing automated user-interface layout testing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7665019B2 (en) | 2003-09-26 | 2010-02-16 | Nbor Corporation | Method for recording and replaying operations in a computer environment using initial conditions |
US20110078666A1 (en) | 2009-05-26 | 2011-03-31 | University Of California | System and Method for Reproducing Device Program Execution |
US9063766B2 (en) | 2011-03-16 | 2015-06-23 | Vmware, Inc. | System and method of manipulating virtual machine recordings for high-level execution and replay |
- 2013-08-12: US application US13/964,296, granted as US9697108B2 (status: not active, Expired - Fee Related)
- 2014-08-05: CN application CN201410380426.4A, published as CN104375819A (status: active, Pending)
Non-Patent Citations (3)
Title |
---|
"Call stack" retrived online 7/24/2015, Published by Wikipedia at * |
"custom class loader for android?" 5/31/2012, published by Stack Exchange, Inc. at * |
John, "Objective-C: Categories" 7/1/2008, Mac Developer Tips (macdevelopertips.com), archived 8/20/2008 at https://web.archive.org/web/20080820094110/http://macdevelopertips.com/objective-c/objective-c-categories.html * |
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10585796B2 (en) | 2010-06-29 | 2020-03-10 | Ca, Inc. | Ensuring determinism during programmatic replay in a virtual machine |
US9606820B2 (en) * | 2010-06-29 | 2017-03-28 | Ca, Inc. | Ensuring determinism during programmatic replay in a virtual machine |
US20140229947A1 (en) * | 2010-06-29 | 2014-08-14 | Ca, Inc. | Ensuring determinism during programmatic replay in a virtual machine |
US9378109B1 (en) * | 2013-08-30 | 2016-06-28 | Amazon Technologies, Inc. | Testing tools for devices |
US10360140B2 (en) * | 2013-11-27 | 2019-07-23 | Entit Software Llc | Production sampling for determining code coverage |
US10990362B1 (en) * | 2014-01-17 | 2021-04-27 | Tg Llc | Converting programs to visual representation with reading complied binary |
US20150277941A1 (en) * | 2014-02-06 | 2015-10-01 | Openpeak Inc. | Method and system for linking to shared library |
US10552852B1 (en) * | 2014-03-11 | 2020-02-04 | Vmware, Inc. | Service monitor for monitoring and tracking the performance of applications running on different mobile devices |
US9626282B2 (en) * | 2014-09-17 | 2017-04-18 | Ricoh Company, Ltd. | Data processing apparatus and data processing method |
US20160077954A1 (en) * | 2014-09-17 | 2016-03-17 | Ricoh Company, Ltd. | Data processing apparatus and data processing method |
US20160246465A1 (en) * | 2015-02-23 | 2016-08-25 | Red Hat, Inc. | Duplicating a task sequence from a graphical user interface interaction for a development application in view of trace data |
US10430309B2 (en) * | 2015-02-23 | 2019-10-01 | Red Hat, Inc. | Duplicating a task sequence from a graphical user interface interaction for a development application in view of trace data |
US11023364B2 (en) * | 2015-05-12 | 2021-06-01 | Suitest S.R.O. | Method and system for automating the process of testing of software applications |
KR102529142B1 (en) * | 2015-05-12 | 2023-05-03 | 미네우스 )( 에스.알.오. | Method and system for automating the testing process of software applications |
KR20180008581A (en) * | 2015-05-12 | 2018-01-24 | 미네우스 )( 에스.알.오. | Method and system for automating the testing process of a software application |
US10169189B2 (en) * | 2015-09-11 | 2019-01-01 | International Business Machines Corporation | Functional test automation of mobile applications interacting with native stock applications |
US10223233B2 (en) | 2015-10-21 | 2019-03-05 | International Business Machines Corporation | Application specific interaction based replays |
US10761966B2 (en) * | 2016-07-27 | 2020-09-01 | Undo Ltd. | Generating program analysis data for analysing the operation of a computer program |
US20190391905A1 (en) * | 2016-07-27 | 2019-12-26 | Undo Ltd. | Debugging systems |
US10417116B2 (en) * | 2016-07-28 | 2019-09-17 | International Business Machines Corporation | System, method, and apparatus for crowd-sourced gathering of application execution events for automatic application testing and replay |
US10061604B2 (en) | 2016-08-09 | 2018-08-28 | Red Hat, Inc. | Program execution recording and playback |
US11907645B2 (en) | 2016-09-01 | 2024-02-20 | Verint Americas Inc. | System and computer-implemented method for in-page reporting of user feedback on a website or mobile app |
US11232252B2 (en) * | 2016-09-01 | 2022-01-25 | Verint Americas Inc. | System and computer-implemented method for in-page reporting of user feedback on a website or mobile app |
WO2018085455A1 (en) * | 2016-11-01 | 2018-05-11 | App Onboard, Inc. | Dynamic graphic visualizer for application metrics |
US10474563B1 (en) * | 2016-12-28 | 2019-11-12 | Wells Fargo Bank, N.A. | System testing from production transactions |
US10997063B1 (en) | 2016-12-28 | 2021-05-04 | Wells Fargo Bank, N.A. | System testing from production transactions |
CN106897225A (en) * | 2017-02-27 | 2017-06-27 | 网易(杭州)网络有限公司 | Method and device for recording test scripts, and electronic device |
US20180322034A1 (en) * | 2017-05-05 | 2018-11-08 | Microsoft Technology Licensing, Llc | Running test scripts in multiple language platforms |
US10204030B1 (en) * | 2017-10-09 | 2019-02-12 | International Business Machines Corporation | Debug session tree recorder with generic playback |
US20190286542A1 (en) * | 2018-03-19 | 2019-09-19 | Hcl Technologies Limited | Record and replay system and method for automating one or more activities |
US11086711B2 (en) * | 2018-09-24 | 2021-08-10 | International Business Machines Corporation | Machine-trainable automated-script customization |
CN110032512A (en) * | 2019-03-28 | 2019-07-19 | 腾讯科技(深圳)有限公司 | Debugging method for a mini program, related device, and terminal |
JP2021165838A (en) * | 2020-09-29 | 2021-10-14 | ベイジン バイドゥ ネットコム サイエンス アンド テクノロジー カンパニー リミテッド | Method, apparatus, device, storage medium and program for map retrieval test |
CN112015845A (en) * | 2020-09-29 | 2020-12-01 | 北京百度网讯科技有限公司 | Method, apparatus, device, and storage medium for map retrieval testing |
JP7278324B2 (en) | 2020-09-29 | 2023-05-19 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Test method, device, storage medium, and program for map search of electronic map |
US11693764B2 (en) | 2020-09-29 | 2023-07-04 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method, apparatus, device and storage medium for map retrieval test |
EP3819789A3 (en) * | 2020-09-29 | 2021-09-22 | Beijing Baidu Netcom Science And Technology Co. Ltd. | Method, apparatus, device and storage medium for map retrieval test |
US11669347B2 (en) * | 2021-03-09 | 2023-06-06 | Oracle International Corporation | Generating video sequences from user interactions with graphical interfaces |
WO2023229317A1 (en) * | 2022-05-27 | 2023-11-30 | Samsung Electronics Co., Ltd. | A system and method to enhance launching of application at a user equipment |
US20230385181A1 (en) * | 2022-05-31 | 2023-11-30 | Sap Se | Re-usable web-objects for use with automation tools |
Also Published As
Publication number | Publication date |
---|---|
CN104375819A (en) | 2015-02-25 |
US9697108B2 (en) | 2017-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9697108B2 (en) | System, method, and apparatus for automatic recording and replaying of application executions | |
Nguyen et al. | GUITAR: an innovative tool for automated testing of GUI-driven software | |
Hu et al. | Efficiently, effectively detecting mobile app bugs with appdoctor | |
US9164870B2 (en) | Integrated fuzzing | |
Amalfitano et al. | Testing android mobile applications: Challenges, strategies, and approaches | |
Su et al. | Why my app crashes? understanding and benchmarking framework-specific exceptions of android apps | |
US8515876B2 (en) | Dry-run design time environment | |
US9268670B1 (en) | System for module selection in software application testing including generating a test executable based on an availability of root access | |
US20130117855A1 (en) | Apparatus for automatically inspecting security of applications and method thereof | |
US20120239987A1 (en) | System and Method of Manipulating Virtual Machine Recordings for High-Level Execution and Replay | |
US20130263090A1 (en) | System and method for automated testing | |
Liu et al. | Capture-replay testing for android applications | |
US20150143342A1 (en) | Functional validation of software | |
WO2020096665A2 (en) | System error detection | |
Arif et al. | Mobile Application testing tools and their challenges: A comparative study | |
Silva et al. | Characterizing mobile apps from a source and test code viewpoint | |
Tuovenen et al. | MAuto: Automatic mobile game testing tool using image-matching based approach | |
Silva et al. | An analysis of automated tests for mobile android applications | |
Mohammad et al. | A comparative analysis of quality assurance automated testing tools for windows mobile applications | |
Griebe et al. | Towards automated UI-tests for sensor-based mobile applications | |
Papoulias et al. | Mercury: Properties and design of a remote debugging solution using reflection | |
US10417116B2 (en) | System, method, and apparatus for crowd-sourced gathering of application execution events for automatic application testing and replay | |
CN115509913A (en) | Software automation test method, device, machine readable medium and equipment | |
Raut et al. | Android mobile automation framework | |
Lysne et al. | Software quality and quality management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIGMAN, JOSEPH W.;PISTOIA, MARCO;PONZO, JOHN;AND OTHERS;SIGNING DATES FROM 20130807 TO 20130809;REEL/FRAME:030988/0018 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: AIRBNB, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:056427/0193. Effective date: 20210106 |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20210704 |