US20180121400A1 - Visual state comparator

Visual state comparator

Info

Publication number
US20180121400A1
Authority
US
United States
Prior art keywords
browser
screenshot
metadata
operating system
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/800,017
Inventor
Keith Bentrup
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PayPal Inc
Original Assignee
PayPal Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PayPal Inc
Priority to US15/800,017
Publication of US20180121400A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3664 Environments for testing or debugging software
    • G06F 17/2247
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/32 Monitoring with visual or acoustical indication of the functioning of the machine
    • G06F 11/323 Visualisation of programs or trace data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3692 Test management for test results analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3696 Methods or tools to render software testable
    • G06F 17/30386
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 Indexing scheme relating to G06F3/048
    • G06F 2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces

Definitions

  • the present disclosure relates generally to data processing, and in a specific example embodiment, to providing a visual state comparator.
  • FIG. 1 is a block diagram illustrating an example environment in which embodiments of a system for providing a visual state comparator may be implemented.
  • FIG. 2 is a block diagram of an example embodiment of a test engine.
  • FIG. 3 is a block diagram of an example embodiment of a report engine.
  • FIG. 4 is an example of a test parameter user interface.
  • FIG. 5 is an example of a side-by-side visual comparison screenshot.
  • FIG. 6 is an example of an overlay visual comparison screenshot.
  • FIG. 7 is a flow diagram of an example high-level method for providing a visual state comparison.
  • FIG. 8 is a flow diagram of a more detailed method for providing the visual state comparison.
  • FIG. 9 is a simplified block diagram of a machine in an example form of a computing system within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
  • systems and methods for providing visual state comparisons are provided.
  • parameters are received from a user device of a user.
  • the parameters indicate different browser/operating system combinations for a visual comparison of screenshots of a state for the different browser/operating system combinations.
  • the screenshots along with corresponding metadata for each indicated browser/operating system combination are retrieved.
  • a user interface that visually compares at least two retrieved screenshots is provided to the user device.
  • the user interface includes a display of the corresponding metadata for the at least two retrieved screenshots.
  • a user may easily and quickly identify visual differences of webpages for different browsers and operating systems.
  • the user does not need to generate and individually review each screenshot in isolation. Therefore, one or more of the methodologies discussed herein may obviate a need for time-consuming data processing by the user.
  • This may have the technical effect of reducing computing resources used by one or more devices within the system. Examples of such computing resources include, without limitation, processor cycles, network traffic, memory usage, storage space, and power consumption.
  • FIG. 1 is a block diagram illustrating an example environment 100 in which embodiments of a system for providing a visual state comparator may be implemented.
  • a comparator system 102 is coupled via a network 104 (e.g., the Internet, wireless network, cellular network, or a Wide Area Network (WAN)) to a plurality of sources 106 .
  • the sources 106 may comprise web servers of various web sites for which the comparator system 102 is to perform visual state comparisons.
  • the source 106 may be a server for an online store that sells computer products.
  • the comparator system 102 may be configured to test various applications running on different combinations of browsers and operating systems for uniformity.
  • the application may be, in one example, a web-based process.
  • These applications may include, for example, a checkout application (e.g., webpages of a checkout flow), a registration application (e.g., webpages of a process to register with the online store), or a search application (e.g., webpages for a particular search process).
  • the comparator system 102 comprises a test engine 108 and a report engine 110 .
  • the test engine 108 allows a user (e.g., a developer or QA personnel) of the comparator system 102 to generate and run tests.
  • running a test results in captured metadata and screenshots of various states of a build for different browsers and operating systems.
  • a build is based on a change in code which may result in a different version of webpage(s) for the web-based process.
  • the states may comprise the various pages (e.g., screenshots) of the web-based process or states within the pages.
  • the test results (e.g., captured screenshots and corresponding metadata) are stored to a data storage 112 .
  • the report engine 110 accesses the data storage 112 to retrieve specific metadata and screenshots and provides the results to a device of the user for visual comparison.
  • the visual comparison may be presented in a side-by-side view or in an overlay view.
  • the test engine 108 and the report engine 110 will be discussed in further detail below.
  • the environment 100 may comprise other components that are not shown.
  • components, protocols, structures, and techniques not directly related to functions of example embodiments or that are optional have not been shown or discussed in detail.
  • a proxy device may be included in the environment 100 for managing modifications to headers as will be discussed in more detail below. However, this proxy device is optional.
  • FIG. 2 is a block diagram of an example embodiment of the test engine 108 .
  • the test engine 108 generates and runs tests of various builds of the application under test and applications for different browsers and operating systems.
  • the test engine 108 comprises a test builder module 202 , a testing module 204 , a state capture module 206 , a metadata capture module 208 , a modification module 210 , and an error tracking module 212 .
  • Alternative embodiments may comprise more or fewer modules, combine functionality of modules, or locate modules in a different location.
  • the test builder module 202 allows a user (e.g., developer or QA personnel) to create tests to be run against various web applications for different browsers and operating systems.
  • the user indicates steps in the test to the test builder module 202 .
  • the user may use Selenium (or other web application testing automation tools) to provide code for the test builder module 202 .
  • the user may also provide, to the test builder module 202 , parameters that are both configurable and fixed for the test. For example, for a checkout process test a starting product URL (Uniform Resource Locator) may be indicated as a configurable parameter in the test.
  • Based on the various user inputs, the test builder module 202 generates each test. The test is then stored for later use. In one embodiment, the generated test may be stored to the data storage 112 . It is noted that any number of tests may be generated and stored at any time prior to running the tests (e.g., allowing for batching of tests).
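As a concrete illustration, a stored test might resemble the following Selenium (Python) sketch. This is a minimal sketch under assumed conventions: the checkout flow, element locators, and state names are hypothetical and not taken from the disclosure.

    from selenium.webdriver.common.by import By

    def checkout_test(driver, start_url):
        """Hypothetical checkout-flow test: yields a name for each state so
        the caller can capture a screenshot and metadata at that point."""
        driver.get(start_url)  # the configurable starting product URL
        yield "state001_product"
        driver.find_element(By.ID, "add-to-cart").click()  # hypothetical locator
        yield "state002_cart"
        driver.find_element(By.ID, "checkout").click()  # hypothetical locator
        yield "state003_order_form"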
  • the testing module 204 sets up the test to be run by receiving parameters for the test from a user (e.g., developer or QA personnel).
  • the user running the test may be the same or different from the user that created the test.
  • the user provides the start URL to the testing module 204 and indicates different browsers and operating system combinations that the test is to be run against.
  • the user may also provide a height and width of the browsers to be compared.
  • Other parameters, as established by the user creating the tests, are also contemplated and may vary for each test.
  • the parameters are parsed into the different steps of the test by the testing module 204 .
  • a fetch URL test may be performed in which a plurality of URLs is provided to the testing module 204 .
  • the testing module 204 triggers the state capture module 206 and the metadata capture module 208 to perform the test using the various parameters.
  • the test may be run across multiple actual and virtual machines.
  • the state capture module 206 captures screenshots of a particular page (e.g., state) of the web application being tested for each indicated browser and operating system combination. That is, the state capture module 206 may open a web page or software application in multiple configurations of operating systems and browsers, and capture screenshots for comparison so that the user can see if the web page or software application is broken or misaligned on certain browsers.
  • a checkout application will have a particular checkout flow.
  • a first state of the checkout application will be a product page.
  • the next state or page may include a cart with the product selected. The state after that may be an order form (e.g., for input of customer information) and include error messaging associated with the form.
  • the state capture module 206 captures different points in time (e.g., the different states) of the process for each selected browser/operating system combination.
  • the captured screenshots are stored to a data store (e.g., data storage 112 ) by the state capture module 206 .
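One plausible shape for this capture loop, sketched with Selenium's Remote driver against a Selenium Grid hub, is shown below; the hub URL, the combination list, and the file-naming scheme are assumptions for illustration. The test argument is a generator such as the checkout_test sketch above.

    from selenium import webdriver

    COMBINATIONS = [
        ("chrome", "Windows"),
        ("chrome", "mac"),
        ("firefox", "Windows"),
    ]

    def capture_states(test, start_url, hub="http://localhost:4444/wd/hub"):
        for browser, platform in COMBINATIONS:
            options = (webdriver.ChromeOptions() if browser == "chrome"
                       else webdriver.FirefoxOptions())
            options.set_capability("platformName", platform)
            driver = webdriver.Remote(command_executor=hub, options=options)
            try:
                # Walk the flow; save one screenshot per state
                # (e.g., to a data store such as data storage 112).
                for state_name in test(driver, start_url):
                    driver.save_screenshot(f"{browser}_{platform}_{state_name}.png")
            finally:
                driver.quit()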
  • the metadata capture module 208 captures metadata for each state of each browser/operating system combination.
  • the metadata may be captured before and during the running of the test (e.g., while the state capture module 206 is capturing screenshots).
  • the metadata for each state may include, for example, the URL, the browser being used, a build number, an operating system, page/state titles chosen by a tester, server headers, or server responses.
  • the captured metadata are stored to a data store (e.g., data storage 112 ) by the metadata capture module 208 .
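A metadata record of this kind could be persisted alongside each screenshot as, for example, a small JSON document; the field names here are illustrative, not mandated by the disclosure.

    import json, time

    def capture_metadata(driver, build_number, state_name, out_path):
        record = {
            "url": driver.current_url,
            "browser": driver.capabilities.get("browserName"),
            "operating_system": driver.capabilities.get("platformName"),
            "build": build_number,
            "state": state_name,
            "captured_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        }
        with open(out_path, "w") as f:  # e.g., stored next to the screenshot
            json.dump(record, f, indent=2)
        return record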
  • the modification module 210 allows modification of headers between tests.
  • the modifications are received from the user and sent to a proxy device before the test is run.
  • the modification may instruct an Internet Explorer 8 (IE8) browser to run the test as if it were Internet Explorer 7 (IE7). This may be desirable, for example, for backward compatibility reasons.
  • the testing may still be run against IE7 and IE8 to compare these to each other.
  • however, a server (e.g., the source 106 ) may be instructed, based on the modifications from the modification module 210 , that the modification should apply for certain aspects of the test.
  • in another example, the user (e.g., developer) may be using a machine that differs in subtle ways from what will be out in production and seen by another user (e.g., a customer).
  • the modification(s) may be sent to the server by the modification module 210 to tell the server to behave differently so when the client (browser) visits, results based on the modification may be obtained.
  • accordingly, the user (e.g., developer) can compare a current environment during development to what the user thinks the environment will be during production.
  • a proxy device performs the modification transmitted by the modification module 210 .
  • the modification may indicate a modified server header for the test.
  • the proxy device may intercept a request and modify the request on-the-fly.
  • a webpage may have foreign language on it, but foreign language character sets may not be available.
  • the server may be configured to tell IE8 to treat the foreign language as a normal character set.
  • however, once the application (associated with the webpage) goes into production, the webpage may get a different character set.
  • the modification module 210 is the proxy device.
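The kind of on-the-fly header rewriting described above could be expressed, for instance, as a mitmproxy addon. The disclosure does not name a proxy implementation; the specific header values below (an IE7-compatible user agent, a forced charset) are illustrative assumptions.

    # Run with: mitmproxy -s header_mod.py
    class HeaderModifier:
        def request(self, flow):
            # Present an IE8 client as IE7 for backward-compatibility testing.
            flow.request.headers["User-Agent"] = (
                "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1)")

        def response(self, flow):
            # Force the charset of the Content-Type response header.
            ctype = flow.response.headers.get("Content-Type", "")
            if "text/html" in ctype:
                flow.response.headers["Content-Type"] = "text/html; charset=utf-8"

    addons = [HeaderModifier()]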
  • the error tracking module 212 tracks errors (e.g., JavaScript errors) during the rendering of the states.
  • web page requests are run through a proxy (e.g., proxy device) that injects JavaScript code to capture and log JavaScript errors.
  • Each screenshot can be captured with an error log that lists the URL and line numbers of any JavaScript errors.
  • the errors may be subsequently presented in a report.
  • the browser notes the error.
  • the error tracking module 212 polls the browser to find out how many errors have occurred.
  • the error tracking module 212 may note the state (e.g., cross-reference with testing data) and associate each error with the proper state.
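A sketch of this polling approach follows. In the disclosure the capture hook is injected by a proxy; here, for brevity, it is installed directly through Selenium, and the window.__jsErrors buffer name is a hypothetical choice.

    JS_HOOK = """
    window.__jsErrors = window.__jsErrors || [];
    window.onerror = function (msg, url, line) {
        window.__jsErrors.push({message: msg, url: url, line: line});
    };
    """

    def install_error_hook(driver):
        driver.execute_script(JS_HOOK)

    def poll_errors(driver, state_name):
        errors = driver.execute_script("return window.__jsErrors || [];")
        # Cross-reference each error with the state being captured.
        return [dict(err, state=state_name) for err in errors]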
  • FIG. 3 is a block diagram of an example embodiment of the report engine 110 .
  • the report engine 110 accesses the data storage 112 to retrieve stored metadata and screenshots for specific builds and browsers. These results are provided by the report engine 110 as visual comparisons. Accordingly, the report engine 110 comprises a report parameter module 302 , a data retrieval module 304 , and visual comparator modules 306 .
  • the report parameter module 302 receives parameters to obtain data for a report that provides the visual comparison. As such, the report parameter module 302 provides a user interface that allows the user to select the build, web application, or other information to retrieve data associated with a test that has been performed.
  • the data retrieval module 304 accesses the data store (e.g., data storage 112 ) and retrieves the corresponding screenshots and metadata. More specifically, the data retrieval module 304 may send a series of requests for all the information (e.g., screenshots and metadata). For example, if the user selects a particular build number (e.g., #90), an immediate request is sent (e.g., via JavaScript) to retrieve all the screenshots and metadata for that build number. Thus, once a selection of the build, state, and browser/operating system combination is received, the specific information based on parameters received by the report parameter module 302 is retrieved.
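The retrieval step might look like the following; the endpoint layout is an assumption, since the disclosure says only that a series of requests is sent for the screenshots and metadata of the selected build.

    import requests

    def retrieve_build(base_url, build_number):
        """Fetch every screenshot and its metadata for one build number."""
        index = requests.get(f"{base_url}/builds/{build_number}/states").json()
        results = []
        for entry in index:  # one entry per state/browser/OS combination
            results.append({
                "metadata": requests.get(entry["metadata_url"]).json(),
                "screenshot": requests.get(entry["screenshot_url"]).content,
            })
        return results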
  • the visual comparator modules 306 format the retrieved screenshots and metadata for visual comparison.
  • the visual comparator modules 306 determine which metadata goes with which screenshot and render a visual comparison page that includes screenshots (also referred to as tabs on the visual comparison page).
  • the visual comparison page includes a magnifying glass. As the user mouses over a particular area of a tab, the magnifying glass magnifies the view so the user can quickly compare what one tab looks like against another tab.
  • the visual comparator modules 306 comprise a metadata module 308 , a side-by-side (SBS) module 310 , an overlay module 312 , and a synch module 314 .
  • the metadata module 308 takes the metadata that was generated before and during the capture of each screenshot and uses the metadata in generating reports. For example, the metadata module 308 may identify that a particular screenshot is from a particular browser/operating system combination on a particular date for a specific build at a certain time.
  • the metadata may also include information regarding a session, JavaScript errors, and network errors for each screenshot. This metadata may be displayed on the visual comparison UI to the user.
  • the SBS module 310 provides side-by-side visual comparison of tabs for rendering in a side-by-side visual comparison user interface.
  • the SBS module 310 may also provide various buttons on the user interface to allow the user to toggle between various tabs.
  • the tabs may be toggled based on specific browser/operating system types. An example of a side-by-side visual comparison user interface is shown in FIG. 5 .
  • the overlay module 312 provides overlay visual comparison tabs for rendering in an overlay visual comparison user interface.
  • the overlay module 312 may also provide an opacity slider to allow the user to adjust the opacity of a tab in comparison with another tab.
  • Other tools may be provided by the overlay module 312 as will be discussed in more detail in connection with FIG. 6 below.
  • the synch module 314 manages the synching of screenshots between two or more comparisons. For example, a visual comparison may compare Build #91 for products in Chrome OSX and Build #91 on Firefox 3.6. Proceeding to a next visual comparison page, both screenshots (for Chrome OSX and Firefox 3.6) simultaneously change to reflect the next state in the web application. Ordinarily, the developer will want to keep the screenshots synched as the developer steps through and compares the screenshots within a web application. As such, synch may be a user interface default. However, there may be times when the developer does not want to keep the screenshots synched. For example, the developer may want to compare a form errors page to a page with no errors to see if the errors come in correctly. In another example, the developer may want to compare a tab in Build #90 to Build #91 to see if changes go in properly or how much progress has been made.
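The synch behaviour amounts to two comparison panes sharing a step-through index while synched and diverging once unsynched, as in this illustrative sketch (class and method names are assumptions):

    class Pane:
        """One comparison pane: a build plus its ordered list of states."""
        def __init__(self, build, states):
            self.build, self.states, self.index = build, states, 0

    class Comparison:
        def __init__(self, left, right, synched=True):  # synch is the default
            self.left, self.right, self.synched = left, right, synched

        def next_state(self, pane):
            pane.index = min(pane.index + 1, len(pane.states) - 1)
            if self.synched:  # advance the other pane in lockstep
                other = self.right if pane is self.left else self.left
                other.index = min(other.index + 1, len(other.states) - 1)

Unsynching and pointing the panes at, say, Build #90 and Build #91 then supports the build-to-build comparison described above.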
  • While the test engine 108 and the report engine 110 have been discussed in terms of a variety of individual modules and engines, a skilled artisan will recognize that many of the items can be combined or organized in other ways. Furthermore, not all components of the test engine 108 and the report engine 110 may have been included in FIGS. 2 and 3. In general, components, protocols, structures, and techniques not directly related to functions of example embodiments have not been shown or discussed in detail. The description given herein simply provides a variety of example embodiments to aid the reader in an understanding of the systems and methods used herein.
  • FIG. 4 is an example of a test parameter user interface (UI) 400 .
  • the test parameter UI 400 may be provided by the testing module 204 to allow the user to set the parameters for a test.
  • the test is for a web application for a checkout process (e.g., to purchase a product).
  • the test parameter UI 400 includes a starting product URL field 402 where the user enters a start URL for the checkout test.
  • the test parameter UI 400 further includes checkboxes 404 for a plurality of different browsers that the test can be run against.
  • the user may select the browsers to be tested by selecting the corresponding checkbox 404 .
  • the user has selected to test Internet Explorer 7 (IE7), Internet Explorer 8 (IE8), and Internet Explorer 9 (IE9), but Internet Explorer 6 (IE6) is not selected.
  • the user may also provide a width and height of the browsers to be compared in a width field 406 and a height field 408 , respectively.
  • An ignore patterns field 410 allows a user to input a list of patterns matching URLs (e.g., JavaScript URLs) that should not trigger a build failure. For example, it may be useful to ignore errors caused by analytics or product reviews. Instead, these errors may be listed as warnings.
  • An IEmode field 412 allows the user to modify a server header for the test. As discussed above, the modification may be sent to the server by the modification module 210 to tell the server to behave differently. If "NoChange" is selected, the server headers are unmodified.
  • a charset field 414 allows the user to set the charset value of a Content-Type response header returned from the server.
  • the setting matches all requests with headers of Content-Type that match.
  • the browser may interpret text differently. This may be an important setting for internationalization.
  • Other parameters, as established by the user creating the tests, may be provided in the test parameter UI 400 . That is, the user creating the test may establish the various parameters that are required, fixed, and changeable on the test parameter UI 400 . Once all the required parameters are entered in the test parameter UI 400 , the user may trigger the test by selecting the build button 414 . In example embodiments, the parameters may be parsed into the different steps of the test by the testing module 204 . Subsequently, a user interface having multiple tabs may be provided to the user. Each tab shows a particular build and browser/operating system combination of a web page (e.g., state).
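Gathered together, the parameters of FIG. 4 might be submitted as a payload along these lines; the keys and values are illustrative assumptions, not part of the disclosure:

    TEST_PARAMETERS = {
        "start_url": "http://example.com/product/123",  # starting product URL field 402
        "browsers": ["IE7", "IE8", "IE9"],              # checkboxes 404
        "width": 1024,                                  # width field 406
        "height": 768,                                  # height field 408
        "ignore_patterns": [r".*analytics\.js", r".*reviews\.js"],  # field 410
        "ie_mode": "NoChange",                          # IEmode field 412
        "charset": "utf-8",                             # charset field 414
    }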
  • FIG. 5 is an example of a side-by-side visual comparison user interface 500 (herein referred to as “the SBS UI”), which allows the user to compare two or more tabs next to each other.
  • the SBS UI 500 is provided to the user by the SBS module 310 based on the user selecting a SBS display mode.
  • the SBS UI 500 provides a visual comparison of Build #90 which was tested on Apr. 23, 2012 at 12:03:56 PM.
  • the metadata module 308 is able to incorporate metadata (e.g., Build #90, test date of Apr. 23, 2012, test time of 12:03:56 PM) with the screenshots that were retrieved by the data retrieval module 304 .
  • the SBS module 310 takes the various retrieved information and provides a visual comparison as shown in the SBS UI 500 .
  • a toggle environment variables button 502 allows display or hiding of metadata about the test conditions and test parameters.
  • the default may be to hide the metadata without activation of the toggle environment variables button 502 .
  • a toggle build warnings button 504 allows display or hiding of warnings that occur during the test.
  • the default is for warnings to be hidden without activation of the toggle build warning button 504 .
  • a toggle build errors button 506 allows display or hiding of errors that occurred during the test. Errors may be more severe than a warning. In example embodiments, the default is for errors to be hidden without activation of the toggle build errors button 506 .
  • the SBS UI 500 illustrates the various tabs of states retrieved for a particular test available for visual comparison.
  • the tabs may be indicated by a browser icon along with a corresponding operating system icon.
  • a first available tab is based on a Chrome browser on a Windows operating system.
  • a second displayed tab is based on the Chrome browser on an OSX (Apple) operating system.
  • the various combinations of browsers and operating systems available for visual comparison may be indicated in a selection area 508 .
  • a default may be to display all available combinations of browsers and operating systems.
  • the user may untoggle or change a display of the SBS UI 500 by selecting or deselecting combinations in the selection area 508 .
  • the user may select a toggle display button.
  • the various selected tabs are displayed side-by-side for visual comparison as indicated by the selected toggle display button.
  • a toggle all button 510 will display all of the available tabs.
  • a toggle Chrome button 512 will only display tabs associated with Chrome browsers.
  • a toggle FF (FireFox) button 514 will only display tabs associated with FireFox, while a toggle IE button 516 will only display tabs associated with Internet Explorer.
  • the user has selected all of the available tabs (using the checkbox in selection area 508 ) and selected the toggle all button 510 .
  • all of the tabs are displayed in a lower portion of the SBS UI 500 .
  • the user may then visually inspect the different tabs. While the example SBS UI 500 displays four of the retrieved tabs, more tabs may be displayed on the same screen. Alternatively, the tabs may be scrollable to allow the user to view more tabs than the screen area can provide.
  • the user may select the toggle FireFox button 514 , and only the screenshots (or tabs) obtained from a Firefox browser are displayed for visual comparison.
  • FIG. 6 is an example of an overlay visual comparison user interface 600 (herein referred to as the overlay UI).
  • the overlay UI 600 displays one tab over another tab.
  • An opacity slider 602 allows the user to see one tab darker, lighter, or equally the same as the other tab.
  • a first tab e.g., Build #90, state001_product, browser Chrome
  • the second tab e.g., Build #90, state001_product, browser Chrome_osx
  • FIG. 6 shows the Chrome tab with lighter text than the Chrome_OSX tab.
  • the tabs do not line up perfectly. Instead, the text of the Chrome tab appears slightly below and to the right of the text for the Chrome_OSX tab. However, the product images and the product title for both tabs appear to be aligned.
  • a synch box 604 allows the synching of tabs for different browsers.
  • FIG. 6 shows the tabs for the same build (Build #90) and same state (001_product), but different browsers (Chrome and Chrome OSX). Because the synch box 604 is checked, changing to the next state will automatically change both tabs to the next state (e.g., 002_cart). However, if the user does not want to keep the tabs synched, the user may uncheck one of the synch boxes 604 .
  • the developer may want to compare Build #90 to Build #91 (e.g., by unsynching the two tabs and changing the build number in a build field 606 ) to see if changes go in or how much progress has been made between builds.
  • the state may be changed by changing the selection in a state field 608 or the browser may be changed by changing the selection in a browser field 610 .
  • FIG. 7 is a flow diagram of an example high-level method 700 for providing a visual state comparison.
  • a test is built.
  • the test builder module 202 allows a user (e.g., developer or QA personnel) to create tests to be run against various web applications for different browsers and operating systems.
  • the user indicates steps in the test and parameters within the test that are both configurable and fixed. For example, a starting product URL may be indicated as a configurable parameter in a checkout process test.
  • the test builder module 202 generates the test based on the user inputs. The test is then stored for later use.
  • parameters of a test are received.
  • a user selects the test and provides at least some configurable parameters in a user interface provided by the testing module 204 .
  • the user may provide a start URL to the testing module 204 and indicates different browsers that the test is to be run against.
  • the user may also provide a height and width of the browsers to be compared.
  • the testing module 204 receives the parameters and triggers the capture of the proper screenshots and metadata in operation 706 .
  • the state capture module 206 captures screenshots of a particular page (e.g., state) of the web application being tested.
  • the state capture module 206 may open a web page or software application in multiple configurations of operating systems and browsers, and captures screenshots for comparison so a developer or quality assurance engineer can see if the web page or software application is broken or misaligned on certain browsers.
  • the metadata capture module 208 captures metadata for each state of each browser.
  • the metadata may be captured before and during the running of the test (e.g., while the state capture module 206 is capturing screenshots).
  • the metadata for each state may include, for example, the URL, the browser being used, a build number, and an operating system.
  • the captured screenshots and metadata are stored to a data store (e.g., data storage 112 ) by the metadata capture module 208 in operation 708 .
  • the visual state comparison may be performed in operation 710 .
  • operation 710 is shown in more detail. Initially, a user provides parameters for the visual comparison. Based on the parameters received by the report parameter module 302 , which identify at least one test, the data retrieval module 304 retrieves the corresponding screenshots and metadata from the data store (e.g., data storage 112 ) in operation 802 .
  • the screenshots are provided in a side-by-side UI in operation 806 .
  • the SBS module 310 provides tabs for rendering in a side-by-side visual comparison UI.
  • the SBS module 310 may also provide various buttons on the user interface to allow the user to toggle between various tabs. Activation of one of the buttons will trigger toggling in operation 808 .
  • an overlay UI is presented in operation 810 by the overlay module 312 .
  • a determination of whether a next display should be synched is made. In one embodiment, the default is to synch the tabs.
  • the overlay module 312 provides overlay visual comparison tabs that are synched for rendering in a next overlay visual comparison user interface in operation 814 . However, if the tabs are not to be synched, then a next UI is provided without synchronization in operation 816 .
  • the overlay module 312 may also provide an opacity slider, build field, state field, and browser field to allow the user to adjust the opacity, build, state, and browser of a tab in comparison with another tab in operation 818 .
  • operation 818 may occur at any time after the overlay UI is provided in operation 810 .
  • a synch operation (similar to operation 812 ) may be applicable to the side-by-side UI. That is, for example, tabs in a side-by-side UI may be unsynched to show different builds or states.
  • the method 710 of FIG. 8 is merely an example, and operations may be optional, removed, added, or practiced in a different order.
  • FIG. 9 is a block diagram illustrating components of a machine 900 , according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • FIG. 9 shows a diagrammatic representation of the machine 900 in the example form of a computer system and within which instructions 924 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 900 to perform any one or more of the methodologies discussed herein may be executed.
  • the machine 900 operates as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine 900 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a smartphone, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 924 , sequentially or otherwise, that specify actions to be taken by that machine.
  • the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions 924 to perform any one or more of the methodologies discussed herein.
  • the machine 900 includes a processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), or any suitable combination thereof), a main memory 904 , and a static memory 906 , which are configured to communicate with each other via a bus 908 .
  • the machine 900 may further include a graphics display 910 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)).
  • the machine 900 may also include an alpha-numeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), a storage unit 916 , a signal generation device 918 (e.g., a speaker), and a network interface device 920 .
  • the storage unit 916 includes a machine-readable medium 922 on which is stored the instructions 924 embodying any one or more of the methodologies or functions described herein.
  • the instructions 924 may also reside, completely or at least partially, within the main memory 904 , within the processor 902 (e.g., within the processor's cache memory), or both, during execution thereof by the machine 900 . Accordingly, the main memory 904 and the processor 902 may be considered as machine-readable media.
  • the instructions 924 may be transmitted or received over a network 926 via the network interface device 920 .
  • the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 922 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions.
  • machine-readable medium shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions for execution by a machine (e.g., machine 900 ), such that the instructions, when executed by one or more processors of the machine (e.g., processor 902 ), cause the machine to perform any one or more of the methodologies described herein.
  • a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory, an optical medium, a magnetic medium, or any suitable combination thereof.
  • the instructions 924 may further be transmitted or received over a communications network 926 using a transmission medium via the network interface device 920 and utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, POTS networks, and wireless data networks (e.g., WiFi and WiMax networks).
  • the term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules.
  • a “hardware module” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner.
  • in some embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
  • a hardware module may be implemented mechanically, electronically, or any suitable combination thereof.
  • a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations.
  • a hardware module may be a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC.
  • a hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
  • a hardware module may include software encompassed within a general-purpose processor or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • the phrase "hardware module" should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
  • “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
  • Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
  • processors may be temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein.
  • as used herein, "processor-implemented module" refers to a hardware module implemented using one or more processors.
  • the methods described herein may be at least partially processor-implemented, a processor being an example of hardware.
  • the operations of a method may be performed by one or more processors or processor-implemented modules.
  • the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS).
  • at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)).
  • the performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines.
  • the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
  • Although the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of embodiments of the present invention.
  • Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is, in fact, disclosed.
  • the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present invention. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present invention as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Abstract

In various example embodiments, systems and methods for providing visual state comparisons are provided. In example embodiments, parameters are received from a user device of a user. The parameters indicate different browser/operating system combinations for a visual comparison of screenshots of a state for the different browser/operating system combinations. Based on the different browser/operating system combinations indicated by the parameters, the screenshots along with corresponding metadata for each indicated browser/operating system combination are retrieved. A user interface that visually compares at least two retrieved screenshots is provided to the user device. The user interface includes a display of the corresponding metadata for the at least two retrieved screenshots.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 14/599,888, filed on Jan. 19, 2015, now U.S. Pat. No. 9,805,007, issued Oct. 31, 2017; which is a continuation of U.S. patent application Ser. No. 13/610,082, filed on Sep. 11, 2012, now U.S. Pat. No. 8,938,683, issued Jan. 20, 2015; the disclosures of all of these applications and patents are incorporated by reference herein.
  • FIELD
  • The present disclosure relates generally to data processing, and in a specific example embodiment, to providing a visual state comparator.
  • BACKGROUND
  • Oftentimes, developers and quality assurance (QA) personnel want to test how an application will be displayed on different browsers and operating systems. These personnel may individually capture screenshots from each combination of browsers and operating systems, and view the screenshots one at a time.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various ones of the appended drawings merely illustrate example embodiments of the present invention and cannot be considered as limiting its scope.
  • FIG. 1 is a block diagram illustrating an example environment in which embodiments of a system for providing a visual state comparator may be implemented.
  • FIG. 2 is a block diagram of an example embodiment of a test engine.
  • FIG. 3 is a block diagram of an example embodiment of a report engine.
  • FIG. 4 is an example of a test parameter user interface.
  • FIG. 5 is an example of a side-by-side visual comparison screenshot.
  • FIG. 6 is an example of an overlay visual comparison screenshot.
  • FIG. 7 is a flow diagram of an example high-level method for providing a visual state comparison.
  • FIG. 8 is a flow diagram of a more detailed method for providing the visual state comparison.
  • FIG. 9 is a simplified block diagram of a machine in an example form of a computing system within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
  • DETAILED DESCRIPTION
  • The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the present invention. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques have not been shown in detail.
  • In example embodiments, systems and methods for providing visual state comparisons are provided. In example embodiments, parameters are received from a user device of a user. The parameters indicate different browser/operating system combinations for a visual comparison of screenshots of a state for the different browser/operating system combinations. Based on the different browser/operating system combinations indicated by the parameters, the screenshots along with corresponding metadata for each indicated browser/operating system combination are retrieved. A user interface that visually compares at least two retrieved screenshots is provided to the user device. The user interface includes a display of the corresponding metadata for the at least two retrieved screenshots.
  • By using embodiments of the present invention, a user may easily and quickly identify visual differences of webpages for different browsers and operating systems. The user does not need to generate and individually review each screenshot in isolation. Therefore, one or more of the methodologies discussed herein may obviate a need for time consuming data processing by the user. This may have the technical effect of reducing computing resources used by one or more devices within the system. Examples of such computing resources include, without limitation, processor cycles, network traffic, memory usage, storage space, and power consumption.
  • FIG. 1 is a block diagram illustrating an example environment 100 in which embodiments of a system for providing a visual state comparator may be implemented. In example embodiments, a comparator system 102 is coupled via a network 104 (e.g., the Internet, wireless network, cellular network, or a Wide Area Network (WAN)) to a plurality of sources 106. The sources 106 may comprise web servers of various web sites for which the comparator system 102 is to perform visual state comparisons. For example, the source 106 may be a server for an online store that sells computer products. The comparator system 102 may be configured to test various applications running on different combinations of browsers and operating systems for uniformity. The application may be, in one example, a web-based process. These applications may include, for example, a checkout application (e.g., webpages of a checkout flow), a registration application (e.g., webpages of a process to register with the online store), or a search application (e.g., webpages for a particular search process).
  • To enable the comparator system 102 to provide visual state comparisons, the comparator system 102 comprises a test engine 108 and a report engine 110. The test engine 108 allows a user (e.g., a developer or QA personnel) of the comparator system 102 to generate and run tests. Running a test results in captured metadata and screenshots of various states of a build for different browsers and operating systems. A build is based on a change in code which may result in a different version of webpage(s) for the web-based process. The states may comprise the various pages (e.g., screenshots) of the web-based process or states within the pages.
  • The test results (e.g., captured screenshots and corresponding metadata) are stored to a data storage 112. Subsequently, the report engine 110 accesses the data storage 112 to retrieve specific metadata and screenshots and provides the results to a device of the user for visual comparison. In example embodiments, the visual comparison may be presented in a side-by-side view or in an overlay view. The test engine 108 and the report engine 110 will be discussed in further detail below.
  • The environment 100 may comprise other components that are not shown. In general, components, protocols, structures, and techniques not directly related to functions of example embodiments or that are optional have not been shown or discussed in detail. For example, a proxy device may be included in the environment 100 for managing modifications to headers as will be discussed in more detail below. However, this proxy device is optional.
  • FIG. 2 is a block diagram of an example embodiment of the test engine 108. The test engine 108 generates and runs tests of various builds of the application under test and applications for different browsers and operating systems. In example embodiments, the test engine 108 comprises a test builder module 202, a testing module 204, a state capture module 206, a metadata capture module 208, a modification module 210, and an error tracking module 212. Alternative embodiments may comprise more or fewer modules, combine functionality of modules, or locate modules in a different location.
  • The test builder module 202 allows a user (e.g., developer or QA personnel) to create tests to be run against various web applications for different browsers and operating systems. In example embodiments, the user indicates steps in the test to the test builder module 202. In one example, the user may use Selenium (or other web application testing automation tools) to provide code for the test builder module 202. The user may also provide, to the test builder module 202, parameters that are both configurable and fixed for the test. For example, for a checkout process test a starting product URL (Uniform Resource Locator) may be indicated as a configurable parameter in the test. Based on the various user inputs, the test builder module 202 generates each test. The test is then stored for later use. In one embodiment, the generated test may be stored to the data storage 112. It is noted that any number of tests may be generated and stored at any time prior to running the tests (e.g., allowing for batching of tests).
  • The testing module 204 sets up the test to be run by receiving parameters for the test from a user (e.g., developer or QA personnel). The user running the test may be the same or different from the user that created the test. For example, the user provides the start URL to the testing module 204 and indicates different browsers and operating system combinations that the test is to be run against. The user may also provide a height and width of the browsers to be compared. Other parameters, as established by the user creating the tests, are also contemplated and may vary for each test. In example embodiments, the parameters are parsed into the different steps of the test by the testing module 204. In one embodiment, a fetch URL test may be performed in which a plurality of URLs is provided to the testing module 204.
  • The testing module 204 triggers the state capture module 206 and the metadata capture module 208 to perform the test using the various parameters. The test may be run across multiple actual and virtual machines.
• The state capture module 206 captures screenshots of a particular page (e.g., state) of the web application being tested for each indicated browser and operating system combination. That is, the state capture module 206 may open a web page or software application in multiple configurations of operating systems and browsers, and capture screenshots for comparison so that the user can see if the web page or software application is broken or misaligned on certain browsers. For example, a checkout application will have a particular checkout flow. A first state of the checkout application will be a product page. The next state or page may include a cart with the product selected. A subsequent state may be an order form (e.g., for input of customer information) and may include error messaging associated with the form. Subsequently, a confirm state and a print receipt state may follow. After the checkout process is completed, the test may return to an account state to check an order history to ensure the order is present. Accordingly, during a multi-step flow, the state capture module 206 captures different points in time (e.g., the different states) of the process for each selected browser/operating system combination. The captured screenshots are stored to a data store (e.g., data storage 112) by the state capture module 206.
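• A minimal sketch of such capture, assuming Selenium's Python bindings and a hypothetical Selenium Grid endpoint, follows; the grid URL and capability names are assumptions, and newer Selenium releases pass per-browser Options objects rather than the Selenium 3-style desired_capabilities shown here for brevity.

from selenium import webdriver

GRID_URL = "http://selenium-grid.example.com:4444/wd/hub"  # hypothetical endpoint

def capture_state(browser, platform, url, state_name, width, height):
    # Open the page in the requested browser/operating system combination.
    driver = webdriver.Remote(
        command_executor=GRID_URL,
        desired_capabilities={"browserName": browser, "platform": platform},
    )
    try:
        driver.set_window_size(width, height)
        driver.get(url)
        path = "%s_%s_%s.png" % (state_name, browser, platform)
        driver.save_screenshot(path)   # one screenshot per state and combination
        return path
    finally:
        driver.quit()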
  • The metadata capture module 208 captures metadata for each state of each browser/operating system combination. The metadata may be captured before and during the running of the test (e.g., while the state capture module 206 is capturing screenshots). The metadata for each state may include, for example, the URL, the browser being used, a build number, an operating system, page/state titles chosen by a tester, server headers, or server responses. The captured metadata are stored to a data store (e.g., data storage 112) by the metadata capture module 208.
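• The corresponding metadata record might look like the following sketch; the field names are illustrative assumptions mirroring the items listed above.

import datetime

def capture_metadata(driver, browser, platform, build_number, state_title):
    # Assumed metadata stored alongside each screenshot for later comparison.
    return {
        "url": driver.current_url,
        "browser": browser,
        "operating_system": platform,
        "build": build_number,
        "state": state_title,
        "captured_at": datetime.datetime.utcnow().isoformat(),
    }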
  • The modification module 210 allows modification of headers between tests. The modifications are received from the user and sent to a proxy device before the test is run. For instance, the modification may instruct an Internet Explorer 8 (IE8) browser to run the test as if it were Internet Explorer 7 (IE7). This may be desirable, for example, for backward compatibility reasons. The testing may still be run against IE7 and IE8 to compare these to each other. However, a server (e.g., the source 106) may be instructed based on the modifications from the modification module 210 that for certain aspects of the test the modification should apply.
• In another example, the user (e.g., developer) may be using a machine that differs in subtle ways from the production environment that another user (e.g., a customer) will see. As such, the modification(s) may be sent to the server by the modification module 210 to tell the server to behave differently so that, when the client (browser) visits, results based on the modification may be obtained. Accordingly, the user (e.g., developer) can compare a current environment during development to what the user thinks the environment will be during production.
• In one embodiment, a proxy device performs the modification transmitted by the modification module 210. The modification may indicate a modified server header for the test. The proxy device may intercept a request and modify the request on the fly. For example, a webpage may contain foreign-language text, but the foreign-language character sets may not be available. The server may be configured to tell IE8 to treat the foreign language as a normal character set. However, once the application (associated with the webpage) goes into production, the webpage may get a different character set. Comparing the two states, the developer may see boxes where Japanese characters should be in one test and the actual Japanese characters in the other. In one embodiment, the modification module 210 is the proxy device.
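• One way to realize such an intercepting proxy is with an off-the-shelf tool; the sketch below uses the third-party mitmproxy addon API purely as an illustration (the disclosed proxy device is not tied to any particular tool), and the specific header values are assumptions.

# Illustrative mitmproxy addon (run with: mitmdump -s modify_headers.py).
# Rewrites response headers on the fly, as the modification module 210
# might instruct the proxy device to do.

def response(flow):
    # Ask IE8 to render as if it were IE7 via the X-UA-Compatible server header.
    flow.response.headers["X-UA-Compatible"] = "IE=EmulateIE7"
    # Force a charset on matching Content-Type response headers (assumed value).
    ctype = flow.response.headers.get("Content-Type", "")
    if ctype.startswith("text/html"):
        flow.response.headers["Content-Type"] = "text/html; charset=Shift_JIS"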
  • The error tracking module 212 tracks errors (e.g., JavaScript errors) during the rendering of the states. In capturing the screenshots, web page requests are run through a proxy (e.g., proxy device) that injects JavaScript code to capture and log JavaScript errors. Each screenshot can be captured with an error log that lists the URL and line numbers of any JavaScript errors. Thus, each time a JavaScript error occurs in a browser and state, the error is noted to a log by the error tracking module 212. The errors may be subsequently presented in a report. In one embodiment, whenever a JavaScript error occurs, the browser notes the error. At various intervals, the error tracking module 212 polls the browser to find out how many errors have occurred. The error tracking module 212 may note the state (e.g., cross-reference with testing data) and associate each error with the proper state.
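• A minimal sketch of the polling approach follows, assuming the collector script is simply injected through Selenium after page load (the proxy-injected variant described above would install it before any page script runs, catching errors this version misses); all identifiers are illustrative.

INSTALL_COLLECTOR = """
window.__jsErrors = window.__jsErrors || [];
window.addEventListener('error', function (e) {
  window.__jsErrors.push(e.filename + ':' + e.lineno + ' ' + e.message);
});
"""

def start_error_tracking(driver):
    # Install the error collector in the page (post-load injection assumed).
    driver.execute_script(INSTALL_COLLECTOR)

def poll_errors(driver, state_name, log):
    # Retrieve accumulated errors and associate each with the current state.
    errors = driver.execute_script("return window.__jsErrors || [];")
    for err in errors:
        log.append({"state": state_name, "error": err})
    driver.execute_script("window.__jsErrors = [];")   # reset between polls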
  • FIG. 3 is a block diagram of an example embodiment of the report engine 110. The report engine 110 accesses the data storage 112 to retrieve stored metadata and screenshots for specific builds and browsers. These results are provided by the report engine 110 as visual comparisons. Accordingly, the report engine 110 comprises a report parameter module 302, a data retrieval module 304, and visual comparator modules 306.
  • The report parameter module 302 receives parameters to obtain data for a report that provides the visual comparison. As such, the report parameter module 302 provides a user interface that allows the user to select the build, web application, or other information to retrieve data associated with a test that has been performed.
• Based on the parameters received by the report parameter module 302, the data retrieval module 304 accesses the data store (e.g., data storage 112) and retrieves the corresponding screenshots and metadata. More specifically, the data retrieval module 304 may send a series of requests for all the information (e.g., screenshots and metadata). For example, if the user selects a particular build number (e.g., #90), an immediate request is sent (e.g., via JavaScript) to retrieve all the screenshots and metadata for the particular build number. Thus, once a selection of the build, state, and browser/operating system combination is received, the specific information based on the parameters received by the report parameter module 302 is retrieved.
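• Treating the data store as a simple collection of records, the retrieval step might be sketched as follows; the record layout is an assumption.

def retrieve_build(records, build_number):
    # Fetch every screenshot/metadata record captured for the selected build.
    return [r for r in records if r["metadata"]["build"] == build_number]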
• The visual comparator modules 306 format the retrieved screenshots and metadata for visual comparison. The visual comparator modules 306 determine which metadata goes with which screenshot and render a visual comparison page that includes screenshots (also referred to as tabs on the visual comparison page). In one embodiment, the visual comparison page includes a magnifying glass. As the user mouses over a particular area of a tab, the magnifying glass magnifies the view so the user can quickly examine how one tab looks compared to another tab. Accordingly, the visual comparator modules 306 comprise a metadata module 308, a side-by-side (SBS) module 310, an overlay module 312, and a synch module 314.
• The metadata module 308 takes the metadata that was generated before and during the capture of each screenshot and uses the metadata in generating reports. For example, the metadata module 308 may identify that a particular screenshot is from a particular browser/operating system combination on a particular date for a specific build at a certain time. The metadata may also include information regarding a session, JavaScript errors, and network errors for each screenshot. This metadata may be displayed on the visual comparison UI to the user.
• The SBS module 310 provides side-by-side visual comparison of tabs for rendering in a side-by-side visual comparison user interface. The SBS module 310 may also provide various buttons on the user interface to allow the user to toggle between various tabs. In one embodiment, the tabs may be toggled based on specific browser/operating system types. An example of a side-by-side visual comparison user interface is shown in FIG. 5.
  • The overlay module 312 provides overlay visual comparison tabs for rendering in an overlay visual comparison user interface. The overlay module 312 may also provide an opacity slider to allow the user to adjust the opacity of a tab in comparison with another tab. Other tools may be provided by the overlay module 312 as will be discussed in more detail in connection with FIG. 6 below.
• The synch module 314 manages the synching of screenshots between two or more comparisons. For example, a visual comparison may compare Build #91 for products in Chrome OSX and Build #91 in Firefox 3.6. Proceeding to a next visual comparison page, both screenshots (for Chrome OSX and Firefox 3.6) simultaneously change to reflect the next state in the web application. Ordinarily, the developer will want to keep the screenshots synched as the developer steps through and compares the screenshots within a web application. As such, synch may be a user interface default. However, there may be times when the developer does not want to keep the screenshots synched. For example, the developer may want to compare a form errors page to a page with no errors to see if the errors come in correctly. In another example, the developer may want to compare a tab in Build #90 to Build #91 to see if changes go in properly or how much progress has been made.
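• The synch behavior can be sketched as below, assuming each comparison pane tracks its own state and a synch flag; the state names reuse the checkout example, and all identifiers are illustrative.

STATES = ["001_product", "002_cart", "003_order_form", "004_confirm"]

def advance(panes):
    # Advance every pane that has synch enabled to the next state in the flow;
    # unsynched panes keep their current state (e.g., a different build or page).
    for pane in panes:
        if pane["synched"]:
            i = STATES.index(pane["state"])
            pane["state"] = STATES[min(i + 1, len(STATES) - 1)]
    return panes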
  • Although the various components of the test engine 108 and the report engine 110 have been discussed in terms of a variety of individual modules and engines, a skilled artisan will recognize that many of the items can be combined or organized in other ways. Furthermore, not all components of the test engine 108 and the report engine 110 may have been included in FIGS. 2 and 3. In general, components, protocols, structures, and techniques not directly related to functions of example embodiments have not been shown or discussed in detail. The description given herein simply provides a variety of example embodiments to aid the reader in an understanding of the systems and methods used herein.
  • FIG. 4 is an example of a test parameter user interface (UI) 400. The test parameter UI 400 may be provided by the testing module 204 to allow the user to set the parameters for a test. In the present example, the test is for a web application for a checkout process (e.g., to purchase a product). The test parameter UI 400 includes a starting product URL field 402 where the user enters a start URL for the checkout test.
• The test parameter UI 400 further includes checkboxes 404 for a plurality of different browsers that the test can be run against. The user may select the browsers to be tested by selecting the corresponding checkbox 404. For example, the user has selected to test Internet Explorer 7 (IE7), Internet Explorer 8 (IE8), and Internet Explorer 9 (IE9), but Internet Explorer 6 (IE6) is not selected. The user may also provide a width and height of the browsers to be compared in a width field 406 and height field 408, respectively.
• An ignore patterns field 410 allows a user to input a list of patterns matching URLs (e.g., JavaScript URLs) that should not trigger a build failure. For example, it may be useful to ignore errors caused by analytics or product reviews. Instead, these errors may be listed as warnings.
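• A plausible implementation of this downgrade, assuming the patterns are regular expressions matched against the URL that produced the error, might look like the following; the pattern values are assumptions.

import re

IGNORE_PATTERNS = [re.compile(p) for p in (r".*analytics.*", r".*reviews.*")]

def classify(error_url):
    # Errors from matching URLs are listed as warnings, not build failures.
    if any(p.match(error_url) for p in IGNORE_PATTERNS):
        return "warning"
    return "error"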
  • An IEmode field 412 allows the user to modify a server header for the test. As discussed above, the modification may be sent to the server by the modification module 210 to tell the server to behave differently. By selecting “NoChange,” the server headers are unmodified.
• A charset field 414 allows the user to set the charset value of a Content-Type response header returned from the server. The setting matches all requests with Content-Type headers that match. By modifying the charset value of the Content-Type response header, the browser may interpret text differently. This may be an important setting for internationalization. By selecting “NoChange,” the server headers are unmodified.
  • Other parameters, as established by the user creating the tests, may be provided in the test parameter UI 400. That is, the user creating the test may establish the various parameters that are required, fixed, and changeable on the test parameter UI 400. Once all the required parameters are entered in the test parameter UI 400, the user may trigger the test by selecting the build button 414. In example embodiments, the parameters may be parsed into the different steps of the test by the testing module 204. Subsequently a user interface having multiple tabs may be provided to the user. Each tab shows a particular build and browser/operating system combination of a web page (e.g., state).
• FIG. 5 is an example of a side-by-side visual comparison user interface 500 (herein referred to as “the SBS UI”), which allows the user to compare two or more tabs next to each other. The SBS UI 500 is provided to the user by the SBS module 310 based on the user selecting a SBS display mode. Continuing with the test that was established with parameters in FIG. 4, the SBS UI 500 provides a visual comparison of Build #90, which was tested on Apr. 23, 2012 at 12:03:56 PM. As such, the metadata module 308 is able to incorporate metadata (e.g., Build #90, test date of Apr. 23, 2012, test time of 12:03:56 PM) with the screenshots that were retrieved by the data retrieval module 304. The SBS module 310 then takes the various retrieved information and provides the visual comparison shown in the SBS UI 500.
  • A toggle environment variables button 502 allows display or hiding of metadata about the test conditions and test parameters. In example embodiments, the default may be to hide the metadata without activation of the toggle environment variables button 502.
  • A toggle build warnings button 504 allows display or hiding of warnings that occur during the test. In example embodiments, the default is for warnings to be hidden without activation of the toggle build warning button 504.
  • A toggle build errors button 506 allows display or hiding of errors that occurred during the test. Errors may be more severe than a warning. In example embodiments, the default is for errors to be hidden without activation of the toggle build errors button 506.
  • The SBS UI 500 illustrates the various tabs of states retrieved for a particular test available for visual comparison. The tabs may be indicated by a browser icon along with a corresponding operating system icon. For example, a first available tab is based on a Chrome browser on a Windows operating system, while a second displayed tab is based on the Chrome browser on an OSX (Apple) operating system. The various combinations of browsers and operating systems available for visual comparison may be indicated in a selection area 508. In an initial SBS UI 500, a default may be to display all available combinations of browsers and operating systems. Subsequently, the user may untoggle or change a display of the SBS UI 500 by selecting or deselecting combinations in the selection area 508. Once the combinations are selected or deselected (e.g., by deselecting a corresponding checkbox), the user may select a toggle display button. As a result, the various selected tabs are displayed side-by-side for visual comparison as indicated by the selected toggle display button. For example, a toggle all button 510 will display all of the available tabs. A toggle Chrome button 512 will only display tabs associated with Chrome browsers. Similarly, a toggle FF (FireFox) button 514 will only display tabs associated with FireFox, while a toggle IE button 516 will only display tabs associated with Internet Explorer.
  • In the present example, the user has selected all of the available tabs (using the checkbox in selection area 508) and selected the toggle all button 510. As a result, all of the tabs are displayed in a lower portion of the SBS UI 500. The user may then visually inspect the different tabs. While the example SBS UI 500 displays four of the retrieved tabs, more tabs may be displayed on the same screen. Alternatively, the tabs may be scrollable to allow the user to view more tabs than the screen area can provide. In another example, the user may select the toggle FireFox button 514, and only the screenshots (or tabs) obtained from a Firefox browser are displayed for visual comparison.
• FIG. 6 is an example of an overlay visual comparison user interface 600 (herein referred to as the overlay UI). The overlay UI 600 displays one tab over another tab. An opacity slider 602 allows the user to render one tab darker than, lighter than, or the same as the other tab. As shown in FIG. 6, a first tab (e.g., Build #90, state001_product, browser Chrome) is set at roughly half opacity using the opacity slider 602 a. The second tab (e.g., Build #90, state001_product, browser Chrome_osx) is set at full opacity using the opacity slider 602 b. As a result, FIG. 6 shows the Chrome tab with lighter text than the Chrome_OSX tab. In the present example, the tabs do not line up perfectly. Instead, the text of the Chrome tab appears slightly below and to the right of the text for the Chrome_OSX tab. However, the product images and the product title for both tabs appear to be aligned.
• As shown in FIG. 6, a synch box 604 allows the synching of tabs for different browsers. For example, FIG. 6 shows the tabs for the same build (Build #90) and same state (001_product), but different browsers (Chrome and Chrome OSX). Because the synch box 604 is checked, changing to the next state will automatically change both tabs to the next state (e.g., 002_cart). However, if the user does not want to keep the tabs synched, the user may uncheck one of the synch boxes 604. For example, the developer may want to compare Build #90 to Build #91 (e.g., by unsynching the two tabs and changing the build number in a build field 606) to see if changes go in or how much progress has been made between builds. Similarly, the state may be changed by changing the selection in a state field 608, or the browser may be changed by changing the selection in a browser field 610.
• FIG. 7 is a flow diagram of an example high-level method 700 for providing a visual state comparison. In operation 702, a test is built. In example embodiments, the test builder module 202 allows a user (e.g., developer or QA personnel) to create tests to be run against various web applications for different browsers and operating systems. In example embodiments, the user indicates steps in the test and parameters within the test that are either configurable or fixed. For example, a starting product URL may be indicated as a configurable parameter in a checkout process test. The test builder module 202 generates the test based on the user inputs. The test is then stored for later use.
  • In operation 704, parameters of a test are received. In example embodiments, a user selects the test and provides at least some configurable parameters in a user interface provided by the testing module 204. For example, the user may provide a start URL to the testing module 204 and indicates different browsers that the test is to be run against. The user may also provide a height and width of the browsers to be compared.
• The testing module 204 receives the parameters and triggers the capture of the proper screenshots and metadata in operation 706. In example embodiments, the state capture module 206 captures screenshots of a particular page (e.g., state) of the web application being tested. The state capture module 206 may open a web page or software application in multiple configurations of operating systems and browsers, and capture screenshots for comparison so that a developer or quality assurance engineer can see if the web page or software application is broken or misaligned on certain browsers. Similarly, the metadata capture module 208 captures metadata for each state of each browser. The metadata may be captured before and during the running of the test (e.g., while the state capture module 206 is capturing screenshots). The metadata for each state may include, for example, the URL, the browser being used, a build number, and an operating system. The captured screenshots and metadata are stored to a data store (e.g., data storage 112) by the metadata capture module 208 in operation 708.
• At any time after the storing of the captured screenshots and metadata to the data store, the visual state comparison may be performed in operation 710. Referring now to FIG. 8, operation 710 is shown in more detail. Initially, a user provides parameters for the visual comparison. Based on the parameters received by the report parameter module 302 to identify at least one test, the data retrieval module 304 retrieves the corresponding screenshots and metadata from the data store (e.g., data storage 112) in operation 802.
  • In operation 804, a determination is made as to whether the visual comparison should be displayed in an overlay or side-by-side visual comparison. Accordingly, one or more visual comparator modules 306 may make this determination based on a user input (e.g., selection of a corresponding button).
• If the visual comparison is to be shown in a side-by-side format, then the screenshots are provided in a side-by-side UI in operation 806. In example embodiments, the SBS module 310 provides tabs for rendering in a side-by-side visual comparison UI. The SBS module 310 may also provide various buttons on the user interface to allow the user to toggle between various tabs. Activation of one of the buttons will trigger toggling in operation 808.
  • If the visual comparison is to be shown in an overlay format, an overlay UI is presented in operation 810 by the overlay module 312. In operation 812, a determination of whether a next display should be synched is made. In one embodiment, the default is to synch the tabs. As such, the overlay module 312 provides overlay visual comparison tabs that are synched for rendering in a next overlay visual comparison user interface in operation 814. However, if the tabs are not to be synched, then a next UI is provided without synchronization in operation 816.
  • The overlay module 312 may also provide an opacity slider, build field, state field, and browser field to allow the user to adjust the opacity, build, state, and browser of a tab in comparison with another tab in operation 818. It is noted that operation 818 may occur at any time after the overlay UI is provided in operation 810. It is also noted that in one embodiment, a synch operation (similar to operation 812) may be applicable to the side-by-side UI. That is, for example, tabs in a side-by-side UI may be unsynched to show different builds or states. As such, the method 710 of FIG. 8 is merely an example, and operations may be optional, removed, added, or practiced in a different order.
  • FIG. 9 is a block diagram illustrating components of a machine 900, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 9 shows a diagrammatic representation of the machine 900 in the example form of a computer system and within which instructions 924 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 900 to perform any one or more of the methodologies discussed herein may be executed. In alternative embodiments, the machine 900 operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 900 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a smartphone, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 924, sequentially or otherwise, that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions 924 to perform any one or more of the methodologies discussed herein.
  • The machine 900 includes a processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), or any suitable combination thereof), a main memory 904, and a static memory 906, which are configured to communicate with each other via a bus 908. The machine 900 may further include a graphics display 910 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)). The machine 900 may also include an alpha-numeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), a storage unit 916, a signal generation device 918 (e.g., a speaker), and a network interface device 920.
  • The storage unit 916 includes a machine-readable medium 922 on which is stored the instructions 924 embodying any one or more of the methodologies or functions described herein. The instructions 924 may also reside, completely or at least partially, within the main memory 904, within the processor 902 (e.g., within the processor's cache memory), or both, during execution thereof by the machine 900. Accordingly, the main memory 904 and the processor 902 may be considered as machine-readable media. The instructions 924 may be transmitted or received over a network 926 via the network interface device 920.
  • As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 922 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions for execution by a machine (e.g., machine 900), such that the instructions, when executed by one or more processors of the machine (e.g., processor 902), cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory, an optical medium, a magnetic medium, or any suitable combination thereof.
  • The instructions 924 may further be transmitted or received over a communications network 926 using a transmission medium via the network interface device 920 and utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, POTS networks, and wireless data networks (e.g., WiFi and WiMax networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
  • Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
  • In some embodiments, a hardware module may be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware module may be a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module may include software encompassed within a general-purpose processor or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
  • Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
  • The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors.
  • Similarly, the methods described herein may be at least partially processor-implemented, a processor being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)).
  • The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
  • Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of embodiments of the present invention. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is, in fact, disclosed.
  • The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
  • As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present invention. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present invention as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (21)

1. (canceled)
2. A method comprising:
building a test for a web application;
obtaining a modification to a server header of a server system associated with a first run of the test with respect to a first browser/operating system combination, the modification to the server header adjusting behavior of the server system, the server system including one or more servers;
performing the first run of the test, wherein performing the first run of the test includes:
communicating, to the server system, the modification to the server header such that behavior of the server system is adjusted according to the modification during the first run of the test;
capturing a first screenshot of a state of the web application that is encountered during the first run; and
capturing first metadata that corresponds to the state as encountered during the first run; and
storing, in data storage, the first metadata and the first screenshot in association with the first browser/operating system combination such that the first screenshot and the first metadata are available for a visual comparison against a second screenshot and corresponding second metadata of the state as encountered during a second run of the test with respect to a second browser/operating system combination.
3. The method of claim 2, further comprising:
retrieving the first screenshot, the first metadata, the second screenshot, and the second metadata from the data storage; and
causing presentation, on a user interface of a user device, of a visual comparison of the retrieved first screenshot and first metadata compared against the retrieved second screenshot and second metadata.
4. The method of claim 3, further comprising:
receiving, from the user device, first parameters indicating the first browser/operating system combination and second parameters indicating the second browser/operating system combination for a visual comparison of the state with respect to the first browser/operating system combination and the second browser/operating system combination; and
retrieving the first screenshot, the first metadata, the second screenshot, and the second metadata in response to receiving the first parameters and the second parameters.
5. The method of claim 3, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in a synchronized manner along the state and a corresponding build of the web application.
6. The method of claim 3, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in a side-by-side visual format.
7. The method of claim 3, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in an overlay visual format.
8. The method of claim 2, wherein the second browser/operating system combination is the same as the first browser/operating system combination and wherein performing the second run of the test excludes communicating to the server system the modification to the server header such that the behavior of the server system is not adjusted according to the modification during the second run.
9. One or more non-transitory machine-readable storage media having instructions stored thereon that are executable by one or more processors to cause a system to perform operations, the operations comprising:
performing a first run of a test for a web application with respect to a first browser/operating system combination, wherein performing the first run of the test includes:
adjusting, for the first run of the test, behavior of a browser of the first browser/operating system combination based on a modification to a header associated with the browser;
capturing a first screenshot of a state of the web application that is encountered during the first run; and
capturing first metadata that corresponds to the state as encountered during the first run; and
storing, in data storage, the first metadata and the first screenshot in association with the first browser/operating system combination such that the first screenshot and the first metadata are available for a visual comparison against a second screenshot and corresponding second metadata of the state as encountered during a second run of the test with respect to a second browser/operating system combination.
10. The one or more non-transitory machine-readable storage media of claim 9, wherein the modification to the header indicates that adjusting the behavior of the browser includes adjusting a version of the browser.
11. The one or more non-transitory machine-readable storage media of claim 9, wherein the second browser/operating system combination is the same as the first browser/operating system combination and wherein performing the second run of the test excludes adjusting the behavior of the browser based on the modification to the header associated with the browser.
12. The one or more non-transitory machine-readable storage media of claim 9, wherein the operations further comprise:
retrieving the first screenshot, the first metadata, the second screenshot, and the second metadata from the data storage; and
causing presentation, on a user interface of a user device, of a visual comparison of the retrieved first screenshot and first metadata compared against the retrieved second screenshot and second metadata.
13. The one or more non-transitory machine-readable storage media of claim 12, wherein the operations further comprise:
receiving, from the user device, first parameters indicating the first browser/operating system combination and second parameters indicating the second browser/operating system combination for a visual comparison of the state with respect to the first browser/operating system combination and the second browser/operating system combination; and
retrieving the first screenshot, the first metadata, the second screenshot, and the second metadata in response to receiving the first parameters and the second parameters.
14. The one or more non-transitory machine-readable storage media of claim 12, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in a synchronized manner along the state and a corresponding build of the web application.
15. The one or more non-transitory machine-readable storage media of claim 12, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in a side-by-side visual format.
16. The one or more non-transitory machine-readable storage media of claim 12, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in an overlay visual format.
17. A system comprising:
one or more processors; and
one or more non-transitory machine-readable storage media having instructions stored thereon that are executable by the one or more processors to cause the system to perform operations, the operations comprising:
building a test for a web application;
obtaining one or more header modifications selected from a group of header modifications consisting of: a server header modification of a server header of a server system associated with a first run of the test with respect to a first browser/operating system combination, the server header modification adjusting behavior of the server system, the server system including one or more servers; and a browser header modification of a browser header, the browser header modification adjusting behavior of a browser associated with the first browser/operating system combination;
performing the first run of the test, wherein performing the first run of the test includes:
causing, for the first run of the test based on the one or more header modifications, adjustment of behavior of the server system, the browser, or a combination of the server system and the browser;
capturing a first screenshot of a state of the web application that is encountered during the first run; and
capturing first metadata that corresponds to the state as encountered during the first run; and
storing, in data storage, the first metadata and the first screenshot in association with the first browser/operating system combination such that the first screenshot and the first metadata are available for a visual comparison against a second screenshot and corresponding second metadata of the state as encountered during a second run of the test with respect to a second browser/operating system combination.
18. The system of claim 17, wherein the second browser/operating system combination is the same as the first browser/operating system combination and wherein performing the second run of the test excludes adjusting the behavior based on the one or more header modifications.
19. The system of claim 17, wherein the second browser/operating system combination is different from the first browser/operating system combination.
20. The system of claim 17, wherein the operations further comprise:
receiving, from a user device of a user, first parameters indicating the first browser/operating system combination and second parameters indicating the second browser/operating system combination for a visual comparison of the state with respect to the first browser/operating system combination and the second browser/operating system combination;
retrieving the first screenshot, the first metadata, the second screenshot, and the second metadata in response to receiving the first parameters and the second parameters; and
causing presentation, on a user interface of the user device, of a visual comparison of the retrieved first screenshot and first metadata compared against the retrieved second screenshot and second metadata.
21. The system of claim 17, wherein causing presentation on the user interface of the visual comparison includes causing the user interface to display the first screenshot and the second screenshot in a synchronized manner along the state and a corresponding build of the web application.
US15/800,017 2012-09-11 2017-10-31 Visual state comparator Abandoned US20180121400A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/800,017 US20180121400A1 (en) 2012-09-11 2017-10-31 Visual state comparator

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/610,082 US8938683B2 (en) 2012-09-11 2012-09-11 Visual state comparator
US14/599,888 US9805007B2 (en) 2012-09-11 2015-01-19 Visual state comparator
US15/800,017 US20180121400A1 (en) 2012-09-11 2017-10-31 Visual state comparator

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/599,888 Continuation US9805007B2 (en) 2012-09-11 2015-01-19 Visual state comparator

Publications (1)

Publication Number Publication Date
US20180121400A1 true US20180121400A1 (en) 2018-05-03

Family

ID=50234701

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/610,082 Active 2033-03-22 US8938683B2 (en) 2012-09-11 2012-09-11 Visual state comparator
US14/599,888 Active 2033-08-04 US9805007B2 (en) 2012-09-11 2015-01-19 Visual state comparator
US15/800,017 Abandoned US20180121400A1 (en) 2012-09-11 2017-10-31 Visual state comparator

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/610,082 Active 2033-03-22 US8938683B2 (en) 2012-09-11 2012-09-11 Visual state comparator
US14/599,888 Active 2033-08-04 US9805007B2 (en) 2012-09-11 2015-01-19 Visual state comparator

Country Status (1)

Country Link
US (3) US8938683B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230214081A1 (en) * 2020-05-26 2023-07-06 Indeed, Inc. System and Method for Displaying and Analyzing Interface Variants for Concurrent Analysis by a User

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130263098A1 (en) * 2012-03-29 2013-10-03 Pawel Piotr Duda Method and system for testing of mobile web sites
US9449126B1 (en) 2012-06-01 2016-09-20 Inkling Systems, Inc. System and method for displaying content according to a target format for presentation on a target presentation device
US20140189576A1 (en) * 2012-09-10 2014-07-03 Applitools Ltd. System and method for visual matching of application screenshots
US8938683B2 (en) 2012-09-11 2015-01-20 Ebay Inc. Visual state comparator
US20140095600A1 (en) * 2012-09-28 2014-04-03 Bradford H. Needham Multiple-device screen capture
US10387294B2 (en) 2012-10-12 2019-08-20 Vmware, Inc. Altering a test
US9292416B2 (en) 2012-10-12 2016-03-22 Vmware, Inc. Software development kit testing
US9684587B2 (en) 2012-10-12 2017-06-20 Vmware, Inc. Test creation with execution
US9292422B2 (en) 2012-10-12 2016-03-22 Vmware, Inc. Scheduled software item testing
US10067858B2 (en) * 2012-10-12 2018-09-04 Vmware, Inc. Cloud-based software testing
US9519624B1 (en) 2013-02-05 2016-12-13 Inkling Systems, Inc. Displaying previews of content items for electronic works in a target rendering environment
US20140317489A1 (en) * 2013-04-18 2014-10-23 Microsoft Corporation Device-independent validation of website elements
US9910764B2 (en) * 2013-06-24 2018-03-06 Linkedin Corporation Automated software testing
US9135151B2 (en) * 2013-09-18 2015-09-15 Yahoo! Inc. Automatic verification by comparing user interface images
US11521229B2 (en) * 2014-01-09 2022-12-06 Xandr Inc. Systems and methods for mobile advertisement review
US10241978B2 (en) 2014-06-09 2019-03-26 Entit Software Llc Measuring compatibility of viewers by leveraging user-provided element definitions
US9317398B1 (en) * 2014-06-24 2016-04-19 Amazon Technologies, Inc. Vendor and version independent browser driver
US9336126B1 (en) * 2014-06-24 2016-05-10 Amazon Technologies, Inc. Client-side event logging for heterogeneous client environments
US9430361B1 (en) 2014-06-24 2016-08-30 Amazon Technologies, Inc. Transition testing model for heterogeneous client environments
US10445166B2 (en) * 2014-06-24 2019-10-15 International Business Machines Corporation System verification of interactive screenshots and log files between client systems and server systems within a network computing environment
US10097565B1 (en) 2014-06-24 2018-10-09 Amazon Technologies, Inc. Managing browser security in a testing context
WO2016043729A1 (en) 2014-09-17 2016-03-24 Hewlett Packard Enterprise Development Lp User interface layout comparison
CN104361021B (en) * 2014-10-21 2018-07-24 小米科技有限责任公司 Method for identifying web page coding and device
WO2016065216A2 (en) * 2014-10-22 2016-04-28 Springbox Labs, Inc. Method and apparatus for rendering websites on physical devices
US9836385B2 (en) * 2014-11-24 2017-12-05 Syntel, Inc. Cross-browser web application testing tool
US10055340B2 (en) 2015-06-10 2018-08-21 International Business Machines Corporation Dynamic test topology visualization
US10216377B2 (en) * 2016-03-22 2019-02-26 Microsoft Technology Licensing, Llc Visual regression analysis
US10719428B2 (en) * 2016-07-20 2020-07-21 Salesforce.Com, Inc. Automation framework for testing user interface applications
CN106569945B (en) * 2016-10-19 2019-10-25 上海斐讯数据通信技术有限公司 Color system and method are distinguished in a kind of automation for ui testing
US10824594B2 (en) 2016-11-07 2020-11-03 Qualcomm Incorporated Associating a captured screenshot with application-specific metadata that defines a session state of an application contributing image data to the captured screenshot
US10127689B2 (en) * 2016-12-20 2018-11-13 International Business Machines Corporation Mobile user interface design testing tool
CN106873871B (en) * 2017-01-06 2018-09-11 腾讯科技(深圳)有限公司 Page screenshot method and apparatus
US9934129B1 (en) 2017-03-17 2018-04-03 Google Llc Determining application test results using screenshot metadata
CN108182059A (en) * 2017-12-28 2018-06-19 云之行互联网科技(北京)有限公司 A kind of processing method and processing device of software code
US10909024B2 (en) * 2018-04-19 2021-02-02 Think Research Corporation System and method for testing electronic visual user interface outputs
CN108959068B (en) * 2018-06-04 2022-04-22 广州视源电子科技股份有限公司 Software interface testing method, device and storage medium
US11113040B2 (en) * 2018-07-18 2021-09-07 Verizon Patent And Licensing Inc. Systems and methods for orchestration and automated input handling of interactions received via a user interface
US10635574B1 (en) 2018-11-06 2020-04-28 Login VSI B.V. Screenshot testing of applications on windows desktop environments

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6918066B2 (en) * 2001-09-26 2005-07-12 International Business Machines Corporation Method and system for evaluating applications on different user agents
DE102006036304A1 (en) * 2006-08-03 2008-02-07 Universität Karlsruhe (Th) Method for analyzing and / or testing at least one user interface, data processing device and computer program product
US9208054B2 (en) * 2011-02-14 2015-12-08 Fujitsu Limited Web service for automated cross-browser compatibility checking of web applications
US8938683B2 (en) 2012-09-11 2015-01-20 Ebay Inc. Visual state comparator

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080077855A1 (en) * 2006-09-21 2008-03-27 Shirel Lev Generic website
US20090100345A1 (en) * 2007-10-15 2009-04-16 Miller Edward F Method and System for Testing Websites
US8285813B1 (en) * 2007-12-05 2012-10-09 Appcelerator, Inc. System and method for emulating different user agents on a server
US20090249216A1 (en) * 2008-03-28 2009-10-01 International Business Machines Corporation Interacting with multiple browsers simultaneously using linked browsers controlled from a primary browser interface
US20100211865A1 (en) * 2009-02-19 2010-08-19 Microsoft Corporation Cross-browser page visualization generation
US20100211893A1 (en) * 2009-02-19 2010-08-19 Microsoft Corporation Cross-browser page visualization presentation
US20100218106A1 (en) * 2009-02-24 2010-08-26 International Business Machines Corporation Automatically Selecting Internet Browser and Providing Web Page Service
US20110078663A1 (en) * 2009-09-29 2011-03-31 International Business Machines Corporation Method and Apparatus for Cross-Browser Testing of a Web Application
US20110173589A1 (en) * 2010-01-13 2011-07-14 Microsoft Corporation Cross-Browser Interactivity Testing
US20120042281A1 (en) * 2010-08-12 2012-02-16 Vmware, Inc. Same-display comparison of content for different renditions of a single computer program
US20120084345A1 (en) * 2010-10-05 2012-04-05 Microsoft Corporation Website compatibility shims
US20130212461A1 (en) * 2012-02-13 2013-08-15 Accenture Global Services Limited Browser and operating system compatibility
US20130227354A1 (en) * 2012-02-23 2013-08-29 Qualcomm Innovation Center, Inc. Device, method, and system to enable secure distribution of javascripts

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230214081A1 (en) * 2020-05-26 2023-07-06 Indeed, Inc. System and Method for Displaying and Analyzing Interface Variants for Concurrent Analysis by a User

Also Published As

Publication number Publication date
US8938683B2 (en) 2015-01-20
US20140075344A1 (en) 2014-03-13
US20150135100A1 (en) 2015-05-14
US9805007B2 (en) 2017-10-31

Similar Documents

Publication number Title
US20180121400A1 (en) Visual state comparator
US20210318851A1 (en) Systems and Methods for Dataset Merging using Flow Structures
US9152542B2 (en) Automatic generation of test scripts
US8104020B2 (en) Method and system to automate software testing using sniffer side and browser side recording and a toolbar interface
US10404789B2 (en) Systems, method, and non-transitory computer-readable storage media for generating code for displaying a webpage
US20190243751A1 (en) Automated selection of test cases for regression testing
US20120278698A1 (en) Method and system for processing a webpage
US10614156B1 (en) System and method for using a dynamic webpage editor
US11201806B2 (en) Automated analysis and recommendations for highly performant single page web applications
US9690682B2 (en) Program information generating system, method, and computer program product
US20220138093A1 (en) Method and apparatus for continuous integration testing
US11321524B1 (en) Systems and methods for testing content developed for access via a network
US10951486B2 (en) Terminal device, UI expansion method, and UI expansion program
JP2021108148A (en) Method for processing data, data processor, electronic apparatus, computer readable storage medium, and computer program
JP6441786B2 (en) Test support apparatus, test support method, and program
US20150121192A1 (en) Debugging errors in display of web pages with partial page refresh
US10437707B2 (en) Evaluating and presenting software testing project status indicators
US9348613B2 (en) User context and behavior based user interface recommendation
JP2020095679A (en) Information processing device, program, and system
CN112783762B (en) Software quality assessment method, device and server
US9489417B2 (en) Auto-search textbox in a content submission system
CN113900663A (en) Data processing method and device
CN113378346A (en) Method and device for model simulation
KR20210038975A (en) Session recommendation method, device and electronic device
WO2015001721A1 (en) User-interface review method, device, and program

Legal Events

Code Title Description
STPP Information on status: patent application and granting procedure in general
     Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
     Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
     Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
     Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general
     Free format text: ADVISORY ACTION MAILED
STPP Information on status: patent application and granting procedure in general
     Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general
     Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
     Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
     Free format text: FINAL REJECTION MAILED
STCB Information on status: application discontinuation
     Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION