US20190370150A1 - Methods and systems for isolating software components - Google Patents

Methods and systems for isolating software components Download PDF

Info

Publication number
US20190370150A1
Authority
US
United States
Prior art keywords
software
code
coupled
given
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/446,692
Inventor
Eli Lopian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TYPEMOCK Ltd
Original Assignee
TYPEMOCK Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/IL2007/001152 (published as WO2008038265A2)
Application filed by TYPEMOCK Ltd
Priority to US16/446,692
Publication of US20190370150A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/362 Software debugging
    • G06F11/3644 Software debugging by instrumenting at runtime
    • G06F11/3604 Software analysis for verifying properties of programs
    • G06F11/3612 Software analysis for verifying properties of programs by runtime analysis
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F11/3696 Methods or tools to render software testable

Definitions

  • the present invention relates generally to validating software.
  • Dependency Injection describes the situation where one object uses a second object to provide a particular capacity. For example, being passed a database connection as an argument to the constructor instead of creating one internally.
  • the term “Dependency injection” is a misnomer, since it is not a dependency that is injected; rather, it is a provider of some capability or resource that is injected.
  • Validating software is a complex problem that grows exponentially as the complexity of the software grows. Even a small mistake in the software can cause a large financial cost. In order to cut down on these costs, software companies test each software component as it is developed or during interim stages of development.
  • Certain embodiments of the present invention disclose a method that enables isolating software components without changing the production code. Testing isolated software components gives better testing results, as the coverage of the tests is much higher and the complexity does not grow exponentially. This is a basic requirement for validating a software component. In order to isolate the components, there is a need to design the program that utilizes the software components in such a way that the components can be changed. This is part of a pattern called Inversion of Control or Dependency Injection. For example, when validating that software behaves correctly on the 29th of February, there is a need to change the computer system's date before running the test. This is not always possible (due to security measures) or desirable (it may disturb other applications).
  • the method used today to verify this is by wrapping the system call to get the current date with a new class.
  • This class may have the ability to return a fake date when required. This may allow injecting the fake date into the code being tested for, and enable validating the code under the required conditions.
  • isolating the code base and injecting fake data are required.
  • a mock framework 110 may dynamically create a fake object that implements the same interface as the real object (the same interface that is created using the Abstract Factory), and has the ability to define the behavior of the object and to validate the arguments passed to the object.
  • Legacy code refers to any code that was not designed to allow insertions of fake objects. It would be too costly to rewrite such code, as this may lead to an increase in development time just to make the code testable.
  • certain embodiments of the invention add code that is inserted or weaved 107 into the production code base 106 ( FIG. 1 ) that is being tested.
  • the added code may enable hooking fake or mock objects into the production code by calling the Mock framework 110 .
  • This framework can decide to return a fake object.
  • the framework may also be able to validate and change the arguments passed into the method.
  • processors may be used to process, display, store and accept information, including computer programs, in accordance with some or all of the teachings of the present invention, such as but not limited to a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device, either general-purpose or specifically constructed, for processing; a display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CDROMs, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting.
  • the term “process” as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and/or memories of a computer.
  • the above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention.
  • FIG. 1 is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention
  • FIG. 2 is an example of a decision table for .NET code, constructed and operative in accordance with certain embodiments of the present invention
  • FIG. 3 is a simplified flowchart illustration for the weaver of FIG. 1 , constructed and operative in accordance with certain embodiments of the present invention
  • FIG. 4 is a simplified functional block diagram of a profile linker and associated components, constructed and operative in accordance with certain embodiments of the present invention
  • FIG. 5 is a simplified functional block diagram of the mock framework of FIG. 1 and associated components, all constructed and operative in accordance with certain embodiments of the present invention
  • FIG. 6 is a simplified flow diagram of expectations used by the expectation manager of FIG. 5 , in accordance with certain embodiments of the present invention.
  • FIG. 7 is a simplified flow diagram of a natural mock setting embodiment of the present invention.
  • FIG. 8 is a simplified flow diagram of a mocked method flow which may be performed by the apparatus of FIG. 1 , in accordance with certain embodiments of the present invention.
  • FIG. 9 is a simplified diagram of a method by which the mock framework of FIG. 1 sends messages to the tracer of FIG. 1 , all in accordance with certain embodiments of the present invention.
  • FIG. 1 is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention.
  • the run time system 102 is the system that actually runs the code and the tests; this could be an operating system, a scripting system or a virtual machine (as in Java or .NET).
  • the weaver 104 is responsible for inserting the added hooking code into the production code base 106 . In each method of the production code the weaver 104 may insert a small piece of code 107 that calls the Mock framework 110 which then decides whether to call the original code or to fake the call. The inserted code 107 can also modify the arguments passed to the production method if required. This is handy for arguments passed by reference.
  • the production code base 106 is the code that is to be isolated. There is no need to change the design of this code just to isolate the code.
  • the test code 108 calls the Mock framework 110 in order to change the behavior of the production code. Here the test can set up what to fake, how to validate the arguments that are passed, what to return instead of the original code and when to fail the test.
  • the mock framework 110 is responsible for creating mock objects dynamically and for managing the expectations and behavior of all fake calls.
  • the tracer 112 is typically used to debug and graphically display the methods that are mocked. It is typically used to analyze the faked and original calls of the production code.
  • the configurator 114 is used to set the options of the tool.
  • code 107 into production code 106
  • ways in which it is possible to insert code 107 into production code 106 such as but not limited to the following:
  • FIG. 2 is an example of a decision table for .NET code.
  • the method that was chosen was the Profiler API ( FIG. 3 ).
  • a Profiler Linker was created. ( FIG. 4 )
  • the Weaver 104 registers to the .NET Runtime (CLR) 102 and typically just before the JIT Compiler is run to create machine code 304 from the Byte code 302 , instructions pertaining to the added hooking code are inserted as indicated at reference numeral 308 .
  • the Weaver 104 typically analyses the signature of the method in order to understand the parameters passed and the return value. This enables writing code that may call the Mock framework 110 to check if the method needs to be faked, and to pass the arguments to the Framework 110 for validating.
  • the code also changes the values of the parameters if required. This is useful for parameters that are passed by reference and for swapping the values for the test (e.g. it is possible to change a filename that is passed as a parameter to point to a dummy file required for the test).
  • the weaver 104 is actually a framework that can be used to insert any new code into a code base.
  • the weaver 104 has to change the metadata and add information that points to the correct Mock framework 110 . This is typically done by putting the framework 110 in a known directory (GAC) and by parsing the assembly (dll file) to extract relevant information (version and signing signature). Some information is passed from the Mock framework 110 to the Weaver 104 ; this is typically done using environment variables, although there are other methods available to do this. According to certain embodiments of the present invention, one, some or all of the following may hold:
  • Each call to a method is defined as ‘call <entry in method table>’.
  • Each entry in the method table has the name of the method, its type (which is actually an <entry in the type table>) and other information.
  • Each entry in the type table has the name of the type and the assembly that it is defined in (which is an <entry in the assembly table>).
  • in order to support profiling and code coverage tools that may be required to run together with the tests, a profile linker (Profiler Linker 401 ) may be employed.
  • the profile linker 401 loads one or more profile assemblies (COM objects that are suitable to be a profiler) and then calls each profiler sequentially and weaves code from both the assemblies.
  • the profiler linker 401 handles profilers from different versions and ensures that they work correctly.
  • the testing system may detour a CreateProcess system API (or any other similar API). This is the API that tells the operating system to start a new process.
  • the testing application may then check if the new process requires mocking or linking. It can do so by looking at the executable name and at the environment variables passed.
  • the system may change/add to these variables and/or may call the linker profiler and/or signal the two or more profilers to run together.
  • the mock framework 110 is in charge of managing the expectations. This framework is linked by the test code 108 , and expectations are recorded using the framework's API.
  • the mock framework 110 typically comprises an Expectation Manager 550 , a Natural Mock Recorder 520 , a Dynamic Mock Builder 530 , an Argument Validation 540 , a Run Time Engine 510 and a Verifier 560 .
  • the Expectation Manager 550 is a module used to manage the expectations for the fake code.
  • the expectations may be kept in the following way, which is not the only way to do this, but it has its advantages.
  • the Framework 110 holds a map of type expectations 620 that are indexed via the type name. Each Type Expectation is connected to a list of Instance Expectations 630 indexed by the instance and another reference to an Instance Expectation that represents the expectations for all instances.
  • All Instance Expectations of the same type reference an Instance Expectation that manages the expectations for all static methods. This is because static methods have no instance.
  • Each Instance Expectation contains a map of Method Expectations 660 that is indexed via the method name. Each method may have the following four lists as shown in FIG. 6 :
  • the Method Expectation 660 may first check for a conditional value, then a default conditional value, then a regular value and finally the default value.
  • the Null Return Value 680 and Null Instance Expectation 640 are classes that are part of the Null Object pattern. This leads to faster code while running, as there is no need to check if references to Return Value or Instance Expectation are null.
  • Expectations of Generic types are managed each in its own Type Expectation class with the generic parameters as a key, although the non-generic Type Expectation points to the generic one.
  • Expectations of Generic methods are managed each in its own Method Expectation class with the generic parameters as a key, although the non-generic Method Expectation points to the generic one.
  • Reflective mocks use string names of the methods that are to be mocked.
  • the Framework analyzes the tested assembly, searches for the method and checks that it exists and has the correct return value. An expectation is then added for that method.
  • the test code 108 can then change the behavior of the code and registers what that method should do and how many times. The method may be instructed to return a fake result, throw an exception, or call the original code. The framework may also be instructed to always fake a method (this is the default return), or to fake the next call or number of calls (managed by the Return Value Stack).
  • the Framework can be directed to mock all instances, a future instance or to create the mocked instance so that it can be passed to the production code 106 (this may be managed by the Type Expectation).
  • Methods can also have conditional expectations. Conditional expectations may fake calls only if the arguments passed are the same as those expected. The framework allows expectations to be canceled and changed before the actual code is called.
  • Natural Mocks use the actual calls to the methods that are to be mocked.
  • the Framework may be called by these calls (because all the methods are already weaved) and the framework may record that the call is expected, and add it to the list of expectations.
  • the framework allows setting the behavior in the same way as Reflective Mocks.
  • Chained calls are also supported using Natural Mocks. This allows a chain of calls to be mocked in one statement.
  • the Framework may build the return object of one statement in the chain as an input for the next statement in the chain.
  • the framework has to differentiate between creating Dynamic Mocks for incomplete types and real objects with dummy constructor arguments for complete or static objects.
  • using Natural Mocks is easier than using Reflective Mocks, and Natural Mocks are supported by IDE editors that offer code completion and automatic refactoring, but they cannot cover all cases.
  • Refactoring is the process of restructuring code without changing its behavior. There are development tools that help to automate this task. When a method cannot be called from the code (for example if its scope is private), Reflective Mocks must be used. Although Reflective Mocks have the advantage of covering all scopes of the methods, they are more prone to mistakes as the method names are passed as strings.
  • FIG. 7 is a data flow diagram showing a Natural Mock Setting Expectations Flow according to an embodiment of the present invention.
  • the Dynamic Mock Builder 530 is used to create new objects in a dynamic assembly. This creates real objects out of incomplete classes (with abstract methods or interfaces). These objects can then be used and passed to the production code, so that when methods are called the Run Time Engine 510 may return fake results to the created methods. These objects are built using the standard Reflection library.
  • the Argument Validation 540 is responsible for verifying that the arguments passed are those that were expected. This is done using a hook that actually does the validation.
  • the Arguments passed and those expected are sent to a validation method that checks different attributes of the object.
  • the attributes, which may be of virtually unlimited scope, may, for example, indicate that the objects are the same or that the .Equals( ) method is true.
  • the framework 110 has a predefined group of argument validators, including string comparisons, group and set comparisons, and a validator that checks that the object is one being faked by the framework.
  • the test code 108 can register a customized validator if this is required.
  • the arguments passed to the recording method are used to validate the arguments, unless explicitly overridden.
  • the framework 110 also allows setting arguments of the mocked methods. This actually changes the values of the arguments before the actual code is called. This is useful for arguments that are passed by reference, so that their values can be changed before they are returned, and for faking [out] arguments.
  • the run time engine 510 is called from the code weaved into the production code.
  • the Run Time engine 510 checks to see if the specific type, instance and method should be faked. If they are, the code may validate the arguments and return the fake return value.
  • the Run Time Engine 510 checks the arguments to see if a conditional expectation should be used. The engine also calls the argument validation, and when the arguments are not valid the engine may throw an exception. There are cases where throwing the exception is not enough and, when configured correctly, these validation errors may appear at the verifying stage too.
  • Performance is an issue for the Run Time engine 510 as it is run for every method called.
  • One way to solve this is to check first if the method is faked; this returns quickly if no mocks have been created or if the type is not mocked. Only after knowing that the method is mocked are the arguments passed and validated, since passing the arguments can take time, as they all have to be encapsulated within objects.
  • the Run Time Engine 510 passes each call to the Natural Mock Recorder.
  • a flow diagram of the Mocked Method Flow described herein is shown in FIG. 8 .
  • the Engine 510 may employ the type, method, instance and type generic and method generic parameters. The last two are for generic specific code only and with them it is possible to map the correct expectations.
  • the engine receives this information from the weaver 104 that analyzed the metadata of the code.
  • the Run Time Engine 510 checks if expectations contain mocks for the new instance. This way the Engine can manage mocking objects that are created after the expectations are set (Future Objects).
  • a static constructor is called once for each type.
  • the Run Time Engine 510 remembers that this was mocked. Then when a method of that type is called and the type is not mocked any more, the static constructor may be called. This ensures that mocking the static constructor in one test will not affect another test.
  • the verifier is called at the end of the test and throws errors when not all the expected calls are made or when an argument validator fails.
  • the verifier can wait until all expected mocks are completed. This is a feature that helps test multi-threaded code, where the tested code runs asynchronously in another thread.
  • the framework must run on all .NET versions and uses reflection to call newer-version APIs from older versions.
  • regarding the Production code base 106 , nothing has to change here.
  • the test code 108 calls the Mock Framework API in order to change the behavior of the production code.
  • the tracer 112 is used to debug and graphically display the methods that are mocked. It is used to analyze the faked and original calls of the production code. Mocking of future objects can be a bit confusing, and the tracer 112 helps track these issues.
  • FIG. 9 shows the Mock framework 110 sending messages to the tracer 112 process.
  • the configurator 114 is used to configure the behavior of the Framework 110 .
  • Using the Configurator 114 it is possible to link a code coverage tool with the mock framework 110 . This may be done by changing the registry key of the coverage tool to point to the Profile Linker 401 .
  • the Linker 401 then loads both the coverage tool and the mock framework 110 .
  • Advantages of certain embodiments of the present invention include that it is much easier to verify the code base of an application. There is no need to perform pre-compile steps, or to create specially designed code to be able to isolate the code in order for the code to be testable. For example, suppose a developer had the following production code: Dogs.GetDog(“rusty”).Tail.Wag( ) . . . Speed(5);
  • test code 108 would look like this (the production code changes are not shown):
  • the weaver 104 may add code to the ByteCode that mimics the following code, it being emphasized that the weaver 104 adds the code directly to the ByteCode, the original code being unaffected.
  • the equivalent high level language is shown for clarity:
  • the stack may be used to keep the mockReturn object instead of a local variable. This saves the weaver 104 from defining the variable in the metadata. This helps to test the code. With this in place it is possible to test that the code that counts the number of days in the current month also works for leap years. Following is an example of one test, showing the code to be tested:
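  • A stand-in sketch of such production code and test is given below; the names (Calendar.DaysInCurrentMonth, MockTheFollowing, Return, EndMocking) are assumed for this illustration, and the recorder calls are no-op stubs, so nothing is actually intercepted in the sketch itself:
    using System;
    // Production code under test: counts the days in the current month via DateTime.Now.
    public static class Calendar
    {
        public static int DaysInCurrentMonth()
        {
            DateTime now = DateTime.Now;       // the weaved call that the test can fake
            return DateTime.DaysInMonth(now.Year, now.Month);
        }
    }
    // No-op stand-ins for the recording API used by the test (names assumed).
    public static class Mock
    {
        public static void MockTheFollowing() { }     // start recording expectations
        public static void Return(object value) { }   // value to return for the recorded call
        public static void EndMocking() { }           // stop recording
    }
    public static class LeapYearTest
    {
        public static void Main()
        {
            // Record an expectation: the next call to DateTime.Now should return 29 Feb 2024.
            Mock.MockTheFollowing();
            DateTime ignored = DateTime.Now;   // the call being recorded as the one to fake
            Mock.Return(new DateTime(2024, 2, 29));
            Mock.EndMocking();
            // With a weaved fake in place, February of a leap year would report 29 days.
            Console.WriteLine(Calendar.DaysInCurrentMonth());
        }
    }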
  • Verifying Calls: the mechanism can be used to test that a certain call was actually made. In the previous test, DateTime.Now might never even be called. As the Mock framework 110 counts the calls made, it can now be verified that the expected calls were actually made.
  • the Weaved code 107 may be:
  • the Original Code may be:
  • the Weaved code 107 may include:
  • a new method may be added in the metadata that points to the original method (e.g., for CreateProcess, add a new method >>CreateProcess).
  • “>>” may be used in the name, as it is legal in the metadata but not in higher level languages.
  • the original line may then be changed to point to a newly allocated piece of code that simply calls the new method just defined. In this manner, all calls to the pinvoke method will now be directed to the new method.
  • the new method can now be faked as described herein in relation to normal methods.
  • when a static constructor is called, it may be saved. Subsequently, when a clean-up of the system between tests is requested/desired, all the static constructors may be re-invoked. For loaded types that don't have static constructors, all the static fields may be reset. In order to make sure that all types have been identified, all types loaded in the system may be enumerated. According to further embodiments, it is possible to tell if a type has been loaded by:
  • after an instance of a given method or function is mocked, it may be left in a bad/improper state and, therefore, should not be used in the system after the test. Such methods or functions may be referred to as “stale mocks”. To ensure that stale mocks are not re-used by the system, a list of used mock instances may be stored. Whenever an instance is used, it may be tested against that list, and the test may fail if the instance is a stale mock, i.e. on the list.
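  • A possible sketch of this bookkeeping (the registry below is illustrative; a real implementation would typically compare instances by reference):
    using System;
    using System.Collections.Generic;
    // Track instances whose mocked behavior belonged to a previous test ("stale mocks")
    // so that re-using them can be detected and failed early.
    public static class StaleMockRegistry
    {
        private static readonly HashSet<object> Stale = new HashSet<object>();
        public static void MarkStale(object instance) => Stale.Add(instance);
        public static void AssertUsable(object instance)
        {
            if (Stale.Contains(instance))
                throw new InvalidOperationException("Stale mock used after its test completed.");
        }
        public static void Reset() => Stale.Clear();   // called when cleaning up between tests
    }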
  • software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs.
  • ROM read only memory
  • EEPROM electrically erasable programmable read-only memory
  • Components described herein as software may, alternatively, be implemented wholly or partly in hardware, if desired, using conventional techniques.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A software testing system operative to test a software application comprising a plurality of software components, at least some of which are highly coupled and hence unable to support dependency injection, each software component operative to perform a function, the system comprising apparatus for at least partially isolating, from within the software application, at least one highly coupled software component which performs a given function, and apparatus for testing at least the at least partially isolated highly coupled software component.

Description

    PRIORITY CLAIMS
  • This application is a continuation of U.S. patent application Ser. No. 15/950,202, titled “Method and System for Isolating Software Components”, filed by the inventors of the present application on Apr. 11, 2018;
  • which, in turn, is a continuation of U.S. patent application Ser. No. 15/005,145, titled “Method and System for Isolating Software Components”, filed by the inventors of the present application on Jan. 25, 2016;
    which, in turn, is a continuation in part of U.S. Utility patent application Ser. No. 13/706,711, titled “Method and System for Isolating Software Components”, filed by the inventors of the present application on Dec. 6, 2012;
    which, in turn, is a continuation of U.S. patent application Ser. No. 12/442,948, titled “Method and System for Isolating Software Components”, filed by the inventors of the present application on Mar. 25, 2009;
    which, in turn, is a national phase entry of PCT Application No. PCT/IL2007/001152 titled “Method and System for Isolating Software Components”, filed by the inventors of the present application on Sep. 20, 2007;
    which, in turn, claims priority from U.S. provisional application No. 60/826,759, entitled “Method and System for Isolating Software Components” and filed by the inventors of the present Application on Sep. 25, 2006.
    Based on the above listed priority chain, priority is hereby claimed from all of the above listed Applications, all of which are hereby incorporated by reference in their entirety into the present Application.
  • FIELD OF THE INVENTION
  • The present invention relates generally to validating software.
  • BACKGROUND OF THE INVENTION
  • Conventional Internet sources state that “Dependency Injection describes the situation where one object uses a second object to provide a particular capacity. For example, being passed a database connection as an argument to the constructor instead of creating one internally. The term “Dependency injection” is a misnomer, since it is not a dependency that is injected, rather it is a provider of some capability or resource that is injected.”
  • Validating software is a complex problem that grows exponentially as the complexity of the software grows. Even a small mistake in the software can cause a large financial cost. In order to cut down on these costs, software companies test each software component as it is developed or during interim stages of development.
  • The disclosures of all publications and patent documents mentioned in the specification, and of the publications and patent documents cited therein directly or indirectly, are hereby incorporated by reference.
  • SUMMARY OF THE INVENTION
  • Certain embodiments of the present invention disclose a method that enables isolating software components without changing the production code. Testing isolated software components gives better testing results, as the coverage of the tests is much higher and the complexity does not grow exponentially. This is a basic requirement for validating a software component. In order to isolate the components, there is a need to design the program that utilizes the software components in such a way that the components can be changed. This is part of a pattern called Inversion of Control or Dependency Injection. For example, when validating that software behaves correctly on the 29th of February, there is a need to change the computer system's date before running the test. This is not always possible (due to security measures) or desirable (it may disturb other applications). The method used today to verify this is to wrap the system call that gets the current date with a new class. This class may have the ability to return a fake date when required. This may allow injecting the fake date into the code being tested, and enable validating the code under the required conditions (a sketch of this wrapping approach is given after the list below). There are many cases where isolating the code base and injecting fake data are required. Here are a few examples:
  • 1. Fake a behavior that is scarce. (Dates, Out of Memory)
  • 2. Fake slow running components. (Database, Internet)
  • 3. Fake components that are difficult to set up (send e-mail, ftp)
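  • As an illustration of the wrapping approach described above, the following sketch (the names IClock, SystemClock, FakeClock and Scheduler are chosen for this example and are not part of any particular framework) wraps the call that obtains the current date so that a test can inject the 29th of February without touching the computer system's date:
    using System;
    // The production code depends on this interface instead of calling DateTime.Now directly.
    public interface IClock { DateTime Now { get; } }
    public class SystemClock : IClock
    {
        public DateTime Now => DateTime.Now;              // real system date
    }
    public class FakeClock : IClock
    {
        private readonly DateTime _fixedDate;
        public FakeClock(DateTime fixedDate) { _fixedDate = fixedDate; }
        public DateTime Now => _fixedDate;                // fake date injected by the test
    }
    public class Scheduler
    {
        private readonly IClock _clock;
        public Scheduler(IClock clock) { _clock = clock; }    // the clock is injected, not created internally
        public bool IsLeapDay() => _clock.Now.Month == 2 && _clock.Now.Day == 29;
    }
    public static class ClockWrapperDemo
    {
        public static void Main()
        {
            // The test injects 29 February 2024 without changing the computer's date.
            var scheduler = new Scheduler(new FakeClock(new DateTime(2024, 2, 29)));
            Console.WriteLine(scheduler.IsLeapDay());     // True
        }
    }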
  • Other cases may require a more complex solution. When faking a complete set of APIs (for example, faking sending an e-mail) there is a need to build a framework that enables isolating the complete API set. This means that the code may now have to support creating and calling two different components. One way to do this is to use the Abstract Factory Pattern. Using this pattern, the production code should never create the object (that needs to be faked for tests). Instead of creating the object, the Factory is asked to create the object, and the code calls the methods of the object that the factory created. The factory can then choose what object to create: a real one or a fake one. This requires using an interface that both clients (real and fake) need to implement. It also requires creating a complex mechanism that may allow the factory to choose what object to create and how to do so. This is done mainly through configuration files, although it can be done in code too.
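  • A minimal sketch of the Abstract Factory approach just described, using illustrative names (IMailSender, MailSenderFactory, OrderService) that are not taken from any particular library; the production code asks the factory for the object and never constructs it directly, so a test can configure the factory to hand back a fake:
    using System;
    using System.Collections.Generic;
    // Both the real and the fake sender implement the same interface.
    public interface IMailSender { void Send(string address, string subject, string body); }
    public class SmtpMailSender : IMailSender
    {
        public void Send(string address, string subject, string body) { /* real SMTP call would go here */ }
    }
    public class FakeMailSender : IMailSender
    {
        public readonly List<string> Sent = new List<string>();
        public void Send(string address, string subject, string body)
            => Sent.Add(address + ": " + subject);        // record instead of actually sending
    }
    // The factory decides, e.g. from configuration or a test override, which object to create.
    public static class MailSenderFactory
    {
        public static IMailSender Override;               // set by the test, null in production
        public static IMailSender Create() => Override ?? new SmtpMailSender();
    }
    public class OrderService
    {
        public void Confirm(string customerAddress)
        {
            IMailSender sender = MailSenderFactory.Create();   // the production code never news up the sender
            sender.Send(customerAddress, "Order confirmed", "Thank you.");
        }
    }
  • In a test, MailSenderFactory.Override would be set to a FakeMailSender, the production code would run unchanged, and the recorded address and subject could then be validated without any e-mail being sent.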
  • When testing using fake objects, it is important to validate the arguments passed to the fake object. In this way it is possible to validate that an e-mail that is supposed to be sent has the correct subject and address. The e-mail, of course, is not actually sent. There is no need to validate that component again, as the e-mail tests are done in isolation for the e-mail object.
  • It is possible to write the fake object and methods by hand or to use a mock framework 110. A mock framework 110 may dynamically create a fake object that implements the same interface as the real object (the same interface that is created using the Abstract Factory), and has the ability to define the behavior of the object and to validate the arguments passed to the object.
  • Although these methods work and enable testing the code base, they also require that the code is designed to be testable. This cannot always be done, as sometimes the code is legacy code, and should remain as such. Legacy code refers to any code that was not designed to allow insertions of fake objects. It would be too costly to rewrite such code, as this may lead to an increase in development time just to make the code testable.
  • The more complex the code, the harder it is to maintain. Designing the code to be testable puts constraints on the design that are not always compatible with the production design. For example, the code may be required to implement hooks that enable changing the actual object to a fake one. Such a hook can lead to misuse and hard-to-debug code, as it is intended for testing but it is in the production code.
  • It would be easier to test such code if there were no need to change the design for testability, while still being able to isolate and fake the code required to validate it.
  • For example, it would be easier if the system could be programmed to fake the real e-mail object. There would then be no need to create an Abstract Factory or interfaces or hooks if the system could be configured not to make the real calls on the e-mail object, but to fake them. In order to solve this problem, certain embodiments of the invention add code that is inserted or weaved 107 into the production code base 106 (FIG. 1) that is being tested. The added code may enable hooking fake or mock objects into the production code by calling the Mock framework 110. This framework can decide to return a fake object. The framework may also be able to validate and change the arguments passed into the method.
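  • Conceptually, the weaved code 107 behaves like the prologue in the following sketch. This is high-level C# standing in for injected byte code, and MockFramework.ShouldFake and GetFakeReturn are placeholder names for whatever entry points the mock framework 110 exposes, not actual API names:
    using System;
    // Placeholder stand-in for the mock framework entry points (names assumed for illustration).
    public static class MockFramework
    {
        public static bool ShouldFake(string type, string method, object instance, object[] args)
            => false;        // a real framework would consult the recorded expectations here
        public static object GetFakeReturn(string type, string method, object instance, object[] args)
            => null;         // a real framework would validate the arguments and return the fake value
    }
    public class Account
    {
        public decimal GetBalance(int accountId)
        {
            // Weaved prologue inserted in front of the original method body.
            object[] args = { accountId };
            if (MockFramework.ShouldFake(nameof(Account), nameof(GetBalance), this, args))
                return (decimal)MockFramework.GetFakeReturn(nameof(Account), nameof(GetBalance), this, args);
            // The original body runs untouched when the call is not faked.
            return QueryDatabase(accountId);
        }
        private decimal QueryDatabase(int accountId) => 0m;    // placeholder for the real work
    }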
  • Any suitable processor, display and input means may be used to process, display, store and accept information, including computer programs, in accordance with some or all of the teachings of the present invention, such as but not limited to a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device, either general-purpose or specifically constructed, for processing; a display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CDROMs, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting. The term “process” as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and/or memories of a computer.
  • The above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
  • The apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein. Alternatively or in addition, the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention.
  • Any trademark occurring in the text or drawings is the property of its owner and occurs herein merely to explain or illustrate one example of how an embodiment of the invention may be implemented.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Certain embodiments of the present invention are illustrated in the following drawings:
  • FIG. 1 is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention;
  • FIG. 2 is an example of a decision table for .NET code, constructed and operative in accordance with certain embodiments of the present invention;
  • FIG. 3 is a simplified flowchart illustration for the weaver of FIG. 1, constructed and operative in accordance with certain embodiments of the present invention;
  • FIG. 4 is a simplified functional block diagram of a profile linker and associated components, constructed and operative in accordance with certain embodiments of the present invention;
  • FIG. 5 is a simplified functional block diagram of the mock framework of FIG. 1 and associated components, all constructed and operative in accordance with certain embodiments of the present invention;
  • FIG. 6 is a simplified flow diagram of expectations used by the expectation manager of FIG. 5, in accordance with certain embodiments of the present invention;
  • FIG. 7 is a simplified flow diagram of a natural mock setting embodiment of the present invention;
  • FIG. 8 is a simplified flow diagram of a mocked method flow which may be performed by the apparatus of FIG. 1, in accordance with certain embodiments of the present invention; and
  • FIG. 9 is a simplified diagram of a method by which the mock framework of FIG. 1 sends messages to the tracer of FIG. 1, all in accordance with certain embodiments of the present invention.
  • DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
  • Reference is now made to FIG. 1 which is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention. The run time system 102 is the system that actually runs the code and the tests; this could be an operating system, a scripting system or a virtual machine (as in Java or .NET). The weaver 104 is responsible for inserting the added hooking code into the production code base 106. In each method of the production code the weaver 104 may insert a small piece of code 107 that calls the Mock framework 110 which then decides whether to call the original code or to fake the call. The inserted code 107 can also modify the arguments passed to the production method if required. This is handy for arguments passed by reference.
  • The production code base 106 is the code that is to be isolated. There is no need to change the design of this code just to isolate the code. The test code 108 calls the Mock framework 110 in order to change the behavior of the production code. Here the test can set up what to fake, how to validate the arguments that are passed, what to return instead of the original code and when to fail the test. The mock framework 110 is responsible for creating mock objects dynamically and for managing the expectations and behavior of all fake calls. The tracer 112 is typically used to debug and graphically display the methods that are mocked. It is typically used to analyze the faked and original calls of the production code. The configurator 114 is used to set the options of the tool.
  • There are several ways in which it is possible to insert code 107 into production code 106 such as but not limited to the following:
  • (a) Change the executable on disk before running the tests,
    (b) Use System IO Hooks to change the executable just before reading it from the disk,
    (c) Use function hooking techniques,
    (d) Use RunTime ClassLoader hooks to change the code before it is run, and
    (e) Use Profiler and Debug API's to change the code 302 before it is loaded as indicated by arrow 306 in FIGS. 3-4.
    Each method has its pros and cons. The main decision factors are ease of implementation and whether operation is manual or automatic, as selected by the user.
  • FIG. 2 is an example of a decision table for .NET code. The method that was chosen was the Profiler API (FIG. 3). In order to solve the issues with the code coverage tool, a Profiler Linker was created. (FIG. 4)
  • Referring now to FIG. 3, the Weaver 104 registers to the .NET Runtime (CLR) 102 and typically just before the JIT Compiler is run to create machine code 304 from the Byte code 302, instructions pertaining to the added hooking code are inserted as indicated at reference numeral 308. The Weaver 104 typically analyses the signature of the method in order to understand the parameters passed and the return value. This enables writing code that may call the Mock framework 110 to check if the method needs to be faked, and to pass the arguments to the Framework 110 for validating. The code also changes the values of the parameters if required. This is useful for parameters that are passed by reference and for swapping the values for the test (e.g. it is possible to change a filename that is passed as a parameter to point to a dummy file required for the test).
  • The weaver 104 is actually a framework that can be used to insert any new code into a code base. The weaver 104 has to change the metadata and add information that points to the correct Mock framework 110. This is typically done by putting the framework 110 in a known directory (GAC) and by parsing the assembly (dll file) to extract relevant information (version and signing signature). Some information is passed from the Mock framework 110 to the Weaver 104; this is typically done using environment variables, although there are other methods available to do this. According to certain embodiments of the present invention, one, some or all of the following may hold:
      • 1. The weaver 104 must run well in debug mode too and thus it is required to fix the debug-to-code mapping to ignore the code that is added.
      • 2. Try catch handlers must also be updated to point to the correct positions in the code after the code has been added.
      • 3. The weaver 104 must take into consideration small and large method headers and event handlers.
      • 4. Creating new code must take place when the assembly is first loaded.
      • 5. Signed assemblies can only call other signed assemblies so the Mock framework 110 is signed.
      • 6. In order to support multiple .NET versions the same Weaver 104 is used and has instructions that enable it to use features of the later version only when that version is available.
      • 7. The Mock framework 110 assembly should not be weaved as this may lead to a recursive infinite loop.
  • Weaving via the MetaData is now described with reference to FIG. 3.
  • Another method to isolate code and to insert fake objects is by changing the metadata tables. Each call to a method is defined as ‘call <entry in method table>’. Each entry in the method table has the name of the method, its type (which is actually an <entry in the type table>) and other information. Each entry in the type table has the name of the type and the assembly that it is defined in (which is an <entry in the assembly table>). By switching these entries, for example the assembly of the <type> and its <name>, all calls to a method can be redirected to a mocked object. Although this method requires building the mock object and handling delegating calls back to the original object, it has the advantage of being less intrusive as it does not change the production code, but only the metadata tables. This is useful in cases where the Run time system 102 has restrictions on the code being inserted.
  • An embodiment of the Profiler Linker 401 is now described with reference to FIG. 4. In order to support profiling and code coverage tools that may be required to run together with the tests, a profile linker may be employed. The profile linker 401 loads one or more profile assemblies (COM objects that are suitable to be a profiler) and then calls each profiler sequentially and weaves code from both the assemblies. The profiler linker 401 handles profilers from different versions and ensures that they work correctly. According to certain embodiments of the present invention, in order to have the ability to debug the code, there is a need to map the actual code to the source file. When code is added, the map needs to be fixed, and/or the linker 401 changes the code of both assemblies.
  • According to further embodiments, in order to automatically set correct environment variables for the testing system to work and to link several profilers together, the testing system may detour a CreateProcess system API (or any other similar API). This is the API that tells the operating system to start a new process. The testing application may then check if the new process requires mocking or linking. It can do so by looking at the executable name and at the environment variables passed. The system may change/add to these variables and/or may call the linker profiler and/or signal the two or more profilers to run together.
  • An embodiment of the Mock Framework 110 is now described with reference to FIGS. 5 and 6. The mock framework 110 is in charge of managing the expectations. This framework is linked by the test code 108, and expectations are recorded using the framework's API. The mock framework 110, as shown in FIG. 5, typically comprises an Expectation Manager 550, a Natural Mock Recorder 520, a Dynamic Mock Builder 530, an Argument Validation 540, a Run Time Engine 510 and a Verifier 560.
  • The Expectation Manager 550 is a module used to manage the expectations for the fake code. The expectations may be kept in the following way, which is not the only way to do this, but it has its advantages. The Framework 110 holds a map of type expectations 620 that are indexed via the type name. Each Type Expectation is connected to a list of Instance Expectations 630 indexed by the instance and another reference to an Instance Expectation that represents the expectations for all instances.
  • All Instance Expectations of the same type reference an Instance Expectation that manages the expectations for all static methods. This is because static methods have no instance. Each Instance Expectation contains a map of Method Expectations 660 that is indexed via the method name. Each method may have the following four lists as shown in FIG. 6:
  • 1. a default Return Value representing a value to return by default
  • 2. a queue of return values that should be faked
  • 3. a queue of conditional values that are used only when the arguments match
  • 4. a queue of conditional default values are used only when the arguments match
  • The Method Expectation 660 may first check for a conditional value, then a default conditional value, then a regular value and finally the default value. The Null Return Value 680 and Null Instance Expectation 640 are classes that are part of the Null Object pattern. This leads to faster code while running, as there is no need to check if references to Return Value or Instance Expectation are null. Expectations of Generic types are managed each in its own Type Expectation class with the generic parameters as a key, although the non-generic Type Expectation points to the generic one. Expectations of Generic methods are managed each in its own Method Expectation class with the generic parameters as a key, although the non-generic Method Expectation points to the generic one.
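  • A simplified sketch of this expectation bookkeeping (the class and member names are illustrative, not taken from the actual framework): a map from type name to Type Expectation, each holding per-instance expectations plus one entry shared by all instances and static methods, and each Instance Expectation holding Method Expectations carrying the four lists above:
    using System.Collections.Generic;
    // Illustrative data structures mirroring the expectation hierarchy described above.
    public class MethodExpectation
    {
        public object DefaultReturnValue;                                     // 1. default return value
        public Queue<object> ReturnValues = new Queue<object>();              // 2. queued values to fake
        public Queue<KeyValuePair<object[], object>> ConditionalValues        // 3. used only when the arguments match
            = new Queue<KeyValuePair<object[], object>>();
        public List<KeyValuePair<object[], object>> ConditionalDefaults       // 4. conditional default values
            = new List<KeyValuePair<object[], object>>();
    }
    public class InstanceExpectation
    {
        public Dictionary<string, MethodExpectation> Methods                  // indexed by method name
            = new Dictionary<string, MethodExpectation>();
    }
    public class TypeExpectation
    {
        public Dictionary<object, InstanceExpectation> Instances              // indexed by instance
            = new Dictionary<object, InstanceExpectation>();
        public InstanceExpectation AllInstances = new InstanceExpectation();  // shared by all instances and statics
    }
    public class ExpectationManager
    {
        public Dictionary<string, TypeExpectation> Types                      // indexed by type name
            = new Dictionary<string, TypeExpectation>();
    }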
  • Two ways to set expectations, namely by the use of Reflective mocks or Natural Mocks, are now described.
  • a. Reflective mocks use string names of the methods that are to be mocked. The Framework analyzes the tested assembly, searches for the method and checks that it exists and has the correct return value. An expectation is then added for that method. The test code 108 can then change the behavior of the code and register what that method should do and how many times. The method may be instructed to return a fake result, throw an exception, or call the original code. The framework may also be instructed to always fake a method (this is the default return), or to fake the next call or a number of calls (managed by the Return Value Stack).
  • There are also hooks to call user supplied code when the method is called. As some methods are instance methods, there are ways to tell the Framework what instance to mock. For example, the Framework can be directed to mock all instances, a future instance or to create the mocked instance so that it can be passed to the production code 106 (this may be managed by the Type Expectation). Methods can also have conditional expectations. Conditional expectations may fake calls only if the arguments passed are the same as those expected. The framework allows expectations to be canceled and changed before the actual code is called.
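  • A sketch of how reflective-mock test code might read; MockManager, MockAll, ExpectAndReturn and Verify are method names invented for this illustration (and stubbed below so the sketch is self-contained), not a documented API:
    using System;
    // Minimal stub of a reflective-mock style API (names assumed for illustration).
    public class TypeMockHandle
    {
        public TypeMockHandle ExpectAndReturn(string methodName, object value, int calls = 1) => this;
        public TypeMockHandle ExpectAndThrow(string methodName, Exception e) => this;
        public TypeMockHandle AlwaysReturn(string methodName, object value) => this;
    }
    public static class MockManager
    {
        public static TypeMockHandle MockAll(Type type) => new TypeMockHandle();   // mock all instances of the type
        public static void Verify() { }                                            // would throw if expected calls were not made
    }
    public class Calendar
    {
        public int DaysInCurrentMonth() => DateTime.DaysInMonth(DateTime.Now.Year, DateTime.Now.Month);
        public void Save() { }
    }
    public static class ReflectiveMockExample
    {
        public static void Main()
        {
            // The method to fake is addressed by its string name, so even private methods can be
            // reached, but a typo in the string is only caught at run time.
            MockManager.MockAll(typeof(Calendar))
                       .ExpectAndReturn("DaysInCurrentMonth", 29)                  // fake the next call
                       .ExpectAndThrow("Save", new InvalidOperationException());   // or throw instead
            // ... exercise the production code here ...
            MockManager.Verify();   // fails the test if the expected calls were never made
        }
    }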
  • b. Natural Mocks use the actual calls to the methods that are to be mocked. The Framework may be called by these calls (because all the methods are already weaved) and the framework may record that the call is expected, and add it to the list of expectations. The framework allows setting the behavior in the same way as Reflective Mocks. Chained calls are also supported using Natural Mocks. This allows a chain of calls to be mocked in one statement. The Framework may build the return object of one statement in the chain as an input for the next statement in the chain. Of course the framework has to differentiate between creating Dynamic Mocks for incomplete types and real objects with dummy constructor arguments for complete or static objects.
  • Using Natural Mocks is easier than using Reflective Mocks, and Natural Mocks are supported by IDE editors that offer code completion and automatic refactoring, but they cannot cover all cases. Refactoring is the process of restructuring code without changing its behavior. There are development tools that help to automate this task. When a method cannot be called from the code (for example if its scope is private), Reflective Mocks must be used. Although Reflective Mocks have the advantage of covering all scopes of the methods, they are more prone to mistakes as the method names are passed as strings.
  • FIG. 7 is a data flow diagram showing a Natural Mock Setting Expectations Flow according to an embodiment of the present invention.
  • The Dynamic Mock Builder 530 is used to create new objects in a dynamic assembly. This creates real objects out of incomplete classes (with abstract methods or interfaces). These objects can then be used and passed to the production code, so that when methods are called the Run Time Engine 510 may return fake results to the created methods. These objects are built using the standard Reflection library.
  • The Argument Validation 540 is responsible for verifying that the arguments passed are those that were expected. This is done using a hook that actually does the validation. The arguments passed and those expected are sent to a validation method that checks different attributes of the object. The attributes, which may be of virtually unlimited scope, may, for example, indicate that the objects are the same or that the .Equals( ) method is true. The framework 110 has a predefined group of argument validators, including string comparisons, group and set comparisons, and a validator that checks that the object is one being faked by the framework. The test code 108 can register a customized validator if this is required.
  • When Natural Mocks are used, the arguments passed to the recording method are used to validate the arguments, unless explicitly overridden. The framework 110 also allows setting arguments of the mocked methods. This actually changes the values of the arguments before the actual code is called. This is useful for arguments that are passed by reference, so that their values can be changed before they are returned, and for faking [out] arguments.
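  • A sketch of what registering a customized argument validator might look like; IArgumentValidator, ArgumentValidation.Register and Check are names invented for this illustration rather than an actual API:
    using System;
    // Illustrative validator contract: decides whether a passed argument matches the expected one.
    public interface IArgumentValidator { bool IsValid(object expected, object actual); }
    // Example of a customized validator registered by the test code: compare strings ignoring case.
    public class CaseInsensitiveValidator : IArgumentValidator
    {
        public bool IsValid(object expected, object actual)
            => string.Equals(expected as string, actual as string, StringComparison.OrdinalIgnoreCase);
    }
    // Minimal stand-in for the framework's validation hook (assumed API).
    public static class ArgumentValidation
    {
        private class DefaultValidator : IArgumentValidator
        {
            // Default behavior: same reference, or .Equals( ) is true.
            public bool IsValid(object expected, object actual)
                => ReferenceEquals(expected, actual) || (expected != null && expected.Equals(actual));
        }
        private static IArgumentValidator _validator = new DefaultValidator();
        public static void Register(IArgumentValidator validator) => _validator = validator;
        public static void Check(object expected, object actual)
        {
            if (!_validator.IsValid(expected, actual))
                throw new Exception("Argument mismatch: expected '" + expected + "' but got '" + actual + "'");
        }
    }
    public static class ValidatorExample
    {
        public static void Main()
        {
            ArgumentValidation.Register(new CaseInsensitiveValidator());   // test code plugs in its own rule
            ArgumentValidation.Check("Rusty", "rusty");                    // passes with the custom validator
        }
    }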
  • The run time engine 510 is called from the code weaved into the production code. The Run Time engine 510 checks to see if the specific type, instance and method should be faked. If they are, the code may validate the arguments and return the fake return value. The Run Time Engine 510 checks the arguments to see if a conditional expectation should be used. The engine also calls the argument validation, and when the arguments are not valid the engine may throw an exception. There are cases where throwing the exception is not enough and, when configured correctly, these validation errors may appear at the verifying stage too.
  • Performance is an issue for the Run Time engine 510, as it is run for every method called. One way to solve this is to check first if the method is faked; this returns quickly if no mocks have been created or if the type is not mocked. Only after knowing that the method is mocked are the arguments passed and validated, since passing the arguments can take time, as they all have to be encapsulated within objects. When Natural Mocks are used, the Run Time Engine 510 passes each call to the Natural Mock Recorder.
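  • The fast path described above can be sketched as follows (names assumed): a cheap check of whether anything is mocked at all, and whether this type is mocked, runs first; only on the slow path are the arguments boxed into an object array and validated:
    using System;
    using System.Collections.Generic;
    // Illustrative fast-path check performed on every weaved call.
    public static class RunTimeEngineSketch
    {
        private static readonly HashSet<string> MockedTypes = new HashSet<string>();
        public static void MarkTypeMocked(string typeName) => MockedTypes.Add(typeName);
        public static bool IsMocked(string typeName)
            => MockedTypes.Count > 0 && MockedTypes.Contains(typeName);    // quick exit when nothing is mocked
        // argsFactory is only invoked on the slow path, so the relatively expensive boxing
        // of the arguments is skipped entirely for calls that are not mocked.
        public static bool TryGetFakeReturn(string typeName, string methodName,
                                            Func<object[]> argsFactory, out object fakeReturn)
        {
            fakeReturn = null;
            if (!IsMocked(typeName))
                return false;                 // fast path: no boxing, no validation
            object[] args = argsFactory();    // slow path: box the arguments, validate, look up the expectation
            // ... argument validation and expectation lookup would happen here ...
            return true;
        }
    }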
  • A flow diagram of the Mocked Method Flow described herein is shown in FIG. 8.
  • In order for the runtime engine 510 to map the called code to the correct mock expectation, the Engine 510 may employ the type, method, instance, and the type-generic and method-generic parameters. The last two are for generic-specific code only, and with them it is possible to map the correct expectations. The engine receives this information from the weaver 104 that analyzed the metadata of the code. When a new instance is created and its constructor is called, the Run Time Engine 510 checks if the expectations contain mocks for the new instance. This way the Engine can manage mocking objects that are created after the expectations are set (Future Objects).
  • A static constructor is called once for each type. When a static constructor is called, the Run Time Engine 510 remembers that this was mocked. Then when a method of that type is called and the type is not mocked any more, the static constructor may be called. This ensures that mocking the static constructor in one test will not affect another test.
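  • A sketch of this bookkeeping (the class below is illustrative; re-running a type initializer through reflection is a known trick whose exact behavior can vary between runtimes):
    using System;
    using System.Collections.Generic;
    // Remember which types had their static constructor mocked, and re-run it once the
    // type is no longer mocked, so that one test cannot affect the next.
    public static class StaticConstructorTracker
    {
        private static readonly HashSet<Type> MockedInitializers = new HashSet<Type>();
        public static void RememberMocked(Type type) => MockedInitializers.Add(type);
        public static void RestoreIfNeeded(Type type)
        {
            if (!MockedInitializers.Remove(type))
                return;                                // this type's static constructor was never mocked
            // Re-invoke the static constructor via reflection so the type starts the
            // next test with its real initialization.
            type.TypeInitializer?.Invoke(null, null);
        }
    }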
  • The verifier is called at the end of the test and throws errors when not all the expected calls are made or when an argument validator fails. The verifier can wait until all expected mocks are completed. This is a feature that helps test multi-threaded code, where the tested code runs asynchronously in another thread.
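  • The following is a minimal verifier sketch, assuming hypothetical names (VerifierSketch, a counter of pending expectations) and a simple polling wait for mocks completed on other threads; the actual verifier of the framework 110 may differ:
  • using System;
    using System.Threading;

    public class VerifierSketch
    {
        private int pendingExpectations;      // expected calls not yet made
        private string firstValidationError;  // first argument-validation failure, if any

        public void ExpectCall() { Interlocked.Increment(ref pendingExpectations); }
        public void CallMade() { Interlocked.Decrement(ref pendingExpectations); }

        public void ReportValidationError(string message)
        {
            Interlocked.CompareExchange(ref firstValidationError, message, null);
        }

        // Called at the end of the test; optionally waits for expectations that are
        // completed asynchronously by another thread before failing the test.
        public void Verify(TimeSpan? waitForCompletion = null)
        {
            DateTime deadline = DateTime.UtcNow + (waitForCompletion ?? TimeSpan.Zero);
            while (Volatile.Read(ref pendingExpectations) > 0 && DateTime.UtcNow < deadline)
                Thread.Sleep(10);
            if (firstValidationError != null)
                throw new InvalidOperationException(firstValidationError);
            if (Volatile.Read(ref pendingExpectations) > 0)
                throw new InvalidOperationException("Not all expected calls were made.");
        }
    }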
  • In certain embodiments of the invention, the framework must run in all .NET versions and uses reflection methods to call the newer version API from the old version. Regarding the production code base 106, nothing has to change there. The test code 108 calls the Mock Framework API in order to change the behavior of the production code. The tracer 112 is used to debug and graphically display the methods that are mocked. It is used to analyze the faked and original calls of the production code. Mocking of future objects can be a bit confusing, and the tracer 112 helps track these issues.
  • FIG. 9 shows the Mock framework 110 sending messages to the tracer 112 process.
  • The configurator 114 is used to configure the behavior of the Framework 110. Using the Configurator 114 it is possible to link a code coverage tool with the mock framework 110. This may be done by changing the registry key of the coverage tool to point to the Profile Linker 401. The Linker 401 then loads both the coverage tool and the mock framework 110.
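  • Purely as an illustration of this kind of registry redirection (the CLSID path, the placeholder GUID and the value names below are assumptions, not those of any particular coverage tool), the Configurator 114 might do something along these lines:
  • using Microsoft.Win32;

    public static class ProfilerLinkConfiguratorSketch
    {
        // COM registration of the coverage tool's profiler; the GUID is a placeholder.
        private const string CoverageProfilerKey =
            @"HKEY_CLASSES_ROOT\CLSID\{00000000-0000-0000-0000-000000000000}\InprocServer32";

        public static void LinkCoverageToolToMockFramework(string profileLinkerDllPath)
        {
            // Remember where the coverage profiler really lives so that the
            // Profile Linker 401 can load it alongside the mock framework 110.
            object originalDll = Registry.GetValue(CoverageProfilerKey, "", null);
            Registry.SetValue(CoverageProfilerKey, "LinkedProfiler",
                originalDll == null ? string.Empty : originalDll.ToString());

            // Point the coverage tool's registration at the Profile Linker 401 instead.
            Registry.SetValue(CoverageProfilerKey, "", profileLinkerDllPath);
        }
    }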
  • Advantages of certain embodiments of the present invention include that it is much easier to verify the code base of an application. There is no need to perform pre-compile steps, or to create specially designed code to be able to isolate the code in order for the code to be testable. For example, suppose a developer had the following production code: Dogs.GetDog(“rusty”).Tail.Wag( ) . . . Speed(5);
  • This actually fetches the dog from somewhere on the Internet. Instead of changing the code to be able to insert a fake dog and setting all the expectations on the different methods, certain embodiments of the invention enable the code to be isolated by writing:
  • MockTheFollowing( );
    Dogs.GetDog(“rusty”).Tail.Wag( ).Speed(5);
    CheckArguments( );
    EndMocking( );
  • In contrast, in the absence of the present invention, the following may have been required:
  • 1. Write a framework allowing Dogs to fetch from a fake Internet.
  • 2. Create a fake Internet
  • 3. Set Dogs to use the fake Internet
  • 4. Return a fake Dog when “rusty” is called
  • 5. Return a fake Tail of “rusty”
  • 6. Make sure that the tail is wagging
  • 7. Make sure that the wag was set to correct speed.
  • The test code 108 would look like this (the production code changes are not shown):
  • FakeDogInternet fakeInternet = new FakeDogInternet( );
    Dogs.SetInternet(fakeInternet);
    FakeDog fakeDog= new FakeDog( );
    fakeInternet.ExpectCall(“GetDog”);
    CheckArguments(“rusty”);
    Return(fakeDog);
    FakeTail fakeTail = new FakeTail( );
    fakeDog.ExpectGetProperty(“Tail”);
    Return(fakeTail);
    FakeWagger fakeWagger = new FakeWagger( );
    fakeTail.ExpectCall(“Wag”).Return(fakeWagger);
    fakeWagger.ExpectCall(“Speed”);
    CheckArguments(5);
  • The following interfaces would need to be created:
  • 1. IDogInternet
  • 2. IDog
  • 3. ITail
  • 4. IWagger
  • The following implementation would need to be created (this can be done with a dynamic mock framework 110):
  • 1. FakeDogInternet
  • 2. FakeDog
  • 3. FakeTail
  • 4. FakeWagger
  • The following public method would also need to be present in the production code: Dogs.SetInternet( ).
  • An implementation of an embodiment of the invention for .NET code is now described. Provided is the following static method that returns the current time.
  • Original Code
  • public static DateTime get_Now( )
    {
    // This is just an example..
    return System.DateTicks.ToLocalTime( );
    }

    This is actually compiled to the following ByteCode:
  • call System::get_DateTicks ( )
    stloc.0
    ldloca.s time1
    call instance DateTime::ToLocalTime( )
    ret
  • Before the ByteCode is run, the weaver 104 may add code to the ByteCode that mimics the following code, it being emphasized that the weaver 104 adds code directly to the ByteCode, the original code being unaffected. The equivalent high level language is shown for clarity:
  • public static DateTime get_Now( )
    {
    // Are we mocked?
    if (MockFramework.isMocked(“DateTime.get_Now”))
    {
    // Yes, get the fake return value
    object fakeReturn =
    MockFramework.getReturn(“DateTime.get_Now”);
    // should we Continue with original code?
    if (!MockFramework.shouldCallOriginal(fakeReturn))
    {
    return (DateTime)fakeReturn;
    }
    }
    return System.DateTicks.ToLocalTime( );
    }
  • In practice, the following byte code may be added:
  • ldstr “DateTime.get_Now”
    call MockFramework.isMocked
    brfalse.s label1
    ldstr “DateTime.get_Now”
    call MockFramework.getReturn
    dup
    brtrue.s 0x07
    unbox DateTime
    ldind.i1
    ret
    pop
    label1: call System::get_DateTicks ( )
    stloc.0
    ldloca.s time1
    call instance DateTime::ToLocalTime( )
    ret
  • The stack may be used to keep the fake return object instead of a local variable. This saves the weaver 104 from having to define the variable in the metadata. This helps to test the code. Now that this is in place it is possible to test that the code that counts the number of days in the current month also works for leap years. Following is an example of one test, showing the code to be tested:
  • // List of days in each month
    int[ ] days_in_month = {31,28,31,30,31,30,31,31,30,31,30,31};
    public int CalculateDayInCurrentMonth( )
    {
    DateTime now = DateTime.Now;
    int month = now.get_Month( ); // get_Month( ) returns 1-12
    return days_in_month[month-1];
    }
  • Following this, the user wishes to test that the code works for leap years. DateTime.Now is isolated and made to return a fake date, the leap year date. As the system can be isolated, the MockFramework can be instructed to return a fake date:
  • DateTime leapDate = new DateTime(“29-Feb-2004”);
    // Fake next DateTime.Now, will return 29-Feb-2004
    MockFramework.Mock(DateTime.Now).ToReturn(leapDate);
    // run the method under test
    int actualDays = CalculateDayInCurrentMonth( );
    // make sure that the correct amount was received
    Assert.AreEqual(29, actualDays);
  • Verifying Calls: The mechanism can be used to test that a certain call was actually made. In the previous test DateTime.Now might never even be called. As the Mock framework 110 counts the calls made, it can now be verified that the expected calls were actually made.
  • // fail if we haven't called all expectations
    MockFramework.VerifyThatAllExpectedCallsWhereMade( );
  • Verifying Arguments: Some scenarios require that the arguments that are passed are validated. To support this, the arguments must be sent to the MockFramework for verification. Given Original Code:
  • public static void Log(int severity,string message){
    Console.WriteLine(severity.ToString( )+“ ”+message);
    }
  • the Weaved code 107 may be:
  • public static void Log(int severity,string message)
    {
    if (MockFramework.isMocked(“Logger.Log”))
    {
    // Yes, get the fake return value and validate the arguments
    object fakeReturn =
    MockFramework.getReturn(“Logger.Log”,
    severity, message);
    // should we Continue with original code?
    if (!MockFramework.shouldCallOriginal(fakeReturn))
    {
    return;
    }
    }
    Console.WriteLine(severity.ToString( )+“ ”+message);
    }
  • This helps to test the code. Now that this is in place it is possible to test that our code Logs the correct message. Following is an example of one test.
  • // Fake next Log,
    MockFramework.Mock(Logger.Log(1,“An Error message”)).
    CheckArguments( );
    // run the method under test
    RunAMethodThatCallsLog ( );
    // we will fail if Log is called with other arguments
  • Ref and Out Arguments: Some arguments are changed by the method and are passed back to the caller. The following shows how the code is weaved.
  • Given Original Code:
  • public static bool OpenFile(string fileName, out File file){
    file = new File (fileName);
    return file.Open( );
    }

    the Weaved code 107 may be:
  • public static bool OpenFile(string fileName, out File file)
    {
    if (MockFramework.isMocked(“IO.OpenFile”))
    {
    // Yes, get the fake return value and validate the arguments
    object fakeReturn = MockFramework.getReturn(“IO.OpenFile”,
    fileName, file);
    // fake first arg
    if (MockFramework.shouldChangeArgument(1))
    {
    fileName = (string)MockFramework.getArgument(1);
    }
    // fake 2nd arg
    if (MockFramework.shouldChangeArgument(2))
    {
    file = (File)MockFramework.getArgument(2);
    }
    // should we Continue with original code?
    if (!MockFramework.shouldCallOriginal(fakeReturn))
    {
    return (bool) fakeReturn;
    }
    }
    file = new File (fileName);
    return file.Open( );
    }

    This helps to test the code. It is now possible to isolate the OpenFile. Following is an example of one test:
  • // Fake next OpenFile and open a test File,
    File testFile = new File(“testExample”);
    MockFramework.Mock(IO.OpenFile(“realfile”, out testFile)).
    ToReturn (true).CheckArguments( );
    // run the method under test
    RunAMethodReadsTheFile ( );
    // we will read the fake file and not the real file, but fail if the real file was not passed
  • Modern languages support the notion of Generics. Using Generics allows the same logic to run with different types; a Stack is a classic example. In order to support mocking of generic code, information about the generic parameters must be passed to the Mock framework 110. There may be two kinds of generic parameters: Type Generics, which are types that stay the same for all methods of the type; and Method Generics, which are types that stay the same for one method. These types are passed to the MockFramework.getReturn method.
  • The Original Code may be:
  • public static void DoSomething<MethodType>(MethodType
    action,ClassType message){
    action.Perform(message);
    }

    The Weaved code 107 may include:
  • public static void DoSomething<MethodType>(MethodType
    action,ClassType message)
    {
    if (MockFramework.isMocked(“Namespace.DoSomething”))
    {
    Type[ ] typeGenerics = new Type[ ] { typeof(ClassType) };
    Type[ ] methodGenerics = new Type[ ] { typeof(MethodType) };
    // Yes, get the fake return value and validate the arguments
    object fakeReturn =
    MockFramework.getReturn(“Namespace.DoSomething”,
    typeGenerics, methodGenerics, action, message);
    // should we Continue with original code?
    if (!MockFramework.shouldCallOriginal(fakeReturn))
    {
    return;
    }
    }
    action.Perform(message);
    }
  • Suppose the user has a class, Base, with a method Count( ), and also a class, Derived, that is derived from Base. When calling the Derived.Count( ) method, the user is actually calling the Base.Count method. In order to be able to mock Count( ) only for the derived class, it is necessary to know on which class the method is being called. This is why a context with the actual instance is passed to the Mock framework 110. The Weaved code 107 may now look like this:
  • public int Count( )
    {
    if (MockFramework.isMocked(“Base.Count”))
    {
    // pass this so we can tell if this is being called
    // from Base or Derived
    object fakeReturn = MockFramework.getReturn(“Base.Count”,
    this);
    // should we Continue with original code?
    if (!MockFramework.shouldCallOriginal(fakeReturn))
    {
    return (int)fakeReturn;
    }
    }
    // the original body of Count( ) follows here unchanged
    }
  • According to some embodiments of the present invention, in order to fake pinvoke methods, which are native calls, when a module loads, for each native call line in the metadata, a new method may be added in the metadata that points to the original method (e.g., for CreateProcess add a new method >>CreateProcess). Optionally, “>>” may be used in the name, as it is legal in the metadata but not in higher level languages. The original line may then be changed to point to a newly allocated piece of code that simply calls the new method just defined. In this manner, all calls to the pinvoke method will now be directed to the new method. The new method can now be faked as described herein in relation to normal methods.
  • According to some embodiments of the present invention, when a static constructor is called, it may be saved. Subsequently, when a clean-up of the system between tests is requested or desired, all the saved static constructors may be re-invoked. For loaded types that do not have static constructors, all the static fields may be reset instead (a minimal sketch of such a clean-up appears after the list below). In order to make sure that all types have been identified, all types loaded in the system may be enumerated. According to further embodiments, it is possible to tell whether a type has been loaded by:
      • i. a signal from a profiler,
      • ii. checking the address of the constructor to see if it points to the JIT or the real method, and/or
      • iii. using reflection to see if the type, or the types used by the type, are already loaded, for example by using reflection-only loading.
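  • A minimal clean-up sketch, assuming only public reflection APIs (Type.TypeInitializer and FieldInfo.SetValue) rather than profiler support, is as follows; the class name StaticStateCleanerSketch is hypothetical:
  • using System;
    using System.Reflection;

    public static class StaticStateCleanerSketch
    {
        // Re-initializes the static state of one loaded type between tests.
        public static void Reset(Type type)
        {
            ConstructorInfo cctor = type.TypeInitializer;
            if (cctor != null)
            {
                // Re-invoke the static constructor that was saved/mocked during the test.
                cctor.Invoke(null, null);
                return;
            }
            // No static constructor: reset all static fields to their default values.
            foreach (FieldInfo field in type.GetFields(
                BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic))
            {
                if (field.IsLiteral || field.IsInitOnly)
                    continue;   // constants and readonly fields cannot be reset this way
                field.SetValue(null, field.FieldType.IsValueType
                    ? Activator.CreateInstance(field.FieldType)
                    : null);
            }
        }
    }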
  • After an instance of a given method or function is mocked, it may be left in a bad or improper state and, therefore, should not be used in the system after the test. Such methods or functions may be referred to as “stale mocks”. To ensure that stale mocks are not re-used by the system, a list of used mock instances may be stored. Whenever an instance is used, it may be tested against that list, and the test may fail if the instance is a stale mock, i.e. on the list.
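  • The stale-mock list might be kept roughly as follows; the StaleMockRegistrySketch name and its two methods are illustrative assumptions only:
  • using System;
    using System.Collections.Generic;

    public static class StaleMockRegistrySketch
    {
        // Instances whose behavior was mocked in an earlier test and must not be reused.
        private static readonly HashSet<object> staleMocks = new HashSet<object>();

        public static void MarkStale(object mockedInstance)
        {
            staleMocks.Add(mockedInstance);
        }

        // Called whenever an instance is used; fails if the instance is a stale mock.
        public static void AssertNotStale(object instance)
        {
            if (staleMocks.Contains(instance))
                throw new InvalidOperationException(
                    "This instance was mocked in a previous test and is stale.");
        }
    }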
  • It is appreciated that software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs. Components described herein as software may, alternatively, be implemented wholly or partly in hardware, if desired, using conventional techniques.
  • Features of the present invention which are described in the context of separate embodiments may also be provided in combination in a single embodiment. Conversely, features of the invention which are described for brevity in the context of a single embodiment may be provided separately or in any suitable subcombination.

Claims (24)

1. A system for providing testing for a given software application comprising a plurality of software components, at least some of which are coupled, said system comprising:
a first processor functionally associated with a digital memory, which digital memory stores processor executable software testing code adapted to cause one or more second processors to:
at least partially isolate from within the given software application, during runtime, at least one coupled software component which performs a given function by introducing, prior to execution of the software application, code elements for runtime access of application points associated with the at least one coupled software component of the given software application, such that at least one of the introduced code elements provides the software testing code access between first utilizing software components to first utilized software components during runtime; wherein the at least one coupled software component is a generic software component;
retrieve generic parameters relating to the at least one coupled software component;
map a plurality of expectations relating to execution of the given software application, each mapped expectation comprising an expected call to an individual component from among the plurality of software components and an identity of the individual component, the retrieved generic parameters serving as keys for mapped expectations relating to the at least one coupled software component; and
test, by use of the second processors running the testing code, the given software application by imposing a fake behavior on the at least one coupled software component, wherein imposing a fake behavior includes removing or replacing an expected behavior of the at least one coupled software component, during runtime, by use of the access provided by the at least one of the introduced code elements.
2. The system of claim 1, wherein the generic parameters comprise type generic and method generic.
3. The system according to claim 1, wherein the at least one coupled software component is type generic.
4. The system according to claim 1, wherein the at least one coupled software component is method generic.
5. The system according to claim 1, wherein a utilizing software component of the first utilizing software components comprises test code and wherein the code elements are designed to generate a plurality of testing scenarios for said test code by suitably controlling access of the test code to a utilized software component of the first utilized software components.
6. The system according to claim 1, wherein said processor executable software testing code is further adapted to cause the one or more second processors to, after the given software application has been loaded into directly accessible memory by an operating system from an executable file on disk and before the given software application has been run, parse the software application and add the code elements to the parsed software application, thereby providing an at least partially isolatable weaved application.
7. The system according to claim 1, wherein the given software application does not include interfaces for injection of alternate functions.
8. The system according to claim 1, wherein the software testing code is further adapted to cause the second processors to track objects returned from mocked calls and to mock methods of the tracked objects, so as to facilitate a chained mock response.
9. The system according to claim 1, where the given software application further includes second utilizing software components including metadata pointing to second utilized software components and said processor executable software testing code is further adapted to cause the one or more second processors to modify the metadata to point to access control code designed to control access of the second utilizing software components to the second utilized software components.
10. A system for providing testing for a given software application comprising a plurality of software components, at least some of which are coupled, said system comprising:
a first processor functionally associated with a digital memory, which digital memory stores processor executable software testing code adapted to cause one or more second processors to:
at least partially isolate from within the given software application, during runtime, at least one coupled software component which performs a given function by introducing, prior to execution of the software application, code elements for runtime access of application points associated with the at least one coupled component of the given software application;
test, by use of the second processors running the testing code, the given software application by removing or replacing a behavior of the at least partially isolated coupled software component, during runtime, by use of the access provided by the at least one of the introduced code elements; and
map a plurality of expectations, each of which comprises a string identifying an individual component from among the plurality of software components, which individual components are software components called during execution of the given software application.
11. The system according to claim 10, wherein said processor executable software testing code is adapted to cause the one or more second processors to map expectations by recording actual calls to the individual components.
12. The system according to claim 11, where said processor executable software testing code is further adapted to cause the one or more second processors to record arguments passed to the individual components with the actual calls and add the recorded arguments to the generated expectations.
13. The system according to claim 10, where at least one of the individual components calls another software component during execution of the at least one of the individual components, thereby defining a chain of calls during execution of the given software application.
14. The system according to claim 13, wherein said processor executable software testing code is further adapted to cause the one or more second processors to generate a set of n expectations, by recording the chain of calls during execution of the given software application.
15. The system according to claim 10, wherein the at least one coupled software component is a generic software component and imposing a fake behavior on the at least one coupled software component includes retrieving generic parameters relating to the at least one coupled software component.
16. The system according to claim 15, wherein the at least one coupled software component is type generic.
17. The system according to claim 15, wherein the at least one coupled software component is method generic.
18. The system according to claim 10, wherein the given software application does not include interfaces for injection of alternate functions.
19. The system according to claim 10, where the given software application further includes utilizing software components including metadata pointing to utilized software components and said processor executable software testing code is further adapted to cause the one or more second processors to modify the metadata to point to access control code designed to control access of the utilizing software components to the utilized software components.
20. A system for providing testing for a given software application comprising a plurality of software components, at least some of which are coupled, said system comprising:
a first processor functionally associated with a digital memory, which digital memory stores processor executable software testing code adapted to cause one or more second processors to:
at least partially isolate from within the given software application, during runtime, at least one coupled software component which performs a given function by introducing, prior to execution of the software application, code elements for runtime access of application points associated with the at least one coupled component of the given software application, such that at least one of the introduced code elements provides the software testing code access between first utilizing software components to first utilized software components during runtime; and
test, by use of the second processors running the testing code, the given software application, by imposing a fake behavior on the at least one coupled software component, wherein imposing a fake behavior includes removing or replacing an expected behavior of the at least one coupled software component, during runtime, by use of the access provided by the at least one of the introduced code elements;
wherein the given software application further includes second utilizing software components including metadata pointing to second utilized software components and said processor executable software testing code is further adapted to cause the one or more second processors to modify the metadata to point to access control code designed to control access of the second utilizing software components to the second utilized software components.
21. The system according to claim 20, wherein the at least one coupled software component is a generic software component and imposing a fake behavior on the at least one coupled software component includes retrieving generic parameters relating to the at least one coupled software component.
22. The system according to claim 20, wherein the at least one coupled software component is type generic.
23. The system according to claim 20, wherein the given software application does not include interfaces for injection of alternate functions.
24. The system according to claim 20, wherein the software testing code is further adapted to cause the second processors to track objects returned from mocked calls and to mock methods of the tracked objects, so as to facilitate a chained mock response.
US16/446,692 2006-09-25 2019-06-20 Methods and systems for isolating software components Abandoned US20190370150A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/446,692 US20190370150A1 (en) 2006-09-25 2019-06-20 Methods and systems for isolating software components

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US82674906P 2006-09-25 2006-09-25
PCT/IL2007/001152 WO2008038265A2 (en) 2006-09-25 2007-09-20 Method and system for isolating software components
US44294809A 2009-03-25 2009-03-25
US13/706,711 US9251041B2 (en) 2006-09-25 2012-12-06 Method and system for isolating software components
US15/005,145 US10078574B2 (en) 2006-09-25 2016-01-25 Methods and systems for isolating software components
US15/950,202 US20180232298A1 (en) 2006-09-25 2018-04-11 Methods and systems for isolating software components
US16/446,692 US20190370150A1 (en) 2006-09-25 2019-06-20 Methods and systems for isolating software components

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/950,202 Continuation US20180232298A1 (en) 2006-09-25 2018-04-11 Methods and systems for isolating software components

Publications (1)

Publication Number Publication Date
US20190370150A1 true US20190370150A1 (en) 2019-12-05

Family

ID=55961794

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/005,145 Active US10078574B2 (en) 2006-09-25 2016-01-25 Methods and systems for isolating software components
US15/950,202 Abandoned US20180232298A1 (en) 2006-09-25 2018-04-11 Methods and systems for isolating software components
US16/446,692 Abandoned US20190370150A1 (en) 2006-09-25 2019-06-20 Methods and systems for isolating software components

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US15/005,145 Active US10078574B2 (en) 2006-09-25 2016-01-25 Methods and systems for isolating software components
US15/950,202 Abandoned US20180232298A1 (en) 2006-09-25 2018-04-11 Methods and systems for isolating software components

Country Status (1)

Country Link
US (3) US10078574B2 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8893086B2 (en) 2009-09-11 2014-11-18 International Business Machines Corporation System and method for resource modeling and simulation in test planning
US8539438B2 (en) 2009-09-11 2013-09-17 International Business Machines Corporation System and method for efficient creation and reconciliation of macro and micro level test plans
US8527955B2 (en) 2009-09-11 2013-09-03 International Business Machines Corporation System and method to classify automated code inspection services defect output for defect analysis
US10235269B2 (en) 2009-09-11 2019-03-19 International Business Machines Corporation System and method to produce business case metrics based on defect analysis starter (DAS) results
US8495583B2 (en) * 2009-09-11 2013-07-23 International Business Machines Corporation System and method to determine defect risks in software solutions
US8578341B2 (en) 2009-09-11 2013-11-05 International Business Machines Corporation System and method to map defect reduction data to organizational maturity profiles for defect projection modeling
US9697109B2 (en) * 2014-06-26 2017-07-04 Parasoft Corporation Dynamically configurable test doubles for software testing and validation
WO2016167760A1 (en) * 2015-04-15 2016-10-20 Hewlett Packard Enterprise Development Lp Code coverage information
US10230712B2 (en) * 2016-09-12 2019-03-12 Microsoft Technology Licensing, Llc Binary experimentation on running web servers
US10853057B1 (en) 2017-03-29 2020-12-01 Amazon Technologies, Inc. Software library versioning with caching
EP3638649A4 (en) 2017-06-16 2021-03-17 The Research Foundation for The State University of New York Anti-fungals compounds targeting the synthesis of fungal sphingolipids
US10346293B2 (en) * 2017-10-04 2019-07-09 International Business Machines Corporation Testing pre and post system call exits
CN108132876B (en) * 2017-12-07 2021-03-19 中国航发控制系统研究所 Embedded software object code unit testing method based on injection mode
US10565090B1 (en) * 2018-01-03 2020-02-18 Amazon Technologies, Inc. Proxy for debugging transformed code
US20190310933A1 (en) * 2018-04-10 2019-10-10 Ca, Inc. Externalized definition and reuse of mocked transactions
US10798464B1 (en) 2018-04-27 2020-10-06 Amazon Technologies, Inc. Streaming delivery of client-executable code
US11550899B2 (en) * 2019-07-22 2023-01-10 Cloud Linux Software Inc. Systems and methods for hardening security systems using live patching
US10963275B1 (en) 2019-10-31 2021-03-30 Red Hat, Inc. Implementing dependency injection via direct bytecode generation
US11360880B1 (en) 2020-05-18 2022-06-14 Amazon Technologies, Inc. Consistent replay of stateful requests during software testing
US11567857B1 (en) 2020-05-18 2023-01-31 Amazon Technologies, Inc. Bypassing generation of non-repeatable parameters during software testing
US11210206B1 (en) 2020-05-18 2021-12-28 Amazon Technologies, Inc. Spoofing stateful dependencies during software testing
US11775417B1 (en) 2020-05-18 2023-10-03 Amazon Technologies, Inc. Sharing execution states among storage nodes during testing of stateful software

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5652869A (en) 1991-03-07 1997-07-29 Digital Equipment Corporation System for executing and debugging multiple codes in a multi-architecture environment using jacketing means for jacketing the cross-domain calls
EP0686916A1 (en) 1994-06-07 1995-12-13 Digital Equipment Corporation Method and apparatus for testing software
JP3402527B2 (en) 1994-10-28 2003-05-06 セイコーインスツルメンツ株式会社 Reflective color image projector
CA2199108C (en) * 1996-03-05 2002-04-23 Hirotoshi Maegawa Parallel distributed processing system and method of same
US7058822B2 (en) 2000-03-30 2006-06-06 Finjan Software, Ltd. Malicious mobile code runtime monitoring system and methods
US6164841A (en) 1998-05-04 2000-12-26 Hewlett-Packard Company Method, apparatus, and product for dynamic software code translation system
AU6782800A (en) 1999-08-16 2001-07-03 Z-Force Corporation System of reusable software parts for implementing concurrency and hardware access, and methods of use
US6701514B1 (en) 2000-03-27 2004-03-02 Accenture Llp System, method, and article of manufacture for test maintenance in an automated scripting framework
US20020199173A1 (en) 2001-01-29 2002-12-26 Matt Bowen System, method and article of manufacture for a debugger capable of operating across multiple threads and lock domains
JP3779665B2 (en) 2002-09-25 2006-05-31 富士通株式会社 Test support program
US7594111B2 (en) 2002-12-19 2009-09-22 Massachusetts Institute Of Technology Secure execution of a computer program
US7568188B2 (en) 2003-03-07 2009-07-28 Microsoft Corporation Method for testing a software shim
US20040220947A1 (en) * 2003-05-02 2004-11-04 International Business Machines Corporation Method and apparatus for real-time intelligent workload reporting in a heterogeneous environment
US20050039171A1 (en) * 2003-08-12 2005-02-17 Avakian Arra E. Using interceptors and out-of-band data to monitor the performance of Java 2 enterprise edition (J2EE) applications
US7389223B2 (en) 2003-09-18 2008-06-17 International Business Machines Corporation Method and apparatus for testing a software program using mock translation input method editor
EP1680741B1 (en) 2003-11-04 2012-09-05 Kimberly-Clark Worldwide, Inc. Testing tool for complex component based software systems
US7624449B1 (en) 2004-01-22 2009-11-24 Symantec Corporation Countering polymorphic malicious computer code through code optimization
US7263462B2 (en) 2004-07-30 2007-08-28 Ailive, Inc. Non-disruptive embedding of specialized elements
US20060137016A1 (en) 2004-12-20 2006-06-22 Dany Margalit Method for blocking unauthorized use of a software application
US20060155525A1 (en) 2005-01-10 2006-07-13 Aguilar Maximino Jr System and method for improved software simulation using a plurality of simulator checkpoints
JP2006227681A (en) 2005-02-15 2006-08-31 Matsushita Electric Ind Co Ltd Debug device, debug method and program
US7721272B2 (en) * 2005-12-12 2010-05-18 Microsoft Corporation Tracking file access patterns during a software build
US8352923B2 (en) 2006-09-25 2013-01-08 Typemock Ltd. Method and system for isolating software components
US8370941B1 (en) * 2008-05-06 2013-02-05 Mcafee, Inc. Rootkit scanning system, method, and computer program product

Also Published As

Publication number Publication date
US20180232298A1 (en) 2018-08-16
US20160140021A1 (en) 2016-05-19
US10078574B2 (en) 2018-09-18

Similar Documents

Publication Publication Date Title
US20190370150A1 (en) Methods and systems for isolating software components
US8352923B2 (en) Method and system for isolating software components
US8954929B2 (en) Automatically redirecting method calls for unit testing
US8171460B2 (en) System and method for user interface automation
US6026237A (en) System and method for dynamic modification of class files
Feldthaus et al. Tool-supported refactoring for JavaScript
US9471282B2 (en) System and method for using annotations to automatically generate a framework for a custom javaserver faces (JSF) component
US5701408A (en) Method for testing computer operating or application programming interfaces
CN103970659B (en) Android application software automation testing method based on pile pitching technology
US10331425B2 (en) Automated source code adaption to inject features between platform versions
CN109614165B (en) Multi-version parallel operation method and device for COM (component object model) component
US20030191864A1 (en) Method and system for detecting deprecated elements during runtime
US20080022260A1 (en) Method for accessing internal states of objects in object oriented programming
US20080301636A1 (en) Per-instance and per-class aspects
Abercrombie et al. jContractor: Bytecode instrumentation techniques for implementing design by contract in Java
Smith et al. Value-dependent information-flow security on weak memory models
WO2017130087A1 (en) Methods and systems for isolating software components
Arlt et al. Trends in model-based gui testing
Tan Security analyser tool for finding vulnerabilities in Java programs
Marick Generic Coverage Tool (GCT) User’s Guide
Tong Enjoying Web Development with Tapestry
Vincenzi et al. JaBUTi–Java Bytecode Understanding and Testing
Gunadi et al. Formal certification of android bytecode
Cutler et al. Practical Programming
PARRISH et al. Applying conventional unit testing techniques to abstract data type operations

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION