EP1780946B1 - Consensus testing of electronic system - Google Patents
- Publication number
- EP1780946B1 (application EP05110181A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- value
- consensus
- test
- electronic system
- tester
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/50—Testing arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/30—Profiles
Definitions
- the invention relates to a tester for testing an electronic system, a computer program product, and a method for testing an electronic system.
- The goal of protocol conformance testing is to ensure that different products, often from different vendors, are interoperable. That is, they speak the same language and can work together. Conformance testing is described in Ian Sommerville: Software Engineering (7th Edition), 2004, ISBN 0321210263. Conformance testing usually has the following steps (Sommerville, page 539):
- Model-based testing is another way to create conformance tests. Model-based testing is described in M. Blackburn, R. Busser, A. Nauman: Why Model-Based Test Automation is Different and What You Should Know to Get Started, 2004, Software Productivity Consortium. Model-based testing usually has the following steps:
- the expected outcome for each test case is determined by executing the model.
- a problem is that the effort of creating the model is comparable to the effort of creating the actual conformance test cases. Still, this method can be used to create a larger number of test cases.
- the accuracy and relevance of test cases depends solely on the model, which adds a level of indirection since test cases are not created directly.
- test case creation and test execution may also be interleaved, so that test cases are generated and run at the same time.
- the number of test cases is not pre-defined, since the responses from the tested implementation affect the upcoming test cases. This is called exploration testing, and it is described in J. Helovuo, S. Leppänen: Exploration Testing, Second International Conference on Application of Concurrency to System Design, 2001 .
- in regression testing, outputs gathered when running an earlier version are used as the expected outputs for a newer version (Sommerville, page 564). The purpose of regression testing is to verify that the changes introduced in the newer version have not caused any unintended changes in behaviour.
- in back-to-back testing, two implementations of the same protocol are given identical inputs to ensure that their behaviour is identical. However, both regression testing and back-to-back testing are limited to situations where the results of two implementations are compared to pinpoint differences between them.
- the present invention seeks to provide an improved tester for testing an electronic system, an improved computer program product, and an improved method for testing an electronic system.
- a computer program product as specified in claim 15 comprising software modules, which, when run in a computer, constitute the functionality/structures of the tester for testing an electronic system.
- the invention provides several advantages.
- the expected output is not defined beforehand, but it is collected from the observed behaviour.
- the number of test cases may be high, since creation of the test cases for the preliminary test is relatively cheap. Also, any repeatable set of test cases may be used as the basis of consensus testing. A higher number of test cases may provide a higher coverage than a lower number.
- Consensus testing does not require the tester to have a model of the electronic system.
- the different electronic systems in effect form the model.
- Creation of the consensus test material can be conducted by a tester with reasonable knowledge of the application domain of the electronic system without expert mathematical or modelling skills.
- a tester can compare the behaviour of his/her implementation with other implementations without having direct access to these implementations. This may decrease the need for interoperability events where live systems are brought together and compared with each other.
- consensus voting and verdict assignment may be done separately from the design and/or execution of preliminary test cases, even off-line by using only the recorded traffic. No tested implementation needs to be available at this point.
- the tester 112 implements a novel principle of consensus testing. With consensus testing, protocol conformance, protocol interoperability and other testing goals may be achieved through black-box testing.
- the consensus testing technique may create test material for a protocol by comparing the behaviour of different electronic systems, such as different implementations of a protocol.
- the implementation may be a hardware device, a software program, a simulation, an emulator, an executable model, etc., or a system made of such parts.
- Consensus testing may be suitable for assessing protocols where the protocol implementation does not have a large number of alternative strategies to respond to a set of input.
- TLS (Transport Layer Security) is described in RFC 2246, The TLS Protocol, Version 1.0.
- many other security, authentication and similar protocols have comparable handshake functions.
- request-reply-like protocols may be well suited for consensus testing: if requests are identical, replies should be identical or almost identical.
- the tester 112 includes a traffic interface 114 to receive traffic 102 from a test of an electronic system 100.
- the test for the electronic system 100 may be performed in real time, or the traffic 102 may have been recorded earlier from a test of the electronic system 100.
- the tester 112 also includes an element comparator 118 to extract a value from an element of the traffic 102 and to compare the extracted element value with an element value 110 obtained from another test of another electronic system 104, 106, 108.
- the other test for the other electronic system 104, 106, 108 may have been performed earlier with the tester 112. The other test may also be performed later, as the traffic 102 from the test of the electronic system 100 may be saved and processed only after the other test has been performed.
- the other test for the other electronic system 104, 106, 108 may also be performed with another tester, and the relevant information from the other test may be imported to the tester 112 testing the electronic system 100 by any known data transfer means, such as data communication means or transferable data storage means. These embodiments will be described in more detail later.
- the tester 112 also includes a test result generator 122 to generate consensus information 124 on the interoperability of the electronic system 100, based on comparing 120 the extracted element values of the electronic system 100 with the element values obtained from the other test of the other electronic system 104, 106, 108.
- traffic from a test of an electronic system is received.
- a value from an element of the traffic is extracted in 404 and the extracted element value is compared in 406 with an element value obtained from another test of another electronic system.
- operations 404, 406 may be repeated until all elements are processed.
- consensus information on the interoperability of the electronic system is generated based on comparing the extracted element values of the electronic system with the element values obtained from the other test of the other electronic system.
- the method ends in 410.
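- The operations 402-408 above can be sketched as follows. This is a hypothetical illustration, not the patented implementation; all function and variable names are assumptions made for the example.

```python
# Sketch of the claimed method: receive traffic, extract compared-element
# values (404), compare them with values from another system's test run
# (406), and generate consensus information (408).

def consensus_compare(traffic_elements, reference_values):
    """Compare extracted element values against values from another test.

    traffic_elements: {element_name: value} extracted from the received traffic.
    reference_values: {element_name: value} obtained from the other system.
    Returns per-element match flags as simple consensus information.
    """
    consensus = {}
    for name, value in traffic_elements.items():  # 404-406, repeated per element
        if name in reference_values:
            consensus[name] = (value == reference_values[name])
    return consensus  # 408: consensus information on interoperability

# Example: two HTTP servers answering the same request.
info = consensus_compare({"status": "200", "version": "HTTP/1.1"},
                         {"status": "400", "version": "HTTP/1.1"})
```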
- Embodiments of the tester 112 may be applied to the method as well.
- the method may additionally include one or more operations, or some part of the seven operations that will be explained next.
- a set of test cases is created. It may be that only the input for each test case has to be defined. The expected outcome for each test case does not need to be defined. This makes the creation of a test case easy and makes it possible to have a larger number of test cases compared to the traditional conformance testing.
- test cases may be run against different implementations of the protocol in question. Different versions of the same implementation or single implementation configured differently may also be used.
- the data sent or received for each test case is recorded. Sometimes only a portion of the traffic may be stored, for example just the output from the tested implementation or a portion of the output.
- the results from different preliminary test runs may be collected into a single repository.
- the repository may contain minimally the recorded traffic for each test case.
- the compared elements used in the consensus calculations may be decided based on the recorded data and other available information, if any. Alternatively, the compared elements may have been decided already before the preliminary test runs. This enables only the compared elements to be recorded.
- the compared elements are the basis of the consensus testing. Different compared element values from different implementations may indicate a meaningful difference in the behaviour between the tested implementations; accordingly, the same compared element values may indicate a similarity between the implementations.
- the element to be compared should remain constant for a test case from one test run to another against the same tested implementation. However, when comparing different implementations, the compared element may show variation if the implementations have some differences in their behaviour.
- a time stamp may not be a good candidate since time is constantly changing unless the clock can be set to a fixed value for testing.
- a random value element is another example of an element that may not be suited for a compared element.
- Examples of potential compared elements are message type identifiers, status codes or error codes. The presence or absence of a specific field in a received message may also be a good compared element. Sometimes only a type of a data field may be used as the compared element. The comparison may also take place on message level without looking into the actual contents of the messages.
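- For instance, extracting the HTTP status code as a compared element might look like the following sketch. The regular expression is an assumption for this example, not part of the patent.

```python
import re

def extract_status_code(response_line):
    """Return the three-digit status code from an HTTP status line, or None.

    Matches the fixed version prefix (e.g. "HTTP/1.1") followed by the code.
    """
    match = re.match(r"HTTP/\d+\.\d+\s+(\d{3})", response_line)
    return match.group(1) if match else None
```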
- test cases may use the same compared data elements, or the test cases may have different compared data elements.
- the compared elements may be decided manually by applying knowledge of the protocol and by observing the recorded test material. Automation may be used to pick out the elements that have shown a suitable level of variation. The final set of data elements may be finally decided by using user judgment on the results from the automation.
- the selection of compared elements is either left totally to an automated system, or an automation system provides suggestions for the user or consults the user on some issues.
- the automation analysis may be based on the frequency of different elements in the recorded traffic, for example.
- a vote for the consensus value for each test case may be cast.
- a vote may be given to each compared element value recorded from different implementations.
- the compared element value used by most of the implementations gets most of the votes.
- the value is the aggregate from all of the compared elements.
- the consensus strength for each test case may be given based on the number of values getting votes:
- a strong consensus test case may reflect a situation where it would be legal for the implementation to behave differently, and the tested ones just happened to behave identically. In that case, the user may choose to either remove this test case or declare that there is no consensus in this test case.
- test case verdicts for each implementation may be given by using the information about compared elements and the consensus strengths.
- a test case may be given the following verdicts:
- the results may include the number of votes received by the compared element value used by the implementation. The higher the value, the higher the confidence that the implementation is interoperable.
- test case verdict may be given such that the test case is passed only if all sub-verdicts are passed, and the test case is inconclusive if any of the sub-verdicts is inconclusive. Otherwise the compiled verdict of the test case is failed.
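- The compiled-verdict rule above can be sketched as follows (illustrative names, not from the patent):

```python
def compile_verdict(sub_verdicts):
    """Compile a test case verdict from sub-verdicts: passed only if all
    sub-verdicts pass, inconclusive if any sub-verdict is inconclusive,
    otherwise failed."""
    if all(v == "pass" for v in sub_verdicts):
        return "pass"
    if any(v == "inconclusive" for v in sub_verdicts):
        return "inconclusive"
    return "fail"
```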
- Operations 4, 5 and 6 may be repeated several times to create a set of compared elements best meeting the testing goals.
- test results may be packaged to form a consensus testing material.
- the testing material enables the later use of consensus testing without repeating all operations.
- test material may contain the following information:
- the test material may contain all of the test cases used in the preliminary test run or only a subset of them (only test cases with strong consensus, for example).
- test material is created for HTTP (HyperText Transfer Protocol) server testing.
- HTTP HyperText Transfer Protocol
- the results do not reflect results from real servers but are crafted to serve as good sample material.
- the exemplary test material is brief; the number of test cases may be much higher in reality.
- test case input is an HTTP GET request; test case #0 is perfectly valid and should fetch the index page of the server.
- test cases #1, #2, #3 and #4 contain different version values, which may or may not be valid.
- Table 1: Preliminary test case inputs
  Test case   Input
  #0          GET / HTTP/1.0\n\r
  #1          GET / HTTP/1.00\n\r
  #2          GET / HTTP/1.01\n\r
  #3          GET / HTTP/01.0\n\r
  #4          GET / HTTP/11.0\n\r
- Tables 2, 3, 4 and 5 show the results of four different HTTP servers (A, B, C and D).
- Table 2: Test results of server A
  #0   HTTP/1.1 200 OK
  #1   HTTP/1.1 400 bad-request
  #2   HTTP/1.1 400 bad-request
  #3   HTTP/1.1 200 OK
  #4   HTTP/1.1 200 OK
- Table 3: Test results of server B
  #0   HTTP/1.1 200 OK
  #1   HTTP/1.1 400 Bad request
  #2   HTTP/1.1 400 Bad request
  #3   HTTP/1.1 200 OK
  #4   HTTP/1.1 200 OK
- Table 4: Test results of server C
  #0   HTTP/1.1 200 OK
  #1   HTTP/1.1 400 bad-request
  #2   HTTP/1.1 200 OK
  #3   HTTP/1.1 400 bad-request
  #4   HTTP/1.1 400 bad-request
- Table 5: Test results of server D
  #0   HTTP/1.1 200 OK
  #1   HTTP/1.1 400 Bad request
  #2   HTTP/1.1 400 Bad request
  #3   HTTP/1.1 200 OK
  #4   HTTP/1.1 400 bad-request
- the proper element to be compared is a three-digit status code, which is the value after the fixed part "HTTP/1.1".
- the status code expresses the status of the request in a compact form.
- test cases #0, #1, #2 and #3 give strong consensus, although in test cases #2 and #3 the vote is not unanimous.
- the limit used to declare strong consensus may be such that 75% or more of the votes must be cast for the same value.
- Table 6 compares the test results.
  Table 6: Comparison of test results
  Test case   A     B     C     D     Votes for "200"   Votes for "400"   Consensus strength
  #0          200   200   200   200   4                 0                 Strong
  #1          400   400   400   400   0                 4                 Strong
  #2          400   400   200   400   1                 3                 Strong
  #3          200   200   400   200   3                 1                 Strong
  #4          200   200   400   400   2                 2                 No consensus
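- The vote counting and strength assignment behind Table 6 can be sketched as follows. The 75% threshold follows the example above; the data is the Table 6 comparison of servers A-D, and the function names are assumptions for this example. (With only two observed values per case, the sketch collapses the weak-consensus case into "no consensus".)

```python
from collections import Counter

def consensus_strength(values, strong_limit=0.75):
    """Give one vote per implementation; return (votes, strength)."""
    votes = Counter(values)
    top = votes.most_common(1)[0][1]
    if top / len(values) >= strong_limit:
        return votes, "strong"
    return votes, "no consensus"  # weak consensus omitted in this 2-value example

results = {  # test case -> compared element values from servers A, B, C, D
    0: ["200", "200", "200", "200"],
    1: ["400", "400", "400", "400"],
    2: ["400", "400", "200", "400"],
    3: ["200", "200", "400", "200"],
    4: ["200", "200", "400", "400"],
}
```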
- Consensus testing may provide quantitative information about the behaviour of the tested electronic system in the form of consensus level.
- the consensus level indicates how many votes the element value obtained from the test of the electronic system received in the consensus test material.
- the consensus level of test case #2 for server C is only 1.
- the consensus level is a numerical value ready for further processing. For example a vendor of a product may follow the consensus level of their product as new releases of the products are tested, but also when new consensus test material becomes available. The new material contains information about the behaviour of new versions of other implementations of the system.
- test cases #0, #1, #2 and #3 can be used to declare pass or fail, but test case #4 is inconclusive for all.
- Table 7 summarizes the results. Overall, servers A, B and D passed all the test cases which had consensus, while server C failed test cases #2 and #3.
  Table 7: Verdicts for test cases
  Test case   Server A   Server B   Server C   Server D
  #0          Pass       Pass       Pass       Pass
  #1          Pass       Pass       Pass       Pass
  #2          Pass       Pass       Fail       Pass
  #3          Pass       Pass       Fail       Pass
  #4          Inco       Inco       Inco       Inco
- the material may be packaged for testing of HTTP servers.
- the package may contain the following parts:
- Table 8 shows the packaged data.
- test case #4 could be retained for other testing purposes than consensus testing.
- the tester 112 may include some additional components: a test case generator 208, a preliminary test driver 202, a test data recorder 212, an element analyser 218, a consensus vote calculator 222, a test verdict assigner 224, a test material packager 228, and a consensus test driver 232.
- All of the components may be individual entities or some of them may be integrated to form larger entities. Ultimately they all may form the tester 112. Parts from other independent systems may also act in roles of the tester 112 components.
- the test case generator 208 generates the preliminary test cases.
- the test case generator 208 may be integrated in the tester 112 or it may be a separate system for generating test cases. A separate test case generator 208 is not necessarily required if the preliminary test driver 202 generates the test cases.
- the preliminary test cases may be created by hand (manually), using some test automation system, by an executable model, by a software program, by a hardware device, through simulation, by an emulator, etc. or a system made up from them. Traffic recorded for other purposes or test material created for another purpose than consensus testing may also act as the preliminary test results.
- the tester 112 may include a test interface 206 to receive predetermined test cases and their inputs.
- the tester 112 may also include storage 210 to save the test cases and their inputs.
- the preliminary test driver 202 may run the preliminary test cases.
- the test driver is able to run identical sets of test cases for all tested implementations, so that the comparison is based on valid data. Basically, any system capable of interacting with the tested implementations may act as the preliminary test driver 202.
- the preliminary test driver 202 may be testing software, an interpreter, an executable model, a software program, a hardware device, simulation, an emulator, etc. or a system made up from them.
- the tester 112 may include an input interface 204 to feed an input of a test case into the electronic system 100, 104, 106, 108 and an output interface 200 to receive an output of the test case from the electronic system 100, 104, 106, 108.
- the test data recorder 212 may save the traffic from the tested system in test traffic storage 214. It may store full traffic or just a portion of the traffic data, e.g. just responses or a portion of the responses.
- the data may be divided into test cases for later processing. The nature of this division may be dependent on the type of the protocol used in testing and the data available from the test driver: if the test driver divides the traffic into test cases, then they may be used directly; if the protocol is made up of independent sessions, etc., then one session may be a test case; or if the protocol is made up of request-reply pairs, then one pair may be a test case. Naturally, any other logical test case composition may also be used.
- the amount of collected data may be limited by collecting only the elements which are identified beforehand as the compared elements, are candidates to be the compared elements, can be stored to the available space, or are simply available. Naturally, the elements may also be chosen for some other suitable reason.
- the element analyser 218 may determine which elements are the compared elements.
- the element analyser 218 may decompose the traffic into elements.
- the element analyser 218 may also select a portion of the elements for the element comparator 118.
- the element analyser 218 may perform the selection automatically. Such an automatic selection may be based on the number of different elements in the traffic, the number of different element values in the traffic, the frequency of different elements in the traffic, the frequency of different element values in the traffic, the importance or other weight value set for an element, and/or the location of an element in a message, for example.
- Suitable elements include a message type, a field type, a status code, an error code, an enumerated field with predefined values, a version field, an identifier field, any text string, any primitive field (e.g. an integer field or a character field), an XML element, an XML attribute, ASN.1 Basic Encoding Rules (BER) type and value elements, ASN.1 Packed Encoding Rules (PER) prefix and value elements, a canonical or trimmed value of an element (e.g. white space removed), the presence or absence of a message, the presence or absence of an optional field in a message, and an element selected from a set of optional elements.
- the tester 112 may include an interface 216 to receive a selection of an element from a user of the tester 112.
- the tester 112 may also include storage 220 to save decomposition information on the decomposition of the traffic into the elements.
- the element analyser 218 may be able to break down the traffic data into elements to choose the compared elements.
- the possible methods for this decomposition may be, for example:
- the element analyser 218 may contain automation which, fully automatically or with user interaction, determines the most suitable compared elements.
- the element analyser 218 may accept feedback from the element comparator 118, the consensus vote calculator 222 and/or the test verdict assigner 224, in order to determine which set of compared elements produces the most useful compared elements and consensus testing material.
- the element analyser 218 may decide to use all output from the tested electronic system as the compared elements by default.
- An "ignore set" may define which elements are ignored in the comparison. Compared elements are all traffic elements excluding the ignore set.
- An initial ignore set may be automatically collected, e.g. by running the same test case multiple times against the same implementation and including all changing elements in the ignore set. The initial ignore set may thus be expanded step-by-step.
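- Collecting such an initial ignore set might be sketched as follows: run the same test case repeatedly against one implementation and ignore every element whose value changes between runs (e.g. time stamps or random values). This is an illustrative assumption, not the patented mechanism.

```python
def build_ignore_set(runs):
    """runs: list of {element_name: value} dicts from repeated identical runs
    against the same implementation.

    Returns the set of element names whose values were not constant; these
    are excluded from the comparison."""
    ignore = set()
    first = runs[0]
    for run in runs[1:]:
        for name, value in run.items():
            if first.get(name) != value:
                ignore.add(name)
    return ignore
```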
- the element comparator 118 may take the recorded traffic, divided into test cases, and list the compared element values per test case and per tested implementation.
- the element comparator 118 may need to be able to extract the compared elements from the traffic, as the element analyser 218 did.
- Comparison of compared element values may not always be based on exact values; other equality criteria may also be used. White space may be ignored or leading zeroes may be removed from an integer value, for example.
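- Such relaxed equality criteria might be implemented by normalising values before comparison, as in the following sketch (an assumption for illustration):

```python
def normalize(value):
    """Relaxed equality: collapse runs of white space and strip leading
    zeroes from purely numeric values before comparing."""
    v = " ".join(value.split())  # ignore white-space differences
    if v.isdigit():
        v = str(int(v))          # drop leading zeroes from integer values
    return v
```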
- the consensus vote calculator 222 may calculate the votes per test case and determine the strength of the consensus. A vote may be given for each value an element has, so that the compared element value used by most of the electronic systems gets most of the votes. A consensus strength value may be given to a test case based on the vote distribution for an element of the test case. As was explained earlier, the consensus vote calculator 222 may give a strong consensus value to the consensus strength if there is a single dominant compared element value, a weak consensus value to the consensus strength if there are a few dominant compared element values, and a no consensus value to the consensus strength if there are many compared element values.
- a single dominant compared element value means that all the compared element values from all electronic systems are identical or that there is clearly, according to a predetermined limit, a single dominating value.
- for the choice between the weak consensus value and the no consensus value there may be another predetermined limit, i.e. a limit defining the difference between "a few" and "many".
- the tester 112 may include storage 226 to save for each test case the consensus strength value, the compared element values which were voted for, the number of votes per the compared element value, and as a consensus value the single dominant compared element value if the consensus strength has the strong consensus value.
- the test verdict assigner 224 assigns a verdict for each of the test cases for each tested implementation, based on the consensus votes, strength of consensus and the element values from the implementations, i.e. based on information about the compared elements and the consensus strength values.
- the test verdict assigner 224 may give a passed value to the test verdict if the test case has the strong consensus value for the consensus strength and the element value is the same as the single dominant compared element value, a failed value to the test verdict if the test case has the strong consensus value for the consensus strength but the element value of the electronic system is not the same as the single dominant compared element value, and an inconclusive value to the test verdict if the test case has the weak consensus value or no consensus value to the consensus strength.
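- The verdict rule above can be sketched as follows (illustrative names, not from the patent):

```python
def assign_verdict(strength, consensus_value, element_value):
    """Assign a test case verdict: pass or fail only under strong consensus,
    inconclusive under weak or no consensus."""
    if strength == "strong":
        return "pass" if element_value == consensus_value else "fail"
    return "inconclusive"
```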
- the test verdict assigner 224 may give as the result the number of votes received by the compared element value that the electronic system has.
- the test verdict assigner 224 may evaluate an aggregate test case including more than one vote so that it gives a passed value if all sub-verdicts are passed, an inconclusive value if any of the sub-verdicts is inconclusive, and a failed value otherwise.
- the test material packager 228 may be used to pack the consensus testing information for future use.
- the material may be used either to test the implementations used in the preliminary testing or to test new implementations.
- the test material may be a stand-alone entity or a data file or files readable by a separate test driver, for example.
- the preliminary testing results may be anonymised, so that the results of an individual preliminary test run cannot be attributed to a specific tested implementation.
- the consensus test driver 232 may be used to test an implementation using the packaged consensus test material 230.
- the consensus test driver 232 may be a hardware device, a software program or a combination of both.
- the consensus test driver 232 may be integrated with the testing information or it may read it from a data medium.
- the consensus test driver 232 may form a stand-alone tester 112 for testing an electronic system 100, with the data obtained from the earlier tests of the other electronic systems 104, 106, 108.
- the consensus test driver 232 may implement some of the following functions:
- the consensus test driver 232 may adjust the fed input depending on the protocol. For example, a time stamp may be given a proper up-to-date value. Also, there may be a need to take into account some values from the responses received earlier, such as sequence numbers or session identifiers.
- the consensus test driver 232 or a separate reporting system 302 may compile the result of a consensus test run to a test run report.
- This report may summarize the number of failed, passed or inconclusive test cases.
- the report may contain the consensus levels of the test cases and the total consensus level as the average of the values from the test cases. Several other metrics than the ones mentioned here may be derived from the results.
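- A report summary along these lines might be computed as follows (a sketch with assumed names):

```python
from collections import Counter

def summarize_run(verdicts, consensus_levels):
    """Summarize a consensus test run: count passed/failed/inconclusive
    verdicts and compute the total consensus level as the average of the
    per-test-case consensus levels."""
    counts = Counter(verdicts)
    total = sum(consensus_levels) / len(consensus_levels) if consensus_levels else 0.0
    return counts, total
```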
- the report may be created during the test run or after the test run.
- the use of the tester 112 may be distributed.
- a separate remote team or multiple teams may use the tester 112 to run the preliminary tests.
- distributed testing may be performed sequentially as follows: preliminary test cases and/or a preliminary test driver are sent to the remote teams; the remote teams run the preliminary tests against their implementations; the results may be anonymised; the recorded traffic is received at a centralized location; the consensus calculations are made; and the consensus testing material may be packaged and the package sent to the remote teams in order to obtain verdicts for the tested implementations.
- the tester 112 may also utilize parallel processing as follows: the preliminary test cases are executed in parallel against different implementations and the consensus strength and consensus vote calculations are done for each test case during the test case execution. The results may be shown to the tester once they are available.
- the tester 112 is a computer program product for testing an electronic system.
- the tester 112 may be a standard computer running the computer program product for testing the electronic system.
- the described functionality/structures may be implemented as software modules.
- the computer program product may be embodied on a distribution medium.
- the distribution medium may be any means for distributing software to customers, such as a (computer readable) program storage medium, a (computer readable) memory, a (computer readable) software distribution package, a (computer readable) signal, or a (computer readable) telecommunications signal.
- the tester 112 may be implemented as one or more integrated circuits, such as application-specific integrated circuits (ASICs).
- Other hardware embodiments are also feasible, such as a circuit built of separate logic components.
- a hybrid of these different implementations is also feasible.
Abstract
Description
- The invention relates to a tester for testing an electronic system, a computer program product, and a method for testing an electronic system.
- The goal of protocol conformance testing is to ensure that different products, often from different vendors, are interoperable. That is, they speak the same language and can work together. Conformance testing is described in Ian Sommerville: Software Engineering (7th Edition), 2004, ISBN 0321210263. Conformance testing usually has the following steps (Sommerville, page 539):
- 1. Design and create test cases: A test case is made up of input and expected output, which cover the intended behaviour of the tested product.
- 2. Run the tests and note the differences in the expected behaviour and the tested product.
- The problem with this approach is the difficulty of coming up with correct expected outputs. It is especially hard when specifications are incomplete. The number of test cases tends to be quite low because of the effort required to design the expected outputs. Interoperability tests, where different products are run against each other, are still required since not all relevant input and output patterns are recognized in the test case design.
- Model-based testing is another way to create conformance tests. Model-based testing is described in M. Blackburn, R. Busser, A. Nauman: Why Model-Based Test Automation is Different and What You Should Know to Get Started, 2004, Software Productivity Consortium. Model-based testing usually has the following steps:
- 1. Create a model of the tested system.
- 2. Run automation that creates a set of test cases from the model.
- 3. Run the tests and note the differences in the expected behaviour and the tested product.
- The expected outcome for each test case is determined by executing the model. The problem is that the effort of creating the model is comparable to creating the actual conformance test cases. Still, this method can be used to create a larger number of test cases. The accuracy and relevance of the test cases depend solely on the model, which adds a level of indirection, since the test cases are not created directly.
- Sometimes, test case creation and test run are done at the same time, so that test cases are generated and run simultaneously. The number of test cases is not pre-defined, since the responses from the tested implementation affect the upcoming test cases. This is called exploration testing, and it is described in J. Helovuo, S. Leppänen: Exploration Testing, Second International Conference on Application of Concurrency to System Design, 2001.
- In automated regression testing, outputs gathered when running an earlier version are used as expected outputs for a newer version (Sommerville, page 564). The purpose of regression testing is to verify that the changes introduced in the newer version have not caused any unintended changes in behaviour. In back-to-back testing, two implementations of the same protocol are driven with identical inputs to ensure that their behaviour is identical. However, both regression testing and back-to-back testing are limited to situations where the results of two implementations are compared to pinpoint differences between them.
- US 6,260,065 discloses conformance testing. ETSI standard ETSI TS 102 237-1, December 2003, "Telecommunications and Internet Protocol Harmonization Over Networks (TIPHON) Release 4; Interoperability test methods and approaches; Part 1: Generic approach to interoperability testing" discloses interoperability testing.
- The present invention seeks to provide an improved tester for testing an electronic system, an improved computer program product, and an improved method for testing an electronic system.
- According to an aspect of the invention, there is provided a tester for testing an electronic system as specified in claim 1.
- According to another aspect of the invention, there is provided a computer program product as specified in claim 15 comprising software modules, which, when run in a computer, constitute the functionality/structures of the tester for testing an electronic system.
- According to another aspect of the invention, there is provided a method for testing an electronic system as specified in claim 16.
- The invention provides several advantages. The expected output is not defined beforehand, but it is collected from the observed behaviour. The number of test cases may be high, since creation of the test cases for the preliminary test is relatively cheap. Also, any repeatable set of test cases may be used as the basis of consensus testing. A higher number of test cases may provide a higher coverage than a lower number.
- Consensus testing does not require the tester to have a model of the electronic system. The different electronic systems in effect form the model. Creation of the consensus test material can be conducted by a tester with reasonable knowledge of the application domain of the electronic system without expert mathematical or modelling skills.
- By using consensus testing material, a tester can compare the behaviour of his/her implementation with other implementations without having direct access to these implementations. This may decrease the need for interoperability events where live systems brought together are compared with each other.
- The selection of the elements to be compared, consensus voting and verdict assignment may be done separately from the design and/or execution of preliminary test cases, even off-line by using only the recorded traffic. No tested implementation needs to be available at this point.
- In the following, embodiments of the invention are described, by way of example only, with reference to the accompanying drawings, in which
- Figure 1 illustrates a tester for testing an electronic system;
- Figure 2 illustrates embodiments of the tester;
- Figure 3 illustrates further embodiments of the tester; and
- Figure 4 is a flow chart illustrating a method for testing an electronic system.
- With reference to Figure 1, let us examine an overall view of a tester 112 for testing an electronic system 100. The tester 112 implements a novel principle of consensus testing. With consensus testing, protocol conformance, protocol interoperability and other testing goals may be achieved through black-box testing. The consensus testing technique may create test material for a protocol by comparing the behaviour of different electronic systems, such as different implementations of a protocol. The implementation may be a hardware device, a software program, a simulation, an emulator, an executable model, etc., or a system made of such parts.
- Consensus testing may be suitable for assessing protocols where the protocol implementation does not have a large number of alternative strategies to respond to a set of input. One example of such a protocol is the TLS (Transport Layer Security) handshake, where a TLS peer has to respond to messages from another peer in a strict way. TLS is described in RFC 2246, The TLS Protocol, Version 1.0. Many other security, authentication and other protocols have similar handshake functions. Also request-reply-like protocols may be well suited for consensus testing: if the requests are identical, the replies should be identical or almost identical.
- The tester 112 includes a traffic interface 114 to receive traffic 102 from a test of an electronic system 100. The test of the electronic system 100 may be performed in real time, or the traffic 102 may have been recorded earlier from a test of the electronic system 100.
- The tester 112 also includes an element comparator 118 to extract a value from an element of the traffic 102 and to compare the extracted element value with an element value 110 obtained from another test of another electronic system 104, 106, 108. The other test may also be performed later, as the traffic 102 from the test of the electronic system 100 may be saved and processed only after the other test has been performed. The element value 110 from the other test of the other electronic system 104, 106, 108 may be brought to the tester 112 testing the electronic system 100 by any known data transfer means, such as data communication means or transferable data storage means. These embodiments will be described in more detail later. Note that there may exist more than one other electronic system 104, 106, 108 against whose element values the traffic 102 from the electronic system 100 will be compared. Instead of different implementations, a single implementation may be used with different configuration settings. Instead of different implementations, multiple versions of a single implementation may also be used.
- The tester 112 also includes a test result generator 122 to generate consensus information 124 on the interoperability of the electronic system 100, based on comparing 120 the extracted element values of the electronic system 100 with the element values obtained from the other test of the other electronic system 104, 106, 108.
- With reference to Figure 4, let us examine a method for testing an electronic system. The method starts in 400.
- In 402, traffic from a test of an electronic system is received. After that, a value from an element of the traffic is extracted in 404 and the extracted element value is compared in 406 with an element value obtained from another test of another electronic system. As shown in Figure 4, operations 404 and 406 may be repeated.
- Finally, in 408, consensus information on the interoperability of the electronic system is generated based on comparing the extracted element values of the electronic system with the element values obtained from the other test of the other electronic system. The method ends in 410. Embodiments of the tester 112 may be applied to the method as well.
- The method may additionally include one or more operations, or some part of the seven operations that will be explained next.
- A set of test cases is created. It may be that only the input for each test case has to be defined. The expected outcome for each test case does not need to be defined. This makes the creation of a test case easy and makes it possible to have a larger number of test cases compared to the traditional conformance testing.
- The test cases may be run against different implementations of the protocol in question. Different versions of the same implementation or single implementation configured differently may also be used. The data sent or received for each test case is recorded. Sometimes only a portion of the traffic may be stored, for example just the output from the tested implementation or a portion of the output.
- The results from different preliminary test runs may be collected into a single repository. At a minimum, the repository may contain the recorded traffic for each test case.
- The compared elements used in the consensus calculations may be decided based on the recorded data and other available information, if any. Alternatively, the compared elements may have been decided already before the preliminary test runs. This enables only the compared elements to be recorded.
- The compared elements are the basis of the consensus testing. Different compared element values from different implementations may indicate a meaningful difference in the behaviour between the tested implementations; accordingly, the same compared element values may indicate a similarity between the implementations. The element to be compared should remain constant for a test case from one test run to another against the same tested implementation. However, when comparing different implementations, the compared element may show variation if the implementations have some differences in their behaviour.
- For example, a time stamp may not be a good candidate since time is constantly changing unless the clock can be set to a fixed value for testing. A random value element is another example of an element that may not be suited for a compared element. Examples of potential compared elements are message type identifiers, status codes or error codes. The presence or absence of a specific field in a received message may also be a good compared element. Sometimes only a type of a data field may be used as the compared element. The comparison may also take place on message level without looking into the actual contents of the messages.
- All test cases may use the same compared data elements, or the test cases may have different compared data elements.
- The compared elements may be decided manually by applying knowledge of the protocol and by observing the recorded test material. Automation may be used to pick out the elements that have shown a suitable level of variation. The final set of data elements may then be decided by applying user judgment to the results from the automation.
- Alternatively, the selection of compared elements is either left totally to an automated system, or an automation system provides suggestions for the user or consults the user on some issues. The automation analysis may be based on the frequency of different elements in the recorded traffic, for example.
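The automatic pick-out of suitable compared elements can be sketched as a stability filter: an element whose value changes between repeated runs against the same implementation (a time stamp, a random value) is rejected as a candidate. The data layout below is an assumption made for illustration:

```python
def select_compared_elements(recordings):
    """Pick candidate compared elements automatically.

    `recordings` maps an element name -> implementation -> list of values
    seen over repeated runs of the same test case.  An element is kept
    only if every implementation reproduces the same value on each run,
    so time stamps and random values are dropped.
    """
    selected = []
    for element, per_impl in recordings.items():
        stable = all(len(set(values)) == 1 for values in per_impl.values())
        if stable:
            selected.append(element)
    return selected

recordings = {
    "status-code": {"A": ["200", "200"], "B": ["400", "400"]},
    "timestamp":   {"A": ["12:01", "12:07"], "B": ["12:02", "12:09"]},
}
print(select_compared_elements(recordings))   # ['status-code']
```

Note that "status-code" survives even though its value differs between implementations A and B; such variation between implementations is exactly what consensus voting measures.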
- A vote for the consensus value for each test case may be cast. A vote may be given to each compared element value recorded from different implementations. The compared element value used by most of the implementations gets most of the votes.
- If there are multiple compared elements for a test case, then the voted value is the aggregate of all of the compared element values.
- The consensus strength for each test case may be given based on the number of values getting votes:
- Strong consensus: Compared element values from all implementations are identical or there is a single dominant compared element value. The only or dominant value is called a consensus value.
- Weak consensus: There are a few dominant compared element values for the test case.
- No consensus: There are many compared element values with equal or similar number of votes.
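The three strength levels can be sketched as a small classifier. The 75% limit for strong consensus is taken from the HTTP example later in the description; the majority limit separating weak from no consensus is an assumption, since the description only requires predetermined limits:

```python
def consensus_strength(votes, strong_share=0.75, weak_share=0.5):
    """Grade the vote distribution of one test case.

    `votes` maps a compared element value to its vote count.
    """
    total = sum(votes.values())
    top = max(votes.values())
    if top / total >= strong_share:
        return "strong"        # a single dominant value: the consensus value
    if top / total > weak_share:
        return "weak"          # a few dominant values
    return "no consensus"      # many values with equal or similar votes

print(consensus_strength({"200": 4}))            # strong
print(consensus_strength({"400": 3, "200": 1}))  # strong (3/4 = 75%)
print(consensus_strength({"200": 3, "400": 2}))  # weak
print(consensus_strength({"200": 2, "400": 2}))  # no consensus
```

With these limits the classifier reproduces the strengths assigned in Table 6 of the HTTP example.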
- Test cases with strong consensus indicate where the interoperability of the tested products has already been achieved for most of the implementations. Weak consensus indicates areas where further work is required by product developers. No consensus indicates that the specification in this area is unclear or flawed. Weak or no consensus may also mean that compared elements are not properly defined or the part of the protocol cannot be tested reliably using consensus testing.
- At this stage, a user may review the test cases with strong consensus to see whether any of them do not make real sense. A strong consensus test case may reflect a situation where it would be legal for an implementation to behave differently, and the tested ones just happened to behave identically. In that case, the user may choose either to remove the test case or to declare that there is no consensus for it.
- The test case verdicts for each implementation may be given by using the information about compared elements and the consensus strengths. A test case may be given the following verdicts:
- Passed: The test case has strong consensus and the value of the implementation for the compared element or elements matches the consensus value.
- Failed: The test case has strong consensus, but the value of the implementation for the compared element or elements does not match the consensus value.
- Inconclusive: The test case has weak consensus or no consensus. The test case does not bring information about interoperability of the implementation.
- Alternatively, the results may include the number of votes received by the compared element value used by the implementation. The higher the value, the higher the confidence that the implementation is interoperable.
- Sometimes different compared elements may be used to cast several different votes, which results in multiple verdicts per test case. In such a case a final test case verdict may be given such that the test case is passed only if all sub-verdicts are passed, and the test case is inconclusive if any of the sub-verdicts is inconclusive. Otherwise the compiled verdict of the test case is failed.
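A minimal sketch of these verdict rules, including the sub-verdict compilation order (inconclusive is checked before failed, as the description requires):

```python
def verdict(strength, value, consensus_value):
    """Verdict for one compared element, following the rules above."""
    if strength != "strong":
        return "inconclusive"   # weak or no consensus
    return "passed" if value == consensus_value else "failed"

def combine(sub_verdicts):
    """Compile one final test case verdict from several sub-verdicts."""
    if any(v == "inconclusive" for v in sub_verdicts):
        return "inconclusive"
    if all(v == "passed" for v in sub_verdicts):
        return "passed"
    return "failed"

print(combine(["passed", "passed"]))          # passed
print(combine(["passed", "failed"]))          # failed
print(combine(["failed", "inconclusive"]))    # inconclusive
```

Note the last example: an inconclusive sub-verdict overrides a failed one, which is why the inconclusive test must come first.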
- Operations 4, 5 and 6 may be repeated several times to create a set of compared elements best meeting the testing goals.
- When desired, the test results may be packaged to form a consensus testing material. The testing material enables the later use of consensus testing without repeating all operations.
- A test material may contain the following information:
- The input for the test case.
- Instructions to extract the compared elements from traffic.
- For each test case:
- Consensus strength.
- The consensus value, if strong consensus.
- Compared element values, which were voted for, and the number of votes per value.
- The test material may contain all of the test cases used in the preliminary test run or only a subset of them (only test cases with strong consensus, for example).
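One way to represent a packaged test case is sketched below; the field names are assumptions, since the description lists the contents of the material but not a format:

```python
def package_test_case(test_input, strength, votes):
    """Bundle one test case for the consensus test material.

    `votes` maps each compared element value that was voted for to its
    number of votes.
    """
    entry = {
        "input": test_input,
        "strength": strength,
        "votes": dict(votes),
    }
    if strength == "strong":
        # The only or dominant value is the consensus value.
        entry["consensus_value"] = max(votes, key=votes.get)
    return entry

tc2 = package_test_case("GET / HTTP/1.01\n\r", "strong", {"400": 3, "200": 1})
print(tc2["consensus_value"])   # 400
```

A weak or no-consensus entry carries no consensus value, only the vote counts, which still allow the later confidence-style reporting described above.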
- Next, a simplified consensus testing process is walked through.
- The test material is created for HTTP (HyperText Transfer Protocol) server testing. The results do not reflect results from real servers but are crafted to serve as good sample material. The exemplary test material is brief; in reality, the number of test cases may be much higher.
- Each test case input is an HTTP GET request; test case #0 is perfectly valid and should fetch the index page of the server. The next four test cases #1, #2, #3 and #4 contain different version values, which may or may not be valid.
- The preliminary test cases are shown in Table 1.
Table 1: Preliminary test case inputs

Test case  Input
#0         GET / HTTP/1.0\n\r
#1         GET / HTTP/1.00\n\r
#2         GET / HTTP/1.01\n\r
#3         GET / HTTP/01.0\n\r
#4         GET / HTTP/11.0\n\r

- For compactness, only the first HTTP status line returned from the server is retained as the test output. The HTTP header lines and a possible Web page are not stored.
- Tables 2, 3, 4 and 5 show the results of four different HTTP servers (A, B, C and D).
Table 2: Test results of server A

Test case  Result
#0         HTTP/1.1 200 OK
#1         HTTP/1.1 400 bad-request
#2         HTTP/1.1 400 bad-request
#3         HTTP/1.1 200 OK
#4         HTTP/1.1 200 OK

Table 3: Test results of server B

Test case  Result
#0         HTTP/1.1 200 OK
#1         HTTP/1.1 400 Bad request
#2         HTTP/1.1 400 Bad request
#3         HTTP/1.1 200 OK
#4         HTTP/1.1 200 OK

Table 4: Test results of server C

Test case  Result
#0         HTTP/1.1 200 OK
#1         HTTP/1.1 400 bad-request
#2         HTTP/1.1 200 OK
#3         HTTP/1.1 400 bad-request
#4         HTTP/1.1 400 bad-request

Table 5: Test results of server D

Test case  Result
#0         HTTP/1.1 200 OK
#1         HTTP/1.1 400 Bad request
#2         HTTP/1.1 400 Bad request
#3         HTTP/1.1 200 OK
#4         HTTP/1.1 400 bad-request

- It is concluded from the test results that the proper element to be compared is the three-digit status code, which is the value after the fixed part "HTTP/1.1". The status code expresses the status of the request in a compact form.
- The results indicate that test cases #0, #1, #2 and #3 give strong consensus, although in test cases #2 and #3 the vote is not unanimous. The limit used to declare strong consensus may be such that 75% or more of the votes must be cast for the same value. Table 6 compares the test results.
Table 6: Comparison of test results

Test case  Server A  Server B  Server C  Server D  Votes for "200"  Votes for "400"  Consensus strength
#0         200       200       200       200       4                0                Strong
#1         400       400       400       400       0                4                Strong
#2         400       400       200       400       1                3                Strong
#3         200       200       400       200       3                1                Strong
#4         200       200       400       400       2                2                No consensus

- In reality, it may be preferable to use additional implementations in the preliminary tests to get more reliable consensus strength values for the test cases.
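The vote counts in Table 6 can be reproduced mechanically. The sketch below counts the votes for test case #4 from the status lines of Tables 2 to 5; the `status_code` helper and the hard-coded results are illustrative:

```python
from collections import Counter

# First status line recorded from each server for test case #4.
results = {
    "A": "HTTP/1.1 200 OK",
    "B": "HTTP/1.1 200 OK",
    "C": "HTTP/1.1 400 bad-request",
    "D": "HTTP/1.1 400 bad-request",
}

def status_code(line):
    # Compared element: the three-digit code after the fixed "HTTP/1.1".
    return line.split()[1]

votes = Counter(status_code(line) for line in results.values())
print(votes["200"], votes["400"])   # 2 2
# 75% limit for strong consensus: 2 of 4 votes is not enough.
print(max(votes.values()) / sum(votes.values()) >= 0.75)  # False
```

Comparing on the status code alone also makes the differing reason phrases ("bad-request" versus "Bad request") irrelevant, as intended.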
- Consensus testing may provide quantitative information about the behaviour of the tested electronic system in the form of a consensus level. The consensus level indicates how many votes the element value obtained from the test of the electronic system received in the consensus test material. In Table 6, the consensus level of test case #2 for server B is 3, i.e. the number of votes received by the value "400" (= the value received from server B with test case #2), for example. Correspondingly, the consensus level of test case #2 for server C is only 1. The consensus level is a numerical value ready for further processing. For example, a vendor of a product may follow the consensus level of their product as new releases of the product are tested, but also when new consensus test material becomes available. The new material contains information about the behaviour of new versions of other implementations of the system.
- The test cases #0, #1, #2 and #3 can be used to declare pass or fail, but test case #4 is inconclusive for all servers. Table 7 summarizes the results. Overall, servers A, B and D passed all the test cases which had consensus, while server C failed test cases #2 and #3.
Table 7: Verdicts for test cases

Test case  Server A  Server B  Server C  Server D
#0         Pass      Pass      Pass      Pass
#1         Pass      Pass      Pass      Pass
#2         Pass      Pass      Fail      Pass
#3         Pass      Pass      Fail      Pass
#4         Inco      Inco      Inco      Inco

- Finally, the material may be packaged for testing of HTTP servers. The package may contain the following parts:
- The input for the test cases #0, #1, #2, and #3 (#4 is omitted since there was no consensus).
- Compared element information: The status code.
- For each test case #0, #1, #2 and #3: the consensus strength, the consensus value, and the compared element values, which were voted for, and the number of votes per value.
- Table 8 shows the packaged data.
Table 8: Packaged data Test case Votes for "200" Votes for "400" Consensus value Strength Input #0 4 0 200 Strong GET / HTTP/1.0\n\r #1 0 4 400 Strong GET / HTTP/1.00\n\r #2 1 3 400 Strong GET / HTTP/1.01\n\r #3 3 1 200 Strong GET / HTTP/01.0\n\r - Note that test case #4 could be retained for other testing purposes than consensus testing.
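Using the packaged data of Table 8, a later consensus test run can be sketched as follows. The `send` callable and the `strict` sample server are stand-ins for a real transport and a real tested implementation:

```python
# Packaged data from Table 8: test case -> input and consensus value.
table8 = {
    "#0": {"input": "GET / HTTP/1.0\n\r",  "consensus": "200"},
    "#1": {"input": "GET / HTTP/1.00\n\r", "consensus": "400"},
    "#2": {"input": "GET / HTTP/1.01\n\r", "consensus": "400"},
    "#3": {"input": "GET / HTTP/01.0\n\r", "consensus": "200"},
}

def run_consensus_test(send, cases):
    """Drive an implementation with the packaged inputs and grade each reply.

    `send` maps a request to the first returned HTTP status line.
    """
    verdicts = {}
    for name, case in cases.items():
        code = send(case["input"]).split()[1]   # the compared element
        verdicts[name] = "Pass" if code == case["consensus"] else "Fail"
    return verdicts

# A hypothetical server that rejects every nonstandard version string.
strict = lambda req: ("HTTP/1.1 200 OK" if req == "GET / HTTP/1.0\n\r"
                      else "HTTP/1.1 400 Bad request")
print(run_consensus_test(strict, table8))
# {'#0': 'Pass', '#1': 'Pass', '#2': 'Pass', '#3': 'Fail'}
```

The tested server never has to be compared against the other live implementations; the packaged consensus values stand in for them.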
- Next, embodiments of the
tester 112 will be explained with reference toFigure 2 . Thetester 112 may include some additional components: atest case generator 208, apreliminary test driver 202, atest data recorder 212, anelement analyser 218, aconsensus vote calculator 222, atest verdict assigner 224, atest material packager 228, and aconsensus test driver 232. - All of the components may be individual entities or some of them may be integrated to form larger entities. Ultimately they all may form the
tester 112. Parts from other independent systems may also act in roles of thetester 112 components. - The
test case generator 208 generates the preliminary test cases. Thetest case generator 208 may be integrated in thetester 112 or it may be a separate system for generating test cases. A separatetest case generator 208 is not necessarily required if thepreliminary test driver 202 generates the test cases. The preliminary test cases may be created by hand (manually), using some test automation system, by an executable model, by a software program, by a hardware device, through simulation, by an emulator, etc. or a system made up from them. Traffic recorded for other purposes or test material created for another purpose than consensus testing may also act as the preliminary test results. Thetester 112 may include atest interface 206 to receive predetermined test cases and their inputs. Thetester 112 may also includestorage 210 to save the test cases and their inputs. - The
preliminary test driver 202 may run the preliminary test cases. The test driver is able to run identical sets of test cases for all tested implementations, so that the comparison is based on valid data. Basically, any system capable of interacting with the tested implementations may act as thepreliminary test driver 202. Thepreliminary test driver 202 may be testing software, an interpreter, an executable model, a software program, a hardware device, simulation, an emulator, etc. or a system made up from them. Thetester 112 may include aninput interface 204 to feed an input of a test case into theelectronic system output interface 200 to receive an output of the test case from theelectronic system - The
test data recorder 212 may save the traffic from the tested system intest traffic storage 214. It may store full traffic or just a portion of the traffic data, e.g. just responses or a portion of the responses. The data may be divided into test cases for later processing. The nature of this division may be dependent on the type of the protocol used in testing and the data available from the test driver: if the test driver divides the traffic into test cases, then they may be used directly; if the protocol is made up of independent sessions, etc., then one session may be a test case; or if the protocol is made up of request-reply pairs, then one pair may be a test case. Naturally, any other logical test case composition may also be used. - The amount of collected data may be limited by collecting only the elements which are identified beforehand as the compared elements, are candidates to be the compared elements, can be stored to the available space, or are simply available. Naturally, the elements may also be chosen for some other suitable reason.
- The
element analyser 218 may determine which elements are the compared elements. Theelement analyser 218 may decompose the traffic into elements. Theelement analyser 218 may also select a portion of the elements for theelement comparator 118. Theelement analyser 218 may perform the selection automatically. Such an automatic selection may be based on the number of different elements in the traffic, the number of different element values in the traffic, the frequency of different elements in the traffic, the frequency of different element values in the traffic, the importance or other weight value set for an element, and/or the location of an element in a message, for example. Suitable elements include a message type, a field type, a status code, an error code, an enumerated field with predefined values, a version field, an identifier field, any text string, any primitive field (e.g. an integer field or a character field), an XML element, an XML attribute, ASN.1 Basic Encoding Rule type and value elements, ASN.1 Packet Encoding Rule prefix and value elements, a canonical or trimmed value of an element (e.g. white space removed), presence or absence of a message, presence or absence of an optional field in a message, and element selected from a set of optional elements. Thetester 112 may include aninterface 216 to receive a selection of an element from a user of thetester 112. Thetester 112 may also includestorage 220 to save decomposition information on the decomposition of the traffic into the elements. - The
element analyser 218 may be able to break down the traffic data into elements to choose the compared elements. The possible methods for this decomposition may be, for example: - Mini-Simulation Method, described in R. Kaksonen: A Functional Method for Assessing Protocol Implementation Security, Espoo, Technical Research Centre of Finland, VTT Publications 447. ISBN 951-38-5873-1 (soft back edition), ISBN 951-38-5874-X (on-line edition).
- ASN.1 with any of its encoding rules, described in Oliver Dubuisson: ASN.1 Communication Between Heterogeneous Systems, ISBN 0-12-633361-0.
- TTCN ASPs (Abstract Service Primitives), TTCN PDUs (Protocol Data Units) or TTCN message templates, described in ETSI
ES 201 873-1 - XML element structures, described in Extensible Markup Language (XML), W3C, www.w3.org/XML/.
- Or any other suitable method for structural decomposition of protocol
- The
element analyser 218 may contain automation, which fully or with user interaction determines the most suitable compared elements. - The
element analyser 218 may accept feedback from theelement comparator 118, theconsensus vote calculator 222 and/or thetest verdict assigner 224, in order to determine which set of compared elements produces the most useful compared elements and consensus testing material. - Instead of choosing compared output elements from a set of all elements, the
element analyser 218 may decide to use all output from the tested electronic system as the compared elements by default. An "ignore set" may define which elements are ignored in the comparison. Compared elements are all traffic elements excluding the ignore set. An initial ignore set may be automatically collected, e.g. by running the same test case multiple times against the same implementation and including all changing elements in the ignore set. The initial ignore set may thus be expanded step-by-step. - The
element comparator 118 may take the recorded traffic, divided into test cases, and lists the compared element values per test case and per tested implementation. - The
element comparator 118 may need to be able to extract the compared elements from the traffic, as theelement analyser 218 did. - Comparing of compared element values may not always be based on exact values, but other equality criteria may also be used. White space may be ignored or leading zeroes may be removed from an integer value, for example.
- The
consensus vote calculator 222 may calculate the votes per test case and determines the strength of the consensus. A vote may be given for each value an element has, so that the compared element value used by most of the electronic systems gets most of the votes. A consensus strength value may be given to a test case based on the vote distribution for an element of the test case. As was explained earlier, theconsensus vote calculator 222 may give a strong consensus value to the consensus strength if there is a single dominant compared element value, a weak consensus value to the consensus strength if there are a few dominant compared element values, and a no consensus value to the consensus strength if there are many compared element values. A single dominant compared element value means that all the compared element values from all electronic systems are identical or that there is clearly, according to a predetermined limit, a single dominating value. For the choice between the weak consensus value and no consensus values there may be another predetermined limit, i.e. a limit defining the difference between "a few" and "many". - The
tester 112 may includestorage 226 to save for each test case the consensus strength value, the compared element values which were voted for, the number of votes per the compared element value, and as a consensus value the single dominant compared element value if the consensus strength has the strong consensus value. - The
test verdict assigner 224 assigns a verdict for each of the test cases for each tested implementation, based on the consensus votes, strength of consensus and the element values from the implementations, i.e. based on information about the compared elements and the consensus strength values. As was explained earlier, thetest verdict assigner 224 may give a passed value to the test verdict if the test case has the strong consensus value for the consensus strength and the element value is the same as the single dominant compared element value, a failed value to the test verdict if the test case has the strong consensus value for the consensus strength but the element value of the electronic system is not the same as the single dominant compared element value, and an inconclusive value to the test verdict if the test case has the weak consensus value or no consensus value to the consensus strength. Alternatively, or additionally, thetest verdict assigner 224 may give as the result the number of votes received by the compared element value that the electronic system has. Thetest verdict assigner 224 may evaluate an aggregate test case including more than one vote so that it gives a passed value if all sub-verdicts are passed, an inconclusive value if any of the sub-verdicts is inconclusive, and a failed value otherwise. - The
test material packager 228 may be used to pack the consensus testing information for future use. The material may be used both to test the implementations used in the preliminary testing and to test new implementations. The test material may be a stand-alone entity or a data file or files readable by a separate test driver, for example. - The preliminary testing results may be anonymised, so that the results of an individual preliminary test run cannot be assigned to a specific implementation tested.
- The
consensus test driver 232 may be used to test an implementation using the packaged consensus test material 230. The consensus test driver 232 may be a hardware device, a software program or a combination of both. The consensus test driver 232 may be integrated with the testing information or it may read it from a data medium. - As shown in
Figure 3, the consensus test driver 232, together with the consensus test material 230, may form a stand-alone tester 112 for testing an electronic system 100, with the data obtained from the earlier tests of the other electronic systems. - The
consensus test driver 232 may implement some of the following functions: - Feeding the input to the tested implementation with a
test case engine 300. - Receiving the replies from the tested implementation.
- Extracting the compared elements from the traffic and resolving if they match the consensus value or other compared element values stored.
- Reporting if there was a match to a consensus value or to other compared element values.
- Selecting only a subset of test cases for execution.
- Tuning the test run depending on the tested implementation, e.g. to provide address, port number, user names, user password, etc.
- Integration interfaces to other testing systems and testing frameworks.
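The driver functions listed above can be sketched as a single loop. The helpers `send`, `receive` and `extract`, and the dictionary layout of the packaged material, are hypothetical names introduced only for illustration; the patent leaves the material format open.

```python
def run_consensus_tests(material, send, receive, extract, subset=None):
    """Drive one tested implementation against packaged consensus material.

    `material` maps a test case id to its input and, per compared element,
    the stored consensus value and vote counts. `send`/`receive` transfer
    traffic to and from the tested implementation, and `extract` pulls the
    compared element values out of a reply. All of these are assumptions.
    """
    report = {}
    for case_id, case in material.items():
        if subset is not None and case_id not in subset:
            continue                      # select only a subset of test cases
        send(case["input"])               # feed the input to the implementation
        reply = receive()                 # receive the reply
        results = {}
        for name, observed in extract(reply).items():
            stored = case["elements"][name]
            results[name] = {
                # resolve whether the value matches the consensus value
                "matches_consensus": observed == stored.get("consensus"),
                # or report the votes the observed value received earlier
                "votes_for_observed": stored["votes"].get(observed, 0),
            }
        report[case_id] = results         # report per compared element
    return report
```

Tuning parameters such as address, port and credentials would, under this sketch, live inside the `send`/`receive` callbacks supplied by the integration layer.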
- The
consensus test driver 232 may adjust the fed input depending on the protocol. For example, a time stamp may be given a proper up-to-date value. Also, there may be a need to take into account some values from the responses received earlier, such as sequence numbers or session identifiers. - The
consensus test driver 232 or a separate reporting system 302 may compile the results of a consensus test run into a test run report. This report may summarize the number of failed, passed or inconclusive test cases. The report may contain the consensus levels of the test cases and the total consensus level as the average of the values from the test cases. Several metrics other than the ones mentioned here may be derived from the results. The report may be created during the test run or after the test run. - The use of the
tester 112 may be distributed. A separate remote team or multiple teams may use the tester 112 to run the preliminary tests. Such distributed testing may be performed sequentially as follows: the preliminary test cases and/or a preliminary test driver are sent to the remote teams, the remote teams run the preliminary tests with their implementations, the results may be anonymised, the recorded traffic is received in a centralized location, the consensus calculations are made, and the consensus testing material may be packaged and sent to the remote teams in order to get verdicts for the tested implementations. - The
tester 112 may also utilize parallel processing as follows: the preliminary test cases are executed in parallel against different implementations and the consensus strength and consensus vote calculations are done for each test case during the test case execution. The results may be shown to the tester once they are available. - One embodiment of the
tester 112 is a computer program product for testing an electronic system. The tester 112 may be a standard computer running the computer program product for testing the electronic system. The described functionality/structures may be implemented as software modules. The computer program product may be embodied on a distribution medium. The distribution medium may be any means for distributing software to customers, such as a (computer readable) program storage medium, a (computer readable) memory, a (computer readable) software distribution package, a (computer readable) signal, or a (computer readable) telecommunications signal. - In principle, the
tester 112 may be implemented as one or more integrated circuits, such as application-specific integrated circuits (ASIC). Other hardware embodiments are also feasible, such as a circuit built of separate logic components. A hybrid of these different implementations is also feasible. When selecting the method of implementation, a person skilled in the art will consider the requirements set for the size and power consumption of the tester 112, the necessary processing capacity, production costs, and production volumes, for example. - Even though the invention is described above with reference to an example according to the accompanying drawings, it is clear that the invention is not restricted thereto but it can be modified in several ways within the scope of the appended claims.
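The parallel processing described earlier, where preliminary test cases are executed against different implementations concurrently and the vote counts are computed as soon as results arrive, can be sketched for a software embodiment as follows; `run_case` is a hypothetical callback standing in for one preliminary test run.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def run_preliminary_tests(test_cases, implementations, run_case):
    """Execute each preliminary test case against all implementations in
    parallel and compute the votes per compared element value per case.
    `run_case(impl, case)` returns the compared element value observed
    from that implementation; it is an illustrative assumption."""
    consensus = {}
    with ThreadPoolExecutor() as pool:
        for case in test_cases:
            # one preliminary test run per implementation, in parallel
            values = list(pool.map(lambda impl: run_case(impl, case),
                                   implementations))
            consensus[case] = Counter(values)   # votes per element value
    return consensus
```

The per-case vote counters produced this way could then feed the consensus strength classification and verdict assignment described above, and could be shown to the user as soon as each test case completes.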
Claims (16)
- A tester (112) for testing an electronic system (100), comprising: a traffic interface (114) arranged to receive traffic (102) from a test of an electronic system (100); an element comparator (118) arranged to extract a value from an element of the traffic (102) and to compare the extracted element value with an element value (110) obtained from another test of another electronic system (104, 106, 108); and a test result generator (122) arranged to generate consensus information (124) on the interoperability of the electronic system (100), based on comparing (120) the extracted element values of the electronic system (100) with the element values obtained from the other test of the other electronic system (104, 106, 108), wherein the test result generator (122) further comprises a consensus vote calculator (222) arranged to give a vote to each value an element has, so that the compared element value used by most of the electronic systems gets most of the votes, and to give a consensus strength to a test case, based on the vote distribution for an element of the test case, in such a manner that a strong consensus value is given to the consensus strength if there is a single dominant compared element value, a weak consensus value is given to the consensus strength if there are a few dominant compared element values, and a no consensus value is given to the consensus strength if there are many compared element values, and a test verdict assigner (224) arranged to give a test verdict to the test case regarding the electronic system, based on information about the compared elements and the consensus strength values, in such a manner that a passed value is given to the test verdict if the test case has the strong consensus value for the consensus strength and the element value is the same as the single dominant compared element value, a failed value is given to the test verdict if the test case has the strong consensus value for the consensus strength but the element value
of the electronic system is not the same as the single dominant compared element value, and an inconclusive value is given to the test verdict if the test case has the weak consensus value or no consensus value for the consensus strength.
- The tester of claim 1, further comprising a test interface (206) arranged to receive predetermined test cases and their inputs.
- The tester of claim 2, further comprising storage (210) arranged to save the test cases and their inputs.
- The tester of claim 2 or 3, further comprising an input interface (204) arranged to feed an input of a test case into the electronic system (100, 104, 106, 108), and an output interface (200) arranged to receive an output of the test case from the electronic system (100, 104, 106, 108).
- The tester of any one of the preceding claims, further comprising an element analyser (218) arranged to decompose the traffic into elements.
- The tester of claim 5, wherein the element analyser (218) is further arranged to select a portion of the elements for the element comparator (118).
- The tester of claim 6, wherein the element analyser (218) is arranged to perform the selection automatically.
- The tester of claim 7, wherein the automatic selection by the element analyser (218) is based on the number of different elements in the traffic, the number of different element values in the traffic, the frequency of different elements in the traffic, the frequency of different element values in the traffic, the importance or other weight value set for an element, and/or the location of an element in a message.
- The tester of any one of the preceding claims 5-8, wherein the tester further comprises an interface (216) arranged to receive a selection of an element from a user of the tester.
- The tester of any one of the preceding claims 5-9, further comprising a storage (220) arranged to save decomposition information on the decomposition of the traffic into the elements.
- The tester of any one of the preceding claims, further comprising storage (226) arranged to save for each test case the consensus strength value, the compared element values which were voted for, the number of votes per compared element value, and as a consensus value the single dominant compared element value if the consensus strength has the strong consensus value.
- The tester of any one of the preceding claims, wherein the test verdict assigner (224) is arranged to give as the result the number of votes received by the compared element value that the electronic system has.
- The tester of any one of the preceding claims, wherein the test verdict assigner (224) is arranged to evaluate an aggregate test case including more than one vote so that it gives a passed value if all sub-verdicts are passed, an inconclusive value if any of the sub-verdicts is inconclusive, and a failed value otherwise.
- The tester of any one of the preceding claims, wherein the tester is arranged to test an electronic system including software implementing formatted data input and output, such as a protocol, a file format, or an algorithm.
- A computer program product comprising software modules, which, when run in a computer, constitute the functionality/structures of any of claims 1 to 14.
- A method for testing an electronic system, comprising: receiving (402) traffic from a test of an electronic system; extracting (404) a value from an element of the traffic and comparing (406) the extracted element value with an element value obtained from another test of another electronic system; and generating (408) consensus information on the interoperability of the electronic system, based on comparing the extracted element values of the electronic system with the element values obtained from the other test of the other electronic system, wherein the generating (408) comprises: giving a vote to each value an element has, so that the compared element value used by most of the electronic systems gets most of the votes; giving a consensus strength to a test case, based on the vote distribution for an element of the test case, in such a manner that a strong consensus value is given to the consensus strength if there is a single dominant compared element value, a weak consensus value is given to the consensus strength if there are a few dominant compared element values, and a no consensus value is given to the consensus strength if there are many compared element values; and giving a test verdict to the test case regarding the electronic system, based on information about the compared elements and the consensus strength values, in such a manner that a passed value is given to the test verdict if the test case has the strong consensus value for the consensus strength and the element value is the same as the single dominant compared element value, a failed value is given to the test verdict if the test case has the strong consensus value for the consensus strength but the element value of the electronic system is not the same as the single dominant compared element value, and an inconclusive value is given to the test verdict if the test case has the weak consensus value or no consensus value for the consensus strength.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05110181A EP1780946B1 (en) | 2005-10-31 | 2005-10-31 | Consensus testing of electronic system |
AT05110181T ATE459153T1 (en) | 2005-10-31 | 2005-10-31 | CONSENSUS TEST OF AN ELECTRONIC SYSTEM |
DE602005019580T DE602005019580D1 (en) | 2005-10-31 | 2005-10-31 | Consensus test of an electronic system |
US11/589,484 US7797590B2 (en) | 2005-10-31 | 2006-10-30 | Consensus testing of electronic system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05110181A EP1780946B1 (en) | 2005-10-31 | 2005-10-31 | Consensus testing of electronic system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1780946A1 EP1780946A1 (en) | 2007-05-02 |
EP1780946B1 true EP1780946B1 (en) | 2010-02-24 |
Family
ID=36294804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05110181A Active EP1780946B1 (en) | 2005-10-31 | 2005-10-31 | Consensus testing of electronic system |
Country Status (4)
Country | Link |
---|---|
US (1) | US7797590B2 (en) |
EP (1) | EP1780946B1 (en) |
AT (1) | ATE459153T1 (en) |
DE (1) | DE602005019580D1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104657270A (en) * | 2015-02-28 | 2015-05-27 | 北京嘀嘀无限科技发展有限公司 | Method and system for testing |
US9632921B1 (en) * | 2015-11-13 | 2017-04-25 | Microsoft Technology Licensing, Llc | Validation using scenario runners |
CN110750436B (en) * | 2018-07-23 | 2022-05-13 | 腾讯科技(深圳)有限公司 | Layered testing method and device, computer readable medium and electronic equipment |
US10909013B2 (en) * | 2018-10-16 | 2021-02-02 | Rohde & Schwarz Gmbh & Co. Kg | TTCN-based test system and method for testing test-cases, non-transitory computer-readable recording medium |
CN111475421B (en) * | 2020-05-28 | 2023-05-23 | 南方电网科学研究院有限责任公司 | Power demand response consistency test case generation system and method |
US11783051B2 (en) | 2021-07-15 | 2023-10-10 | Zeronorth, Inc. | Normalization, compression, and correlation of vulnerabilities |
CN115865193B (en) * | 2023-02-27 | 2023-05-09 | 中国人民解放军火箭军工程大学 | Device and method for testing reflective memory networking performance |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6373822B1 (en) * | 1999-01-08 | 2002-04-16 | Cisco Technology, Inc. | Data network protocol conformance test system |
US6260065B1 (en) | 1999-01-13 | 2001-07-10 | International Business Machines Corporation | Test engine and method for verifying conformance for server applications |
US7237014B2 (en) * | 2002-08-01 | 2007-06-26 | Drummond Group | System and method for in situ, real-time, supply chain, interoperability verification |
JP2005339675A (en) * | 2004-05-27 | 2005-12-08 | Hitachi Ltd | Semiconductor integrated circuit device |
-
2005
- 2005-10-31 EP EP05110181A patent/EP1780946B1/en active Active
- 2005-10-31 AT AT05110181T patent/ATE459153T1/en not_active IP Right Cessation
- 2005-10-31 DE DE602005019580T patent/DE602005019580D1/en active Active
-
2006
- 2006-10-30 US US11/589,484 patent/US7797590B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
EP1780946A1 (en) | 2007-05-02 |
DE602005019580D1 (en) | 2010-04-08 |
US7797590B2 (en) | 2010-09-14 |
ATE459153T1 (en) | 2010-03-15 |
US20070118644A1 (en) | 2007-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1780946B1 (en) | Consensus testing of electronic system | |
US10204035B1 (en) | Systems, methods and devices for AI-driven automatic test generation | |
US8103913B2 (en) | Application integration testing | |
US20040153837A1 (en) | Automated testing | |
US20080294740A1 (en) | Event decomposition using rule-based directives and computed keys | |
US20030074423A1 (en) | Testing web services as components | |
US20090276663A1 (en) | Method and arrangement for optimizing test case execution | |
CN103095475B (en) | The method for inspecting and system of multimode communication device | |
US9122789B1 (en) | System and method for testing applications with a load tester and testing translator | |
CN110764980A (en) | Log processing method and device | |
EP2782311A1 (en) | Methods of testing a firewall, and apparatus therefor | |
US20230359934A1 (en) | Intelligent Service Test Engine | |
US20140047276A1 (en) | Model-based testing of a graphical user interface | |
US20050203717A1 (en) | Automated testing system, method and program product using testing map | |
Chaturvedi et al. | Web service slicing: Intra and inter-operational analysis to test changes | |
WO2021151314A1 (en) | Dns automatic performance test method, apparatus, device, and readable storage medium | |
CN112015715A (en) | Industrial Internet data management service testing method and system | |
CN113868116A (en) | Test dependent data generation method and device, server and storage medium | |
CN112766930A (en) | High-efficient wisdom information management system based on instrumentization | |
CN112199229A (en) | Data processing method, device, equipment and storage medium | |
CN114268569A (en) | Configurable network operation, maintenance, acceptance and test method and device | |
Dahl | Using coloured petri nets in penetration testing | |
CN114205276B (en) | Performance test method and device for product management system and electronic equipment | |
US9342522B2 (en) | Computer implemented system for analyzing a screen-based user session of a process in a network environment | |
JP2000010836A (en) | Method for testing client-server type application, and recording medium where program for implementing the method has been recorded |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK YU |
|
17P | Request for examination filed |
Effective date: 20070903 |
|
17Q | First examination report despatched |
Effective date: 20071025 |
|
AKX | Designation fees paid |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REF | Corresponds to: |
Ref document number: 602005019580 Country of ref document: DE Date of ref document: 20100408 Kind code of ref document: P |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: VDEP Effective date: 20100224 |
|
LTIE | Lt: invalidation of european patent or patent extension |
Effective date: 20100224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100624 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100625 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100604 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100525 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100524 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 |
|
26N | No opposition filed |
Effective date: 20101125 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20101031 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20101031 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20101031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20101031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20101031 Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100825 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20100224 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 11 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 602005019580 Country of ref document: DE Owner name: SYNOPSYS, INC. (N.D.GES.D. STAATES DELAWARE), , US Free format text: FORMER OWNER: CODENOMICON OY, OULU, FI |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20151126 AND 20151202 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: TP Owner name: SYNOPSYS, INC., US Effective date: 20160405 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 12 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 13 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 14 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602005019580 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04L0012260000 Ipc: H04L0043000000 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230528 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20230920 Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20230920 Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20230920 Year of fee payment: 19 |