EP2553584A1 - Method, computer program and device for validating task execution in scalable computer systems - Google Patents

Method, computer program and device for validating task execution in scalable computer systems

Info

Publication number
EP2553584A1
Authority
EP
European Patent Office
Prior art keywords
test
task
execution
cluster
program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP11715960A
Other languages
English (en)
French (fr)
Inventor
Damien Guinier
Patrick Le Dot
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bull SAS
Original Assignee
Bull SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bull SAS filed Critical Bull SAS
Publication of EP2553584A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/36: Preventing errors by testing or debugging software
    • G06F11/3668: Software testing
    • G06F11/3672: Test management
    • G06F11/3688: Test management for test execution, e.g. scheduling of test suites

Definitions

  • The present invention relates to the validation of the execution of software routines and, more particularly, to a method, a computer program and a device for validating the execution of tasks in evolving computer systems.
  • Intermediate software layers are generally used between the hardware layer and the software application layer.
  • Such intermediate layers make it possible to perform tasks that are as generic as possible, such as data transfers and processing.
  • The use of such tasks often makes it possible to decouple the application and hardware layers, thus allowing a software application to be executed on several different hardware layers.
  • While these intermediate layers typically include the operating system used, they may also include particular tasks related, in particular, to the optimization of hardware resources.
  • Certain tasks may be provided, in addition to the operating system, in order, in particular, to choose the most efficient mathematical algorithms depending on the targeted applications.
  • Testing and validating the execution of these tasks represents an essential phase in the development of computer systems comprising a hardware layer and an intermediate layer adapted to perform these tasks, in order to detect failures and thus guarantee the required level of reliability.
  • These tests are generally intended to observe the behavior of the system performing these tasks according to predetermined sequences, that is to say according to particular test data, in order to compare the results obtained with expected results.
  • The computer systems implementing these tasks evolve over time in order, on the one hand, to correct any errors observed and, on the other hand, to improve their performance or integrate new functionalities.
  • Such evolutions may concern hardware elements of the computer system, the hardware architecture or the configuration of these elements, as well as part of the intermediate software layer such as the operating system.
  • The execution of these tasks on the modified systems should then be checked again to ensure that performance has not degraded; this is known as non-regression testing.
  • Again, the tests aim at observing the behavior of the modified system performing the tasks on test data in order to compare the results obtained with expected or previous results.
  • There are many test and validation strategies. However, a test is typically tied to an environment, that is to say a hardware and software configuration, to test data, that is to say call sequences to the tested tasks and their parameters, and to a method of analyzing or obtaining results. This results in a number of combinations that can be particularly large. In order to facilitate test and validation operations, the latter are generally automated according to a mechanism of interleaved loops that performs all the tests exhaustively or according to predetermined scenarios, as sketched below. For these purposes, a hardware environment is often devoted to this function; it is configured according to the tests and validations to be performed.
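  • By way of illustration only, the following Python sketch shows such a mechanism of interleaved loops; the loop variables and the run_test function are assumptions introduced for readability, not elements of the system described here.

    # Exhaustive enumeration of test combinations by interleaved loops.
    # All names below (environments, datasets, run_test, ...) are assumed.
    environments = ["ethernet", "infiniband"]
    datasets = ["supplier_a", "supplier_b"]
    analyses = ["throughput", "latency"]

    def run_test(environment, dataset, analysis):
        # Placeholder for configuring the environment, running the task
        # sequence and comparing the results obtained with those expected.
        print(environment, dataset, analysis)

    for environment in environments:          # outer loop: configurations
        for dataset in datasets:              # middle loop: test data
            for analysis in analyses:         # inner loop: analysis methods
                run_test(environment, dataset, analysis)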
  • Figure 1 schematically illustrates a test system for the execution of tasks in a computer system.
  • The test and validation system 100 comprises a test hardware environment 105, itself comprising a plurality of computers, generally grouped in clusters, and a database 110, here containing the test and validation data and the configuration parameters of the test environment.
  • The test and validation system 100 further comprises a control system 115, for example a computer or a server, to identify the tests and validations to be performed, configure the test environment, transmit the test data, receive the results obtained and compare these results with the expected results.
  • A first step consists in identifying the test to be performed and obtaining the corresponding data.
  • A next step consists in configuring the test environment. It aims in particular at determining the computers to be used, that is to say, for example, a particular cluster, and their implementation. It also relates to the configuration of the interconnections between these computers, that is to say the configuration of the network or of the architecture used.
  • The networks are, for example, Ethernet or InfiniBand networks. It is thus possible to define a test environment as a function of the cluster to be used, of a version of the operating system implemented, and of a particular architecture.
  • The test data and the definitions or rules for obtaining the desired results are then identified in order to start the test. The results obtained are then processed and compared with the expected results by the control system 115.
  • Figure 2 schematically shows the results of tests and validations performed on a computer system evolving over time.
  • The tests are performed here at times t1, t2, ..., tn.
  • Each test (tests 1 to n) corresponds here to a particular test environment as well as to particular test data, for example test data from different suppliers.
  • The test and validation results are compared here with expected results to determine an indication of success or failure.
  • The results are, for example, data transmission rates, response times and processing times. These results can be presented in a table such as that illustrated in FIG. 2, in which each line corresponds to a particular test and each column corresponds to a time or test date.
  • An error, in the sense of a particular test, may be an error related to the result of the test, that is to say an erroneous result with respect to test data, but may also be related to a change in the test results when no modification of the computer system theoretically affecting the result has been made.
  • Likewise, a change in the indication of success of a test, when no modification of the computer system related to this test has been performed, can be interpreted as a potential error.
  • While a test and validation system such as that presented with reference to FIGS. 1 and 2 makes it possible to effectively test and validate the execution of static tasks in computer systems, it does not make it possible to use the hardware of an evolving test environment optimally. This results, in particular, in a loss of time related to the sequencing of the tests and in an underutilization of the resources of the test system.
  • The invention solves at least one of the problems discussed above.
  • The invention thus relates to a method for validating the execution of at least one task for an evolving computer system in a test environment comprising at least one cluster, each of said at least one cluster comprising a plurality of nodes, at least one node of said plurality of nodes of each of said at least one cluster comprising a program residing in memory, said program residing in memory being called the program, the method comprising the following steps:
  • transmitting at least one characteristic of said plurality of nodes of the cluster to which the node comprising said program belongs to a test element storage system; and, if said test element storage system comprises data representative of at least one test of said at least one task compatible with said at least one characteristic, receiving said data representative of said at least one test for executing said at least one task in the cluster to which the node comprising said program belongs.
  • The method according to the invention thus makes it possible to optimize the resources of a test environment comprising a cluster gathering several nodes during the execution of tests, by allowing parallel use of shared resources.
  • Said data receiving step allowing the execution of said at least one task comprises a step of receiving at least one configuration datum of the environment of the cluster to which the node comprising said program belongs, the method further comprising a step of configuring the environment of the cluster to which the node comprising said program belongs, according to said received configuration data.
  • The method according to the invention thus makes it possible to configure the test environment according to the tests to be performed.
  • Said data receiving step allowing the execution of said at least one task furthermore preferably comprises a step of receiving at least one data item for determining a result of execution of said at least one task, the method further comprising a step of determining at least one result of execution of said at least one task according to said at least one received data item.
  • The method according to the invention thus makes it possible to configure the way in which the results of test execution are evaluated.
  • The method further comprises a step of creating said at least one test of said at least one task according to said received data representative of said at least one test.
  • The method according to the invention thus makes it possible to create dynamic tests from referent test elements, thereby multiplying the test possibilities.
  • The method further comprises a step of creating an entry in a routine execution table in response to said data receiving step allowing the execution of said at least one task, said routine execution table being used to automatically execute said at least one task.
  • The method further comprises a step of transmitting a command to an external system in order to order the sending of an execution report of said at least one task.
  • The test environment is thus adapted to manage the execution of tests and to control the transmission of test reports independently.
  • Said test element storage system comprises a database, said database comprising environment, test, analysis and task data, said step of transmitting said at least one characteristic of said plurality of nodes of the cluster to which the node comprising said program belongs comprising a request for access to said database.
  • A plurality of tests is performed simultaneously.
  • The invention also relates to a computer program comprising instructions adapted to the implementation of each of the steps of the method described above, and to a device comprising means adapted to the implementation of each of these steps.
  • FIG. 3 schematically illustrates a test and validation environment for task execution implementing the invention;
  • FIG. 4 diagrammatically represents an algorithm implemented in a program residing in memory of an input node of a cluster belonging to a test and validation environment; and,
  • FIG. 5 illustrates an exemplary architecture of a node of a cluster adapted to implement the invention.
  • The object of the invention is notably to manage the execution of tests, that is to say sequences of tasks associated with configuration data and result analyses, from the test clusters themselves, according to their capabilities, in order to enable the validation of the execution of these tasks and to optimize the execution of these sequences.
  • A test environment here consists of several clusters, each comprising a set of nodes.
  • A particular node of each cluster, called an input node or login node, includes a memory-resident program to automatically launch task sequences.
  • When the invention is implemented in an environment based on a Linux operating system (Linux is a trademark), such a program typically includes crond.
  • This program, combined with a program of the shell script or binary type, that is to say a program allowing access to the operating system, makes it possible in particular to identify the resources of the cluster as well as its execution environment in order to launch task sequences that can be executed in the identified configuration. In other words, from the identified configuration information, it is possible to determine the tests that can be performed, to access the data allowing the execution of the task sequences, and to run them when the necessary resources are available.
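  • As a purely illustrative sketch, such a script, written here in Python rather than shell for consistency with the other examples, might gather the cluster characteristics as follows; the function name probe_cluster and the fields collected are assumptions.

    import json
    import os
    import platform
    import socket

    def probe_cluster():
        # Characteristics later used to request compatible tests: a cluster
        # reference, the operating system and its version, and a rough
        # description of the hardware resources.
        return {
            "cluster_name": socket.gethostname(),
            "os": platform.system(),
            "os_version": platform.release(),
            "cpu_count": os.cpu_count(),
        }

    if __name__ == "__main__":
        # Typically launched periodically by crond on the input node.
        print(json.dumps(probe_cluster()))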
  • A database includes a list of scheduled tasks and, for each scheduled task, the list of clusters capable of executing the corresponding task sequences.
  • This database also includes the referent tests themselves, comprising sequences of tasks to be executed, execution environments, and information relating to obtaining and analyzing the test results.
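  • For illustration only, a referent test entry of this kind could be represented as follows; the field names are assumptions chosen for readability, not the actual schema of the database.

    from dataclasses import dataclass, field

    @dataclass
    class ReferentTest:
        # Execution environment: nodes to use, interconnections, OS version.
        environment: dict = field(default_factory=dict)
        # Sequence of tasks to execute, with their parameters.
        tasks: list = field(default_factory=list)
        # Rules describing how to obtain and analyze the test results.
        analysis_rules: list = field(default_factory=list)
        # Clusters capable of executing the corresponding task sequences.
        compatible_clusters: list = field(default_factory=list)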
  • Thus, the program residing in memory of each input node of each cluster can create corresponding dynamic tests from referent tests and launch the associated task sequences according to the available resources.
  • Figure 3 schematically illustrates a test and validation environment 300 for task execution, implementing the invention.
  • A table 305 is used here to determine the tests to be performed by combining parameters.
  • A dynamic test to be performed is here defined by a combination of referent test elements, that is to say, for example, the combination of an environment, test data, analysis rules, tasks and a user.
  • A dynamic test can thus be created from references to referent test elements and from the targeted elements themselves.
  • The referent tests and the scheduled tasks are here stored in the database 310.
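  • A minimal sketch of such a creation step, assuming the referent elements are stored under identifiers; the dictionary layout and the function create_dynamic_test are hypothetical.

    # Referent test elements, indexed by identifier (contents assumed).
    REFERENT_ELEMENTS = {
        "environment": {"env-16n": {"nodes": 16, "network": "infiniband"}},
        "test_data": {"data-a": {"tasks": ["transfer", "compute"]}},
        "analysis": {"rules-1": {"metric": "throughput"}},
    }

    def create_dynamic_test(env_id, data_id, rules_id, user):
        # A dynamic test combines referent elements with a user.
        return {
            "environment": REFERENT_ELEMENTS["environment"][env_id],
            "test_data": REFERENT_ELEMENTS["test_data"][data_id],
            "analysis": REFERENT_ELEMENTS["analysis"][rules_id],
            "user": user,
        }

    test = create_dynamic_test("env-16n", "data-a", "rules-1", "bench")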
  • The referent tests may comprise elements of a different nature, for example hardware targets to be used during the execution of the tests.
  • The test environment also includes test clusters, here the test clusters 315-1 and 315-2, also called test clusters A and B.
  • Each test cluster comprises a set of nodes interconnected according to a predefined or configurable architecture, and a management system, generically referenced 320, linked to a particular node or distributed over several nodes.
  • A particular node 325, called an input node or login node, is used to access the cluster from an external system.
  • The input node comprises a memory-resident program, generically referenced 325.
  • The input node is also used to transmit the test results obtained to an external system.
  • The nodes of the cluster used to execute the tasks corresponding to a dynamic test are here generically referenced 335.
  • An evaluation system is also used to compare the results obtained with the expected results.
  • An application server 345, here an Apache/PHP application server, is preferably used to exchange data between the test clusters and the database 310.
  • The program residing in memory of the input node 325 is intended, in particular, to determine the resources of the cluster on which it is implemented, to access the test elements stored in the database 310 in order to create the dynamic tests, to start the execution of the task sequences corresponding to these tests when the necessary resources are available, and to transmit the results obtained to this database.
  • Each test may, for example, be stored in a separate file.
  • These files store, for each test, the test environment, comprising, for example, a reference to an operating system and to the version to be used, the test data, that is to say here the tasks to be performed and their parameters, and the analysis rules for obtaining the results.
  • These files can also be used to store, temporarily, the test results before they are transmitted to the database 310, from which they can be processed and analyzed by the application server 345.
  • The files 330 are, for example, stored in the cluster file system.
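  • Purely as an illustration, one possible layout for such a file, serialized here as JSON; the keys and the directory used are assumptions, not the format actually retained.

    import json
    from pathlib import Path

    # Assumed location of the test files in the cluster file system.
    TEST_DIR = Path("/shared/tests")

    def save_test(test_id, environment, tasks, analysis_rules, results=None):
        # One file per dynamic test; the results field is filled in,
        # temporarily, once the task sequence has been executed.
        TEST_DIR.mkdir(parents=True, exist_ok=True)
        record = {
            "environment": environment,
            "tasks": tasks,
            "analysis_rules": analysis_rules,
            "results": results,
        }
        path = TEST_DIR / f"{test_id}.json"
        path.write_text(json.dumps(record, indent=2))
        return path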
  • FIG. 4 diagrammatically represents an example of an algorithm implemented in a program residing in memory of an input node of a cluster belonging to a test environment, in order to perform dynamic tests according to the cluster resources. This algorithm is called, for example, by the global scheduler of the cluster considered or by its crontab.
  • A first step (step 400) aims at obtaining characteristics of the cluster, in particular those relating to its hardware resources, for example the number of nodes, their type and the configuration of the interconnections between the nodes. It can also be a predetermined reference, for example the name or a reference of the cluster.
  • These characteristics may also include the operating system implemented and its version. However, such characteristics are preferably associated with each test so that the required operating system is implemented.
  • A next step (step 405) consists in obtaining scheduled tasks and referent test data according to the characteristics of the cluster.
  • This step can take the form of SQL (Structured Query Language) queries including the characteristics of the cluster, for example its name, its type of microprocessor and its software configuration. These queries are here addressed to the database 310 via the application server 345, as previously described.
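  • A minimal sketch of such a query, using Python's standard sqlite3 module as a stand-in for the database behind the application server; the table and column names are assumptions.

    import sqlite3

    def fetch_compatible_tests(conn, cluster_name, cpu_type, os_version):
        # Parameterized request: only referent tests compatible with the
        # characteristics of the requesting cluster are returned.
        query = """
            SELECT t.id, t.environment, t.tasks, t.analysis_rules
            FROM referent_tests AS t
            JOIN compatible_clusters AS c ON c.test_id = t.id
            WHERE c.cluster_name = ? AND c.cpu_type = ? AND c.os_version = ?
        """
        return conn.execute(query, (cluster_name, cpu_type, os_version)).fetchall()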
  • In response, the cluster's input node receives the test elements allowing it to create one, several, or all of the dynamic tests that can and must be performed by the cluster that originated the request (step 410).
  • These data, received here from the database 310 via the application server 345, advantageously comprise, for each dynamic test, the following information:
  • the test environment, that is to say, for example, the number of nodes to use, the configuration of their interconnections, the environment variables and the operating system to be used;
  • the test data, that is to say, in particular, the sequence of tasks to be executed as well as the parameters for executing these tasks; and,
  • the analysis rules, that is to say the rules making it possible to obtain the test results to be transmitted in response to the execution of the test.
  • The data received from referent tests, or the dynamic tests created, are preferably stored in files (files 330 in FIG. 3).
  • A next step consists in determining whether there are still dynamic tests to be performed (step 415). If no dynamic test is to be executed, a command is sent to the application server 345 to order it to transmit, to the client at the origin of the dynamic tests carried out, a final report on the execution of these tests (step 420). This report includes, for example, the test results. The algorithm then ends.
  • Otherwise, a next step consists in identifying the dynamic tests to be performed according to the available resources of the cluster (step 425).
  • The available resources are compared with the resources required to perform the identified tasks, through a routine execution table, in order to determine the tests that can be executed.
  • This step is repeated until an interrupt halts the system or until resources are released for testing.
  • If the available resources of the cluster allow the execution of a task sequence corresponding to one or more dynamic tests, these tests are selected in the routine execution table. They may be, for example, the first executable tests found from an index of the routine execution table.
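  • A minimal sketch of this selection step, assuming each entry of the routine execution table records the number of nodes it requires and its current state; the field names are hypothetical.

    def select_runnable(execution_table, free_nodes):
        # Walk the routine execution table in index order and select the
        # first tests whose resource requirements fit the free nodes.
        selected = []
        for entry in execution_table:
            if entry["state"] != "pending":
                continue
            if entry["nodes_required"] <= free_nodes:
                selected.append(entry)
                free_nodes -= entry["nodes_required"]
        return selected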
  • For each selected test, the test data are obtained from the corresponding file in order to configure the environment and, if necessary, compile the test applications according to the operating system used.
  • Step 425 is executed as long as there remain dynamic tests to be performed.
  • The resources required to execute the selected tests are then reserved for each of these dynamic tests (steps 430-1 to 430-n) and the corresponding task sequences are started, thereby performing these dynamic tests (steps 435-1 to 435-n).
  • The results obtained during the execution of each task sequence, according to the analysis rules used, are here stored in the file associated with the test.
  • The resource reservation step may include a step of resetting the cluster nodes or changing the operating system, that is to say a complete software cleanup step.
  • The results obtained, as well as, preferably, the configuration of the dynamic test, are transmitted (steps 440-1 to 440-n) to the database 310 via the application server 345, to be processed and analyzed by an evaluation system in order to validate or not the execution of the tasks and to construct a representation of the test and validation results such as that described with reference to FIG. 2.
  • A command is then sent to the application server 345 to order it to transmit, to the client at the origin of the dynamic test carried out, a report on the execution of this test (steps 445-1 to 445-n).
  • The dynamic test performed is then marked as done (steps 450-1 to 450-n) so that it is not selected again.
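  • The sequence of steps 430 to 450 for a single dynamic test could be sketched as follows; the injected callables (reserve, release, execute, transmit) stand for cluster primitives and are assumed interfaces, not elements of the method itself.

    def run_dynamic_test(test, reserve, release, execute, transmit):
        # Steps 430 to 450 for one dynamic test, with the cluster
        # primitives injected as callables.
        nodes = reserve(test["nodes_required"])      # step 430: reservation
        try:
            results = execute(test["tasks"], nodes)  # step 435: task sequence
            test["results"] = results                # stored with the test
            transmit(test)                           # step 440: to database 310
        finally:
            release(nodes)                           # free the reserved nodes
        test["state"] = "done"                       # step 450: mark as done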
  • The algorithm then returns (step 455) to step 415 to determine whether dynamic tests are running or still need to be executed. If no dynamic test is running or needs to be executed, a command is sent to the application server 345 to order it to transmit, to the client at the origin of the dynamic tests carried out, a final report on the execution of these tests (step 420), and the algorithm ends.
  • An exemplary architecture of a node of a cluster adapted to implement the algorithm described with reference to FIG. 4 is illustrated in FIG. 5.
  • The device 500 here comprises a communication bus 502 to which are connected:
  • microprocessors 504;
  • random access memory (RAM) 506; each RAM component may be associated with a microprocessor or be common to the elements of the device 500; and,
  • communication interfaces 508 adapted to transmit and receive data.
  • The device 500 furthermore has internal storage means 512, such as hard disks, which can notably comprise the executable code of programs enabling the device 500 to implement the processes according to the invention, as well as data processed or to be processed according to the invention.
  • The communication bus allows communication and interoperability between the various elements included in the device 500 or connected to it.
  • The representation of the bus is not limiting and, in particular, the microprocessors are capable of communicating instructions to any element of the device 500, directly or via another element of the device 500.
  • The program or programs implemented can be loaded via one of the storage or communication means of the device 500 before being executed.
  • The microprocessors 504 control and direct the execution of the instructions or portions of software code of the program or programs according to the invention.
  • The program or programs, which are stored in a non-volatile memory, for example a hard disk, are transferred into the random access memory 506, which then contains the executable code of the program or programs according to the invention, as well as registers for storing the variables and parameters necessary for the implementation of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)
EP11715960A 2010-03-26 2011-03-22 Method, computer program and device for validating task execution in scalable computer systems Withdrawn EP2553584A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1052235A FR2958059B1 (fr) 2010-03-26 2010-03-26 Method, computer program and device for validating task execution in scalable computer systems
PCT/FR2011/050584 WO2011117528A1 (fr) 2010-03-26 2011-03-22 Method, computer program and device for validating task execution in scalable computer systems

Publications (1)

Publication Number Publication Date
EP2553584A1 (de) 2013-02-06

Family

ID=42306691

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11715960A Method, computer program and device for validating task execution in scalable computer systems

Country Status (6)

Country Link
US (1) US20130031532A1 (de)
EP (1) EP2553584A1 (de)
JP (1) JP2013524312A (de)
BR (1) BR112012021145A2 (de)
FR (1) FR2958059B1 (de)
WO (1) WO2011117528A1 (de)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060070033A1 (en) * 2004-09-24 2006-03-30 International Business Machines Corporation System and method for analyzing effects of configuration changes in a complex system
US20150095756A1 (en) * 2013-10-01 2015-04-02 Zijad F. Aganovic Method and apparatus for multi-loop, real-time website optimization
CN104133750A (zh) * 2014-08-20 2014-11-05 Inspur (Beijing) Electronic Information Industry Co., Ltd. Host and storage device compatibility adaptation test method and system
JP6684233B2 (ja) * 2017-01-12 2020-04-22 Hitachi, Ltd. Test input information retrieval apparatus and method
CN109766228A (zh) * 2017-11-09 2019-05-17 Beijing Jingdong Shangke Information Technology Co., Ltd. Interface-based online verification method and apparatus
CN110175130B (zh) * 2019-06-11 2024-05-28 Shenzhen Qianhai WeBank Co., Ltd. Method, apparatus, device and readable storage medium for testing the performance of a cluster system
US11194699B2 (en) * 2019-09-17 2021-12-07 Red Hat, Inc. Compatibility testing with different environment configurations
CN111913884A (zh) * 2020-07-30 2020-11-10 Baidu Online Network Technology (Beijing) Co., Ltd. Distributed testing method, apparatus, device, system and readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6711616B1 (en) * 2000-05-01 2004-03-23 Xilinx, Inc. Client-server task distribution system and method
US7426729B2 (en) * 2001-07-11 2008-09-16 Sun Microsystems, Inc. Distributed processing framework system
US7114159B2 (en) * 2001-07-11 2006-09-26 Sun Microsystems, Inc. Processing resource for use in a distributed processing framework system and methods for implementing the same
US6842891B2 (en) * 2001-09-11 2005-01-11 Sun Microsystems, Inc. Dynamic attributes for distributed test framework
US20040015975A1 (en) * 2002-04-17 2004-01-22 Sun Microsystems, Inc. Interface for distributed processing framework system
US8024705B2 (en) * 2003-11-12 2011-09-20 Siemens Product Lifecycle Management Software Inc. System, method, and computer program product for distributed testing of program code
KR20100034757A (ko) * 2007-07-17 2010-04-01 Advantest Corporation Electronic device, host apparatus, communication system, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2011117528A1 *

Also Published As

Publication number Publication date
US20130031532A1 (en) 2013-01-31
BR112012021145A2 (pt) 2019-09-24
WO2011117528A1 (fr) 2011-09-29
FR2958059B1 (fr) 2012-04-13
JP2013524312A (ja) 2013-06-17
FR2958059A1 (fr) 2011-09-30

Similar Documents

Publication Publication Date Title
WO2011117528A1 (fr) Method, computer program and device for validating task execution in scalable computer systems
EP0820013B2 (de) Method for the real-time monitoring of a computer system for its administration and for assisting with its maintenance while operational
EP2697936B1 (de) Method and device for processing administration commands in a cluster
FR3025909A3 (fr) Auditing of video on the web
US11567735B1 (en) Systems and methods for integration of multiple programming languages within a pipelined search query
US10318369B2 (en) Application performance management system with collective learning
WO2014072628A1 (fr) Method, device and computer program for placing tasks in a multi-core system
US10848371B2 (en) User interface for an application performance management system
EP2704010A1 (de) Method and device for processing commands in a set of computing elements
WO2021089357A1 (fr) Attack detection using hardware performance counters
FR3003365A1 (fr) Method and device for managing software updates of a set of equipment of a system such as an aircraft system
EP3754506B1 (de) Method and system for the automatic validation of COTS
EP2721487B1 (de) Method, device and computer program for updating the software of clusters in order to optimize their availability
EP3729273A1 (de) System and method for formulating and executing functional tests for server clusters
EP3767475A1 (de) Device and method for analyzing the performance of a web application
EP2734921B1 (de) Method, computer program and device for assisting in the deployment of clusters
US10235262B2 (en) Recognition of operational elements by fingerprint in an application performance management system
WO2013088019A1 (fr) Method and computer program for managing multiple failures in a computing infrastructure comprising high-availability equipment
FR3079648A1 (fr) Content message routing for information sharing within a supply chain
EP2727057B1 (de) Method and computer program for dynamically identifying the components of a cluster and automating operations for optimized management of the cluster
WO2020058343A1 (fr) Method for analyzing the malfunctions of a system and associated devices
EP3767476A1 (de) Device and method for analyzing the performance of an n-tier application
Whitesell et al. Healthy Microservices
FR3108747A1 (fr) Method for managing a digital file describing a data model, method for using such a file by client equipment, and corresponding devices, server equipment, client equipment, system and computer programs
FR2984052A1 (fr) Method and computer program for configuring equipment in a computing infrastructure having an InfiniBand type architecture

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20120928

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20151001