EP1738349B1 - Method and system for volatilely building an image displaceable of a display system from a plurality of objects - Google Patents


Info

Publication number
EP1738349B1
Authority
EP
European Patent Office
Prior art keywords
line
pixel
pixels
objects
display system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP05753728.4A
Other languages
German (de)
French (fr)
Other versions
EP1738349A1 (en)
Inventor
Philippe Hauttecoeur
Hervé Rostan
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP1738349A1 publication Critical patent/EP1738349A1/en
Application granted granted Critical
Publication of EP1738349B1 publication Critical patent/EP1738349B1/en
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/42: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of patterns using a display memory without fixed position correspondence between the display memory contents and the display position on the screen
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00: Aspects of display data processing
    • G09G2340/02: Handling of images in compressed format, e.g. JPEG, MPEG

Definitions

  • the present invention relates to a method for constructing an image to be displayed on a display system, such as a screen, from a plurality of objects.
  • An object is a graphic element that can be displayed on the screen. It can be of type video, bitmap or vector for example.
  • the current display mechanisms typically simply fetch an image that has already been built somewhere in memory, in a predefined area that is usually the size of a video frame, hence its name “frame memory” or “frame buffer”.
  • an object copy module is commonly used to retrieve an object from any memory area and write it into the frame memory.
  • Prior-art graphics and video systems use frame memories in which images are built from graphics and video primitives and from objects copied into these areas. When there are several memory planes, these are generally recombined at display time using multiplexing, chroma-keying or transparency-merging (alpha-blending) mechanisms. This approach consumes a great deal of memory, especially for systems aiming for very good visual quality, which then require multiple memory planes. Moreover, each time a graphic object is replaced by another, deleted or moved, the frame memory containing it is entirely rebuilt, resulting in a loss of fluidity.
  • the present invention aims to overcome the aforementioned drawbacks by proposing an image construction method in which the memory useful for this construction and the useful power of the system are reduced compared to the systems of the prior art.
  • the above object is achieved with a method for constructing an image capable of being displayed on a display system from a plurality of objects as defined by claim 1.
  • the present invention makes it possible to directly display objects on a display system such as a screen.
  • the object is at the heart of the image construction mechanism. The construction of the image is said to be volatile since the mechanism does not use a frame memory and the image exists nowhere else than on the display system. In other words, the image is built line by line, in real time, from the places where the objects are scattered to the display system. In prior-art systems, by contrast, the image is first constructed from the objects, stored in a frame memory, and the already-formed image is then displayed.
  • by display system is meant a system comprising one or more screens synchronized in line and in frame, driven by a single controller generating the screens' timing signals, this controller being located in the accelerator.
  • two side-by-side screens can be considered as a single dual-definition screen.
  • the display system may include a display memory capable of receiving the pixels forming the image. The image is then formed within this display memory rather than directly on a screen.
  • the present invention has a certain advantage over standard systems since it requires little memory (lack of frame memory), therefore a lower cost.
  • the present invention therefore makes it possible to produce images of very high visual quality from low power components.
  • since the image that an observer sees exists only on the screen and is not stored, the image must be reconstructed for each new video frame.
  • standard systems are rarely able to make their Human Machine Interface (HMI) dynamic, or do so at the expense of the real-time aspect, since the large number of operations necessary to prepare the image in the frame memory may exceed the duration of the screen's video frame.
  • HMI Human Machine Interface
  • for a standard graphics coprocessor, the dynamic aspect of an image (of an HMI) is a sizing factor.
  • with the present invention, a dynamic image can be obtained without over-sizing the system, since the number of operations to be performed within the frame time remains the same.
  • dynamic image is meant an image composed of 2D objects that can evolve over time, move, transform, change state, etc.
  • a system for constructing an image that can be displayed on a display system from a plurality of objects stored in RAM, as defined by claim 10.
  • in the manner of an OSD (On-Screen Display), the present system includes means for inserting or overlaying graphic information into a main video stream. Unlike an OSD, however, the inserted information consists of heterogeneous, parametric objects.
  • a module 4 for copying objects constructs said image which is then stored in a frame memory 5.
  • the latter is usually the size of a video frame.
  • a display module 6 then simply fetches the already-built image from the frame memory 5 and displays it on the screen 1 for the duration of a video frame.
  • the present invention implements a completely different method.
  • On the figure 2 it is also desired to display an image on the screen 1 from a plurality of objects 2 stored in RAM 3.
  • a hardware processor, otherwise called a hardware graphics accelerator, builds said image on the fly and in real time from the objects 2.
  • This accelerator, shown in particular in figure 5, performs various operations during the video frame and repeats them at each new frame.
  • the system according to the invention builds in real time the pixels to be displayed on this line from the objects 2.
  • the hardware accelerator comprises a module 7 for building the pixels of the current line of the image to be displayed, at least one line memory 8 for temporarily storing the pixels thus built, and a module 9 for displaying the line on the screen 1.
  • the volatile construction mechanism according to a comparative example uses these time intervals to display on the screen an image composed of objects and stable for the duration of a frame. Macroscopically, one distinguishes the VBI (vertical blanking interval) and the VAI (vertical active interval). The VBI of the frame period is used to carry out all the preparation necessary for the smooth running of the volatile construction that will take place in the VAI through a line process.
  • for each active line of the screen, the volatile construction consists of filling a line memory with the line segments of the objects that are active on the line in question. The contents of the line memory are then sent to the screen at the pixel frequency.
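As an illustration only, the per-line fill just described can be sketched as follows; the object layout (dicts with "x", "y" and a 2D "pixels" array) is an assumption for the example, not the patent's data structure.

```python
# Illustrative sketch of the per-line "volatile" fill: for screen line y,
# copy into the line memory the pixel segment of every object intersecting
# that line. Data layout and names are hypothetical.

WIDTH = 10  # pixels per screen line

def build_line(objects, y):
    """Fill a line memory with the segments of objects active on line y."""
    line = [0] * WIDTH  # line memory, background colour = 0
    for obj in objects:  # assumed already ordered back to front
        if obj["y"] <= y < obj["y"] + len(obj["pixels"]):
            segment = obj["pixels"][y - obj["y"]]  # the object's line segment
            for dx, px in enumerate(segment):
                x = obj["x"] + dx
                if 0 <= x < WIDTH:
                    line[x] = px
    return line  # contents then sent to the screen at the pixel frequency

objs = [{"x": 3, "y": 2, "pixels": [[7, 7, 7], [8, 8, 8]]}]
line3 = build_line(objs, 3)  # the object's second row lands at x = 3..5
```

No frame-sized buffer appears anywhere: only one line's worth of pixels exists at a time.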
  • with the present invention, while a line (L-1) is being displayed, the next line (L) is being built.
  • sync H corresponds to line synchronization (Horizontal)
  • sync V corresponds to frame synchronization (Vertical).
  • the first step may be to decode a hardware descriptor in order to derive the variables that will be used when building lines during the VAI.
  • a hardware descriptor is associated with each object.
  • the term “hardware descriptor” means a coherent set of data, generally created and initialized by an application process. This descriptor contains all the information needed by the hardware accelerator to display the object associated with it. This information, stored in particular in registers or memories, includes graphic parameters describing the nature of the object and the display parameters. The latter can be separated into essential parameters (position, display attributes such as transparency level, etc.) and transformation parameters (partial display or “clipping”, “resizing”, rotation, anamorphism, filters, etc.). Each object can be of a different nature in that it belongs to a given class (vector, video, bitmap, etc.) or has a given color organization (palette mode, black and white, colors in 16, 24 or 32 bits, with or without transparency, etc.).
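As a purely illustrative model of such a descriptor (all field names and groupings below are assumptions; the patent does not fix a layout), one could write:

```python
# Hypothetical shape of a "hardware descriptor", grouped as the text
# suggests: nature of the object, essential display parameters, and
# transformation parameters. Field names are assumptions.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HardwareDescriptor:
    # nature of the object
    obj_class: str              # "vector", "video", "bitmap", ...
    color_format: str           # "palette", "rgb16", "rgb24", "rgb32", ...
    base_address: int           # where the object's raw data lives in memory
    width: int
    height: int
    # essential display parameters
    x: int = 0
    y: int = 0
    depth: int = 0              # z-order
    transparency: float = 0.0   # 0.0 = opaque, 1.0 = fully transparent
    # transformation parameters
    clip: Optional[Tuple[int, int, int, int]] = None  # left, top, right, bottom
    resize: Optional[Tuple[int, int]] = None          # target width, height
    rotation: float = 0.0

# an object partly off the left edge of the screen
d = HardwareDescriptor("bitmap", "rgb24", 0x1000, 64, 32, x=-8, y=4)
```

Decoding such a descriptor then amounts to turning these raw parameters into the working variables used during the volatile construction.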
  • the descriptor can be local to the hardware accelerator or held in an external memory. In the latter case, the descriptor must first be retrieved before decoding can begin.
  • the decoding of the descriptor consists of extracting all the variables that will be used during the volatile construction.
  • the decoding of the descriptor will be more or less long and more or less complex, depending on the ability of the hardware accelerator to perform advanced functions (filters, resizing, anamorphism, clipping, displacements, etc.).
  • the decoding of certain descriptor information may even be unnecessary if the parameters provided already correspond to the variables useful for the volatile construction. Nevertheless, the idea is to delegate to the hardware level the preprocessing of the raw data, so that the hardware can guarantee real time and be synchronized within the frame with the volatile construction.
  • API Application Program Interface
  • when the coordinates of the object are slightly negative relative to the origin of the screen, only the visible part of the object has to be processed for display, the rest being clipped. This means that the actual position of the object must be determined, and that the address of the object in memory must be updated to point directly to the first visible pixel.
  • the size of the line segment useful for the volatile construction of a line must also be determined according to the "clipping", but also according to the color organization of the pixels in memory.
  • the decoding of the descriptor results in a series of working variables that will be exploited by the hardware accelerator during the volatile construction. While the descriptors may be stored in an external memory, the useful variables will advantageously be stored locally in the hardware accelerator for reasons of accessibility.
  • initially, the address pointer points to the first active line segment of the object stored in memory; as the line process advances, the pointer must be updated to point to each new active line segment.
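The clipping arithmetic described above can be sketched as follows; the function name and conventions (left-edge clipping only, a linear per-line pixel layout) are assumptions for the example, not the patent's.

```python
# Illustrative decoding step for horizontal clipping: given an object partly
# off the left edge, compute its on-screen position, the visible segment
# length, and the address of the first visible pixel, as a function of the
# bytes-per-pixel implied by the color format.

def decode_clip(x, width, base_address, bytes_per_pixel, screen_width):
    skipped = max(0, -x)                       # pixels clipped on the left
    visible_x = max(0, x)                      # actual on-screen position
    visible_len = min(x + width, screen_width) - visible_x
    first_pixel_addr = base_address + skipped * bytes_per_pixel
    return visible_x, max(0, visible_len), first_pixel_addr

# object at x = -3, 10 pixels wide, 2 bytes per pixel, on an 8-pixel screen
vx, vlen, addr = decode_clip(-3, 10, 0x1000, 2, 8)
```

For a fully visible object the address is left untouched and the segment length is simply the object width, clipped to the right screen edge if necessary.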
  • a second step can be the sorting of objects in descending order of depth. This process makes it possible to prioritize the order in which the objects will be overlaid on the screen. It becomes particularly valuable when the global transparency of the objects is managed by the hardware accelerator, since it then faithfully reproduces the complex transparency between the objects. This means that an object A placed behind an object B will be seen in proportion to the transparency of the object B in front of it.
  • the sorting can be done, for example, by first scanning all the objects to determine the deepest one, then scanning the objects again without taking into account those already sorted, until all the objects are sorted.
  • each object may, for example, contain in one of its registers the index of the next object in descending order of depth, so that the line process building the image can pass simply from one object to another, in the order required by complex transparency.
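A sketch of this repeated-scan sort and of the resulting next-object chain (function and variable names are assumptions):

```python
# Illustrative model: repeatedly scan for the deepest object not yet sorted
# (larger depth value = deeper), and record for each object the index of the
# next object in descending depth, so a line process can walk the chain.

def chain_by_depth(depths):
    order = []
    remaining = set(range(len(depths)))
    while remaining:                                   # one scan per object
        deepest = max(remaining, key=lambda i: depths[i])
        order.append(deepest)
        remaining.remove(deepest)
    nxt = [None] * len(depths)                         # per-object "next" register
    for a, b in zip(order, order[1:]):
        nxt[a] = b
    return order, nxt

order, nxt = chain_by_depth([2, 9, 5])  # three objects, object 1 is deepest
```

Walking `nxt` from the first entry of `order` visits the objects back to front, which is exactly the order a transparency-aware overlay needs.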
  • the volatile image construction mechanism is a line process, which means that it executes again for each active line of the screen, that is for each line of the VAI (see figure 3 ).
  • the construction mechanism of a line is synchronized with the display mechanism, which simply consists of sending a sequence of pixels to the screen at the pixel frequency. A single line memory can very well be used for both mechanisms, provided the construction mechanism is timed so as not to update pixels that have not yet been sent to the screen. Two or more line memories can also be used. In this case, one or more so-called “off-screen” memories are used to build the following lines, while a so-called “on-screen” memory is sent to the screen to fill the current line. At the line frequency, for example at the line sync pulse, the roles are reversed: the “off-screen” memory becomes the “on-screen” memory and vice versa.
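The two-line-memory ping-pong can be modeled as below; the class and method names are illustrative, not the hardware design.

```python
# Illustrative ping-pong of two line memories: while the "on screen" buffer
# is drained to the display, the "off screen" buffer is filled with the next
# line; at each line sync the roles swap.

class LineMemories:
    def __init__(self, width):
        self.on_screen = [0] * width    # being sent to the display
        self.off_screen = [0] * width   # being filled with the next line

    def swap(self):
        """Role reversal at the line sync pulse."""
        self.on_screen, self.off_screen = self.off_screen, self.on_screen

mem = LineMemories(4)
mem.off_screen[:] = [1, 2, 3, 4]   # build line L while line L-1 is displayed
mem.swap()                         # line sync: L becomes the displayed line
```

The swap is a pointer exchange, not a copy, so it costs nothing at the line frequency.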
  • the figure 4 illustrates this mechanism on two successive lines Line n and Line n + 1 where we note the role reversal between the two memories 10 and 11.
  • a process is also performed to access, from hardware commands, the memory where the objects are stored.
  • the "hardware" mechanism of volatile construction according to the invention can advantageously be implemented in an FPGA which makes it possible to integrate all the modules and processes necessary for carrying out the method according to the invention.
  • in figure 5 we see how a hardware accelerator 21 can be architected on such an FPGA.
  • a memory 20 (SDRAM, DDRAM, etc.) is advantageously used whose bandwidth is large enough to allow the volatile construction mechanism to retrieve enough information (useful data of the active objects) during the line process.
  • This memory 20 is a random access memory external to the accelerator 21.
  • the image building method according to the invention makes it possible to manage depth levels between graphic and video objects with no limit on the maximum number of layers.
  • This number of layers does not size the hardware resources of the accelerator.
  • This method also makes it possible to manage the transparency between the graphic and video objects as a function of the positioning of the objects on the z-axis; the number of graphic layers does not size the hardware needed to obtain the overall complex transparency.
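The complex transparency just described can be illustrated by a back-to-front blend over a single color channel; this is a software model of the effect, not the hardware mixer.

```python
# Illustrative back-to-front alpha blend: objects are visited in descending
# depth, and each incoming pixel is mixed with what is already on the line
# according to its transparency (0.0 = opaque, 1.0 = fully transparent).

def blend(under, over, transparency):
    """Mix one color channel of 'over' onto 'under'."""
    return over * (1.0 - transparency) + under * transparency

# background 0; object A (deeper, opaque, value 100); object B in front,
# 25% opaque (transparency 0.75, value 200)
pixel = blend(0, 100, 0.0)        # A drawn first: the pixel becomes 100
pixel = blend(pixel, 200, 0.75)   # B on top: 200 * 0.25 + 100 * 0.75
```

Because the chain is evaluated per pixel and per line, adding more layers changes only how many objects are visited, not the hardware resources needed.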

Description

The present invention relates to a method for constructing an image to be displayed on a display system, such as a screen, from a plurality of objects. An object is a graphic element that can be displayed on the screen. It may be of video, bitmap or vector type, for example.

While the "object" approach is a widespread concept at the application level, current display mechanisms traditionally simply fetch an image that has already been built somewhere in memory, in a predefined area that is usually the size of a video frame, hence its name "frame memory" or "frame buffer". Between an application module managing objects and a display module using a frame memory, an object copy module is commonly used to retrieve an object from any memory area and write it into the frame memory.

Prior-art graphics and video systems use frame memories in which images are built from graphics and video primitives and from objects copied into these areas. When there are several memory planes, these are generally recombined at display time using multiplexing, chroma-keying or transparency-merging (alpha-blending) mechanisms. This approach consumes a great deal of memory, especially for systems aiming for very good visual quality, which then require multiple memory planes. Moreover, each time a graphic object is replaced by another, deleted or moved, the frame memory containing it is entirely rebuilt, resulting in a loss of fluidity.

For standard systems, the dynamism and visual quality one wants to bring to a Human-Machine Interface are completely sizing for the system's processor core (hardware and/or software) and for the amount of memory. Thus:

  • the more powerful the processor, the more operations it can perform when preparing the image planes, the larger the changes in content it can manage from one frame to the next, and the more memory planes it can handle;
  • the more memory planes there are, the greater the number of graphic levels, which increases the possibilities in terms of complex transparency in the final image.

The present invention aims to overcome the aforementioned drawbacks by proposing an image construction method in which both the memory needed for this construction and the processing power needed by the system are reduced compared with prior-art systems.

The above object is achieved with a method for constructing an image capable of being displayed on a display system from a plurality of objects, as defined by claim 1.

Unlike prior-art systems, the present invention makes it possible to display objects directly on a display system such as a screen. The object is at the heart of the image construction mechanism. The construction of the image is said to be volatile since the mechanism does not use a frame memory and the image exists nowhere else than on the display system. In other words, the image is built line by line, in real time, from the places where the objects are scattered to the display system. In prior-art systems, by contrast, the image is first constructed from the objects, stored in a frame memory, and the already-formed image is then displayed.

By display system is meant a system comprising one or more screens synchronized in line and in frame, driven by a single controller generating the screens' timing signals, this controller being located in the accelerator. For example, from the accelerator's point of view, two side-by-side screens can be considered as a single double-definition screen.

Moreover, the display system may include a display memory capable of receiving the pixels forming the image. The image is then formed within this display memory rather than directly on a screen.

Furthermore, when graphic or video objects are stored in memory in compressed form, the present invention has a definite advantage over standard systems since it requires little memory (no frame memory), and therefore has a lower cost.

In addition, since the image is constructed directly, the step of preparing the frame in a frame memory is avoided, which reduces the memory bandwidth required. With fewer operations to perform to display an image on the screen, the whole system can be less powerful, and therefore consume less energy and generate less disturbance, which is essential for an embedded system, for example. The present invention therefore makes it possible to produce images of very high visual quality from low-power components.

Since the image an observer sees exists only on the screen and is not stored, the image must be reconstructed for each new video frame. As a result, producing sequences of complex images with dynamic objects requires no additional power compared with always displaying the same image. Standard systems, on the other hand, are rarely able to make their Human-Machine Interface (HMI) dynamic, or do so at the expense of the real-time aspect, since the large number of operations needed to prepare the image in the frame memory may exceed the duration of the screen's video frame. For a standard graphics coprocessor, the dynamic aspect of an image (of an HMI) is a sizing factor. With the present invention, a dynamic image can be obtained without over-sizing the system, since the number of operations to be performed within the frame time remains the same. By dynamic image is meant an image composed of 2D objects that can evolve over time, move, transform, change state, and so on. Advantageous modes of implementation are defined by the dependent claims.

According to another aspect of the invention, a system is proposed for constructing an image capable of being displayed on a display system from a plurality of objects, in particular stored in random access memory, as defined by claim 10.

Advantageously, the hardware accelerator comprises the following elements:

  • an object manager to activate and characterize object-related processing,
  • a memory for storing the parameters and variables of each object,
  • a DMA ("Direct Memory Access") hardware module, controlled by the accelerator (and not by the processing unit), able to retrieve object data from a memory,
  • a buffer for temporarily storing the data coming from the DMA hardware module,
  • a module for decompressing and converting raw data from the buffer into pixels,
  • a multiplexer for selecting the pixel to be displayed between the one coming from the decompression and conversion module and the one coming from an external source, provided that the external source, the accelerator and the screen are synchronized,
  • a mixer for blending the pixel coming from the multiplexer with the currently displayed pixel according to the transparency of the pixel coming from the multiplexer, and
  • two line memories which alternately handle the display, on the current line of the display system, of previously stored pixels, and the storage of the pixels coming from the mixer that will be displayed on the next line.

In the manner of an OSD (On-Screen Display), the present system includes means for inserting or overlaying graphic information into a main video stream. Unlike an OSD, however, the inserted information consists of heterogeneous, parametric objects.

Other advantages and features of the invention will become apparent on examining the detailed description of an embodiment, which is in no way limiting, and the appended drawings, in which:
  • Figure 1 is a simplified schematic view of a process for constructing and displaying an image according to the prior art;
  • Figure 2 is a simplified schematic view of a process for constructing and displaying an image line by line in real time according to the present invention;
  • Figure 3 is a diagram illustrating two time zones of a video frame according to the invention;
  • Figure 4 is a diagram illustrating how two line memories are used for line-by-line image construction and display according to the invention; and
  • Figure 5 is a simplified schematic view of a hardware accelerator implementing the method according to the present invention.

In figure 1, it is desired to build and display on a screen 1 an image from a plurality of objects 2 contained in a random access memory 3. To do this, according to the prior art, an object-copying module 4 constructs said image, which is then stored in a frame memory 5. The latter is usually the size of a video frame. A display module 6 then simply fetches the image already built from the frame memory 5 and displays it on the screen 1 for the duration of a video frame.

The present invention implements a completely different method. In figure 2, it is also desired to display an image on the screen 1 from a plurality of objects 2 stored in RAM 3. To do this, a hardware processor, otherwise called a hardware graphics accelerator, is used to build said image on the fly and in real time from the objects 2. This accelerator, represented in particular in figure 5, performs various operations during the video frame and repeats them at each new frame. Thus, during a frame, for each active line of the frame, the system according to the invention builds in real time the pixels to be displayed on that line from the objects 2. In other words, the hardware accelerator comprises a module 7 for building the pixels of the current line of the image to be displayed, at least one line memory 8 for temporarily storing the pixels thus built, and a line display module 9 for the screen 1.

In general, in the video frame that it generates, or onto which it synchronizes and locks, the hardware accelerator then identifies, in accordance with figure 3, two distinct time zones:
  • the vertical blanking interval (VBI)
  • the vertical active interval (VAI)

The volatile construction mechanism according to a comparative example relies on these time intervals to display on the screen an image composed of objects, stable for the duration of a frame. Macroscopically, the VBI and the VAI are distinguished. The VBI of the frame period is used to carry out all the preparation needed for the smooth running of the volatile construction, which takes place in the VAI as a line process. For each active line of the screen, the volatile construction consists of filling a line memory with the line segments of the objects that are active in the line in question. The contents of the line memory are then sent to the screen at the pixel frequency. As will be seen below with figure 4, the present invention allows the next line (L) to be built while a line (L-1) is being displayed.

In figure 3, sync H corresponds to the line (Horizontal) synchronization, and sync V corresponds to the frame (Vertical) synchronization.

We will now describe the process carried out in the vertical blanking interval VBI to prepare the volatile construction of the vertical active interval VAI. The first step may consist of decoding a hardware descriptor in order to produce variables that will be used when building the lines during the VAI.

A hardware descriptor is associated with each object. The term "hardware descriptor" means a coherent set of data, generally created and initialized by an application process. This descriptor contains all the information the hardware accelerator needs to display the object associated with it. This information, stored in particular in registers or memories, includes graphic parameters describing the nature of the object and display parameters. The latter can be split into essential parameters (position, display attributes such as transparency level, ...) and transformation parameters (partial display or "clipping", resizing, rotation, anamorphism, filters, ...). Each object can be of a different nature in that it belongs to a given class (vector, video, bitmap, ...) or has a given color organization (palette mode, black and white, colors in 16, 24 or 32 bits, with or without transparency, ...).

The descriptor can be local to the hardware accelerator or reside in an external memory. In the latter case, the descriptor must first be fetched before decoding can begin.

Decoding the descriptor consists of extracting all the variables that will be used during the volatile construction. The decoding will be more or less long and more or less complex, depending on the capabilities of the hardware accelerator to perform advanced and complex functions (filters, resizing, anamorphism, clipping, displacements, ...). Decoding certain descriptor information may even be unnecessary if the parameters provided already correspond to the variables useful for the volatile construction. Nevertheless, the idea is to delegate the pretreatment of the raw data to the hardware level, so that it can guarantee real-time operation and be synchronized within the frame with the volatile construction.
It also makes it possible to store in the descriptor advanced parameters that will be transmitted directly (or almost directly) by an object-oriented API ("Application Program Interface"). Indeed, the hardware accelerator is associated with a microprocessor which is programmed to provide a set of predefined functions accessible through this API.

To illustrate the role of decoding, consider an object consisting of, for example, a bitmap image stored somewhere in memory. Its descriptor then contains:
  • the address where the image data starts in memory;
  • the size of the image;
  • the position of the object on the screen;
  • the color organization of the bitmap as stored in memory;
  • the overall level of transparency of the object;
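As an illustration, such a descriptor can be modelled as a simple record. The field and format names below (e.g. `color_format`, "RGB565") are hypothetical, not taken from the patent; this is a minimal sketch of the information listed above.

```python
from dataclasses import dataclass

@dataclass
class HardwareDescriptor:
    """Illustrative hardware descriptor for a bitmap object (field names are hypothetical)."""
    data_address: int   # address where the image data starts in memory
    width: int          # size of the image, in pixels
    height: int
    x: int              # position of the object on the screen
    y: int
    color_format: str   # color organization as stored in memory, e.g. "RGB565"
    alpha: int          # overall transparency level of the object (0..255)

# Example: a 64x32 RGB565 bitmap placed at (10, 20), fully opaque
desc = HardwareDescriptor(0x80000000, 64, 32, 10, 20, "RGB565", 255)
```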

Imagine that the coordinates of the object are slightly negative with respect to the origin of the screen. Only the visible part of the object will have to be processed for display, the rest being cropped ("clipping"). This means that the actual position of the object will have to be determined, and that the address of the object in memory will have to be updated to point directly at the first visible pixel. The size of the line segment useful for the volatile construction of a line must also be determined according to the clipping, but also according to the color organization of the pixels in memory.
Thus, for each object, decoding the descriptor yields a series of working variables that will be exploited by the hardware accelerator during the volatile construction. While the descriptors may be stored in an external memory, the useful variables will advantageously be stored locally in the hardware accelerator for reasons of accessibility. Some of these variables will be modified as the line processes progress. For example, at the end of the descriptor decoding, the address pointer points at the first active line segment of the object stored in memory; as the line processes advance, it will have to point at each new active line segment in turn.
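The left-clipping adjustment described above can be sketched as follows, assuming for illustration an uncompressed bitmap at 2 bytes per pixel; the function name and return layout are hypothetical, and real decoding would also handle right and bottom clipping.

```python
def decode_clipping(x, y, width, base_address, bytes_per_pixel=2):
    """Adjust an object's on-screen position, start address and visible
    segment length when its x coordinate is negative (clipped at the
    screen origin). A minimal sketch of the descriptor decoding step."""
    skip_x = max(0, -x)                       # pixels clipped on the left
    visible_x = max(0, x)                     # actual on-screen position
    segment_pixels = width - skip_x           # useful segment length per line
    # address updated to point directly at the first visible pixel
    start_address = base_address + skip_x * bytes_per_pixel
    return visible_x, start_address, segment_pixels

# An object at x = -8: its 8 leftmost pixels are clipped away
print(decode_clipping(-8, 0, 100, 0x1000))   # (0, 0x1010, 92)
```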

A second step can be the sorting of objects in order of decreasing depth. This process establishes the order in which the objects will be superposed on the screen. It becomes particularly useful when the global transparency of the objects is managed by the hardware accelerator, since it then makes it possible to faithfully render the complex transparency between the objects. This means that an object A placed behind an object B will be seen in proportion to the transparency of the object B in front of it. The sorting can be done, for example, by first scanning all the objects to determine the most deeply buried one, then repeating the scan while ignoring the objects already sorted, until all the objects are sorted.
At the end of this scan, each object may for example contain, in one of its registers, the index of the next object in order of decreasing depth, so that the line process building the image can pass simply from one object to the next, in the order required by the complex transparency.
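The result of this sort can be sketched behaviourally as follows. Larger numbers stand for more deeply buried objects, and the chained-index output mirrors the per-object "next object" register described above; this is a software sketch of the outcome, not the hardware scan itself.

```python
def sort_by_depth(depths):
    """Return the index of the deepest object, plus for each object the
    index of the next object in decreasing depth order (None for the
    frontmost one), i.e. the chain the line process would follow."""
    order = sorted(range(len(depths)), key=lambda i: depths[i], reverse=True)
    next_index = [None] * len(depths)
    for a, b in zip(order, order[1:]):
        next_index[a] = b        # each object points at the next-shallower one
    return order[0], next_index

first, links = sort_by_depth([3, 9, 1])   # object 1 is the most deeply buried
```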

The complex transparency is obtained only through the inventive principle of volatile construction according to the invention, described below.

We will now describe the process carried out in the vertical active interval VAI. The volatile image construction mechanism is a line process, which means that it executes anew for each active line of the screen, that is, for each line of the VAI (see figure 3).

Line process:

The mechanism for building a line is synchronized with the display mechanism, which simply consists of sending a sequence of pixels to the screen at the pixel frequency. To do this, a single line memory can very well be used for both mechanisms, provided the construction mechanism is timed so as not to overwrite pixels that have not yet been sent to the screen. Two or more line memories can also be used. In this case, the so-called "off screen" memories serve to build the following lines, while the so-called "on screen" memory is sent to the screen to fill the current line. At the line frequency, for example at the line sync pulse, the roles are swapped: the "off screen" memory becomes the "on screen" memory and vice versa. Figure 4 illustrates this mechanism over two successive lines, Line n and Line n+1, where the role reversal between the two memories 10 and 11 can be seen.
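The ping-pong between the two line memories can be sketched as follows. This is a sequential software approximation of what the hardware does concurrently, and all names are illustrative.

```python
def run_lines(num_lines, width):
    """Simulate two line memories swapping roles at each line sync:
    the 'off screen' memory receives the line being built, while the
    'on screen' memory feeds the display."""
    memories = [[0] * width, [0] * width]
    on_screen, off_screen = 0, 1
    displayed = []
    for line in range(num_lines):
        memories[off_screen] = [line] * width           # build into off-screen memory
        on_screen, off_screen = off_screen, on_screen   # swap roles at the line sync
        displayed.append(list(memories[on_screen]))     # freshly built line is shown
    return displayed

lines = run_lines(3, 4)   # three lines of four pixels each
```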

Construction of the line:

For a given screen line, the task is to recover all the segments of the objects that must appear in that line.

To know whether an object is in the line, a process is executed that identifies, for each object in turn, whether or not the object is present in the screen line considered. Whether an object is present in a screen line depends on several parameters specific to each object. In the case of a rectangular bitmap object stored as such and displayed as such, these include:
  • the height of the object (in number of pixels);
  • the "y" coordinate of the object (in number of pixels compared to a conventional origin).

If the object is assigned a clipping zone, the parameters of that zone will also be taken into account.

If the object must undergo a vertical resizing, the final size of the object, that is, its size on the screen, will be taken into account to determine whether a piece of this object falls in the screen line considered.
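The presence test described above can be sketched as follows, with a hypothetical `v_scale` factor standing in for the vertical resizing (displayed height over stored height); a hedged sketch, ignoring clipping zones.

```python
def object_in_line(line, obj_y, obj_height, v_scale=1.0):
    """Return True when the (possibly vertically resized) object covers
    the given screen line, based on its 'y' coordinate and its height
    in pixels."""
    displayed_height = int(obj_height * v_scale)   # final on-screen height
    return obj_y <= line < obj_y + displayed_height

# A 32-line object at y = 100, displayed at twice its stored height,
# covers lines 100..163:
print(object_in_line(150, 100, 32, v_scale=2.0))
```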

Once an object is known to be present in the active line, it remains to determine the useful zone of this object in the line considered, to know where and how to recover this zone, and to know where to place it in the line memory. For example, in the case of a rectangular bitmap object stored in memory and displayed anamorphically on the screen, recovering the useful segment requires determining, among other things:
  • the memory address where the first pixel of the object segment is located;
  • the number of pixels contained in the segment of the object;
  • the law of horizontal and vertical variation between two pixels.
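The horizontal variation law between two pixels can be sketched as a stepped walk through the stored segment. The `step` parameter below is a hypothetical per-output-pixel increment (e.g. 0.5 doubles the width, 2.0 halves it); real hardware would typically use fixed-point arithmetic rather than floats.

```python
def sample_segment(src_pixels, out_width, step):
    """Recover an output segment from a stored line segment by advancing
    a source position by 'step' source pixels per output pixel, as an
    anamorphic horizontal variation law might. Illustrative only."""
    out, pos = [], 0.0
    for _ in range(out_width):
        out.append(src_pixels[int(pos)])   # nearest-lower source pixel
        pos += step
    return out

# Stretch a 4-pixel segment to 8 pixels (each source pixel repeated twice)
stretched = sample_segment([10, 20, 30, 40], out_width=8, step=0.5)
```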

Determining these variables is not enough to recover the useful segment of the object. The following object parameters must also be considered:
  • the width of the object stored in memory (in bytes);
  • the color organization of the pixels;
  • the compression law, if the object is stored in a compressed way.
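Combining these parameters, the byte address of a segment's first pixel in an uncompressed object can be sketched as follows; an illustrative formula only, which ignores any compression law.

```python
def segment_address(base, line, x, stride_bytes, bits_per_pixel):
    """Byte address of the first pixel of a segment, given the width of
    the object as stored in memory (stride, in bytes) and the pixel
    color organization (bits per pixel). Uncompressed objects only."""
    return base + line * stride_bytes + (x * bits_per_pixel) // 8

# 16-bit pixels, object stored 256 bytes wide: line 2, pixel 5
addr = segment_address(0x1000, 2, 5, 256, 16)
```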

From the point of view of the actions to be carried out, a process is executed that, for each object in turn:
  a) calculates the variables necessary for recovering the useful segment;
  b) retrieves the necessary object parameters;
  c) updates the variables in the registers in anticipation of the next lines, for example the memory address pointer.

A process is also executed to access the memory where the objects are stored, using hardware commands. One can for example speak of "hardware DMA" ("Direct Memory Access") managed by the hardware accelerator, by contrast with a DMA traditionally managed by a microprocessor.

A process for controlling and monitoring the DMA is also executed.

The hardware mechanism of volatile construction according to the invention can advantageously be implemented in an FPGA, which makes it possible to integrate all the modules and processes necessary for carrying out the method according to the invention. Figure 5 shows how a hardware accelerator 21 can be architected on such an FPGA.

For the storage of the graphic objects, a memory 20 (SDRAM, DDRAM, ...) is advantageously used whose bandwidth is large enough to allow the volatile construction mechanism to fetch enough information (useful data of the active objects) during the line process. This memory 20 is a random access memory external to the accelerator 21.
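As a rough order-of-magnitude check of why this bandwidth matters, every displayed pixel must be fetched anew each frame. The resolution, pixel size and overdraw factor below are illustrative assumptions, not values from the patent.

```python
def required_bandwidth(width, height, fps, bytes_per_pixel, overdraw=2):
    """Rough memory-bandwidth estimate (bytes/second) for the volatile
    construction: each displayed pixel is fetched every frame, times an
    assumed overdraw factor for overlapping objects."""
    return width * height * fps * bytes_per_pixel * overdraw

# e.g. 640x480 @ 60 Hz, 2 bytes/pixel, 2x overdraw
bw = required_bandwidth(640, 480, 60, 2)   # bytes per second
```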

In figure 5, the following can be distinguished:
  1. a) An object manager 14 performing the activation and characterization of the processing specific to each object (intra-object) as well as the management between objects (inter-objects).
  2. b) A register 15 for storing the parameters and working variables for each object.
  3. c) A "hardware DMA" module 12 controlled by the object manager 14 and capable of fetching the data (for example bitmap images) from the external memory 20.
  4. d) A buffer 13 for temporarily storing the raw data coming from the DMA 12, when the DMA processes and a decompression/conversion pipeline 16 are not synchronized.
  5. e) Two line memories 19 or "line buffers" (one "off screen" and the other "on screen", in accordance with figure 4).
  6. f) The pipeline 16 for decompressing/converting the raw data into pixels. More precisely, once the object segment has been retrieved by the DMA, the raw data coming from the DMA must be converted into pixels in the chosen output format so that they can be stored in the line memory. In the case of a bitmap-image object, this requires knowing:
    • the color organization of the pixels;
    • the law of horizontal variation between two pixels; and
    • the compression law, if the image is stored in a compressed way.

    This pipeline also includes means to transform the pixels. Once the raw data is converted into pixels, digital processing, such as filters or effects, can be applied to them. For example:
    • "chroma-keying": the pixel becomes transparent if its value is equal to a reference value called the value of "chroma-key";
    • luminance thresholding: the pixel becomes transparent if the value of its luminance level is lower than a reference value called a threshold value;
    • chrominance thresholding;
    • level of transparency: the transparency level of the pixel, called "alpha", is modified by applying the formula alpha_pixel = alpha_global_objet x alpha_pixel_source;
    • color filters: change the chrominance of the pixel;
    • low-pass and high-pass filters: the current pixel is influenced by its neighboring pixels, through a suitably adapted pipeline;
    • Etc.
  7. g) A blender 18 operating according to the transparency of the pixel in question. Writing the pixels into line memory is the last step in the line process of the volatile construction. Writing a pixel into memory requires knowing:
    • the position of the pixel in the line: coordinate "x" with respect to the origin equal to the first pixel of the screen; and
    • the value of the pixel already present in the line at the same position, especially when the object is semi-transparent.
  8. h) A multiplexer 17 selects either the pixel from the main video stream (video_in) or the pixel from the stream coming from the DMA 12. This mechanism requires that the timing of the accelerator be synchronized and locked on the video source. The operating frequency of the pipeline is several times the pixel frequency of the screen, and the multiplexer selects the video pixel once per pixel period (i.e. at the pixel frequency). The rest of the time, the multiplexer handles the flow from the DMA 12. When there is no main video stream, it is replaced by a background color. This example architecture also makes it possible to blend the video with the background color in proportion to a video transparency level (alpha_video_in) without implementing a second blending module.
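The pixel transforms of item f) and the blending of item g) can be sketched in software (a minimal Python model, not the hardware pipeline; the (r, g, b, alpha) tuple format, the chroma-key value, and the helper names are illustrative assumptions):

```python
# Minimal software model of the transform (item f) and blend (item g) stages.
# Pixels are (r, g, b, alpha) tuples with 0-255 channels -- an assumed format.

CHROMA_KEY = (255, 0, 255)  # assumed reference "chroma-key" color


def transform(pixel, alpha_global_objet=255):
    r, g, b, a = pixel
    # Chroma-keying: the pixel becomes transparent if it equals the key value.
    if (r, g, b) == CHROMA_KEY:
        return (r, g, b, 0)
    # Transparency level: alpha_pixel = alpha_global_objet x alpha_pixel_source.
    return (r, g, b, (alpha_global_objet * a) // 255)


def blend(src, dst):
    """Blend the incoming pixel over the pixel already stored in the line
    buffer at the same position, proportionally to the alpha of src."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    mix = lambda s, d: (s * sa + d * (255 - sa)) // 255
    return (mix(sr, dr), mix(sg, dg), mix(sb, db), max(sa, da))
```

A fully opaque source pixel replaces the stored one; a fully transparent source leaves the stored color untouched, matching the proportional blending described for the blender 18.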

Thus, the image construction method according to the invention makes it possible to manage the depth level between graphic and video objects without any limit on the maximum number of layers; this number of layers does not size the hardware resources of the accelerator. The method also makes it possible to manage the transparency between the graphic and video objects as a function of their position on the z-axis, irrespective of the number of graphic layers, which does not constrain the overall complex transparency.
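The line-by-line construction summarized above can be modeled as a depth-ordered overlay into a single line buffer (a minimal Python sketch; the object layout, a dict with x, y, z, height and per-line pixels rows, and the 0-255 RGBA tuples are illustrative assumptions, not the patented hardware):

```python
def blend_over(src, dst):
    # Per-pixel alpha blend of src over dst (0-255 channels, assumed format).
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    mix = lambda s, d: (s * sa + d * (255 - sa)) // 255
    return (mix(sr, dr), mix(sg, dg), mix(sb, db), max(sa, da))


def build_line(objects, y, screen_width, background=(0, 0, 0, 255)):
    """Volatile construction of one display line: only a single line buffer
    exists, never a full frame buffer."""
    line = [background] * screen_width
    # Overlay objects back to front; a larger z is assumed to mean "deeper".
    # The number of layers is unbounded: each extra layer costs pipeline
    # time on the line, not extra hardware memory.
    for obj in sorted(objects, key=lambda o: o["z"], reverse=True):
        if not (obj["y"] <= y < obj["y"] + obj["height"]):
            continue  # this object has no useful zone on line y
        row = obj["pixels"][y - obj["y"]]  # useful zone for line y
        for i, src in enumerate(row):
            x = obj["x"] + i
            if 0 <= x < screen_width:
                line[x] = blend_over(src, line[x])
    return line
```

The depth sort realizes the inter-object layer order, while the per-pixel blend realizes the complex transparency, independently of how many objects overlap a given position.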

Claims (12)

  1. Method of constructing an image suitable for being displayed on a display system from a plurality of objects and descriptors, each descriptor being associated with an object, a descriptor including graphic parameters defining the nature of the associated object and display parameters for displaying the associated object; for each video frame of the display system, the image is constructed line by line by carrying out the following steps:
    - for each line of the display system, on-the-fly construction of this line by retrieving and storing, in real time, in one line buffer, all the pixels relating to the objects intended to be displayed on said line; in order to do this, the method further comprises the following steps:
    - independent identification of each object that must be present on this line according to a set of variables specific to each object;
    - detection, in each object thus identified, of a useful zone corresponding to the considered line; and
    - conversion of the raw data coming from said useful zone into pixels compatible with the display format,
    - for each pixel of the useful zone and for each identified object, writing a pixel into the line buffer, this pixel being intended to be displayed at a given position on a line of the display system; firstly the position of this pixel in the considered line is determined, and this first pixel is blended with a second pixel currently stored in the line buffer at said same given position, the blending being applied proportionally to the level of transparency of said first pixel;
    - sending these pixels to the display system according to a pixel sequence, such that the complete image is formed only on said display system; the line construction mechanism being line- and frame-synchronized with the display system.
  2. Method according to claim 1, characterized in that it comprises the step of using two line buffers, each successively carrying out, in a shifted manner, a construction step then a display step such that, when a first line buffer displays pixels on the current line, pixels intended to be displayed on the following line are constructed and stored in the second line buffer.
  3. Method according to claim 1 or 2, characterized in that, during the detection step, the memory where the objects are stored is accessed by electronic mechanisms and the raw data from the useful intervals is stored temporarily in storage means.
  4. Method according to any one of the preceding claims, characterized in that, once the raw data is converted into pixels, transformations and effects are applied to these pixels.
  5. Method according to any one of the preceding claims, characterized in that the video frame comprises two separate time intervals, a so-called "vertical blanking interval" (VBI) and a so-called "vertical active interval" (VAI), respectively corresponding to the interval between two active frames and to the period of display of an active frame; any preparation necessary to the progress of the construction, the construction and the display steps are carried out instantly during each vertical active interval.
  6. Method according to claim 5, characterized in that said preparation also comprises a sorting of the objects in order of depth so as to hierarchize the order with which the objects will be overlaid on the display system.
  7. Method according to claim 6, characterized in that a hardware accelerator is suitable for generating the video frame of the display system.
  8. Method according to claim 7, characterized in that the hardware accelerator is suitable for synchronizing and locking on the video frame of the display system from synchronization information.
  9. Method according to any one of the preceding claims, characterized in that the display system comprises several line- and frame-synchronized screens driven by a single controller generating the signals that clock the screens, this controller being located in an accelerator.
  10. System for constructing an image suitable for being displayed on a display system from a plurality of objects and descriptors, each descriptor being associated with an object, a descriptor including graphic parameters defining the nature of the associated object and display parameters for displaying the associated object, this system comprising a processing unit combined with a hardware accelerator; the hardware accelerator being used during the construction for constructing the image line by line directly from the objects by means of electronic mechanisms based on state machines, the hardware accelerator being also configured to carry out the steps defined in the preceding claims.
  11. System according to claim 10, characterized in that the hardware accelerator comprises the following elements:
    - an object manager for activating and characterizing operations linked to the objects,
    - a memory for storing the parameters and variables of each object,
    - a DMA (Direct Memory Access)-type hardware module controlled by the accelerator and suitable for retrieving data about the objects from a memory,
    - a buffer for temporarily storing the data from the DMA hardware module,
    - a module for decompressing and converting raw data from the buffer into pixels,
    - a multiplexer for selecting the pixel to be displayed between that from the decompression and conversion module and that from an external source,
    - a blender for blending the pixel from the multiplexer and a currently displayed pixel according to the transparency of the pixel from the multiplexer, and
    - two line buffers successively carrying out in turn the display of previously stored pixels on the current line of the display system, and the storage of pixels from the blender which will be displayed on the following line.
  12. System according to claim 10 or 11, characterized in that it comprises means for synchronizing on a video source and inserting graphic information into a video flow of said video source, not stored in the memory.
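The alternating use of two line buffers (claims 1, 2 and 11) can be sketched as a ping-pong loop (a minimal Python model; the build_line and send_to_display callbacks are assumptions, and the model runs the two phases sequentially where the hardware runs them concurrently):

```python
def render_frame(height, width, build_line, send_to_display):
    """Ping-pong between two line buffers: while one buffer ("on screen")
    is sent to the display for the current line, the other ("off screen")
    receives the pixels of the following line, so that the complete image
    is only ever formed on the display system itself."""
    buffers = [[None] * width, [None] * width]
    buffers[0][:] = build_line(0)              # prepare the first line
    for y in range(height):
        on_screen = buffers[y % 2]             # buffer being displayed
        off_screen = buffers[(y + 1) % 2]      # buffer under construction
        if y + 1 < height:
            off_screen[:] = build_line(y + 1)  # construct line y+1 ...
        send_to_display(y, on_screen)          # ... while line y is shown
```

Because a buffer holds only one line, the memory cost is two lines regardless of image height, which is what makes the construction "volatile": no frame buffer is ever allocated.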
EP05753728.4A 2004-04-08 2005-04-08 Method and system for volatilely building an image displaceable of a display system from a plurality of objects Not-in-force EP1738349B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0403721A FR2868865B1 (en) 2004-04-08 2004-04-08 METHOD AND SYSTEM FOR VOLATILE CONSTRUCTION OF AN IMAGE TO DISPLAY ON A DISPLAY SYSTEM FROM A PLURALITY OF OBJECTS
PCT/FR2005/000857 WO2005104086A1 (en) 2004-04-08 2005-04-08 Method and system for volatilely building an image displaceable of a display system from a plurality of objects

Publications (2)

Publication Number Publication Date
EP1738349A1 EP1738349A1 (en) 2007-01-03
EP1738349B1 true EP1738349B1 (en) 2016-06-29

Family

ID=34944750

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05753728.4A Not-in-force EP1738349B1 (en) 2004-04-08 2005-04-08 Method and system for volatilely building an image displaceable of a display system from a plurality of objects

Country Status (4)

Country Link
US (1) US20070211082A1 (en)
EP (1) EP1738349B1 (en)
FR (1) FR2868865B1 (en)
WO (1) WO2005104086A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111568648B (en) * 2020-05-25 2022-05-17 常利军 Hybrid electric pneumatic suspension stretcher

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745095A (en) * 1995-12-13 1998-04-28 Microsoft Corporation Compositing digital information on a display screen based on screen descriptor

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2274974A1 (en) * 1974-06-11 1976-01-09 Ibm VIDEO SIGNAL GENERATOR FOR DYNAMIC DIGITAL DISPLAY DEVICE
US4398189A (en) * 1981-08-20 1983-08-09 Bally Manufacturing Corporation Line buffer system for displaying multiple images in a video game
US4679038A (en) * 1983-07-18 1987-07-07 International Business Machines Corporation Band buffer display system
JPH07175454A (en) * 1993-10-25 1995-07-14 Toshiba Corp Device and method for controlling display
US5706478A (en) * 1994-05-23 1998-01-06 Cirrus Logic, Inc. Display list processor for operating in processor and coprocessor modes
JP3227086B2 (en) * 1996-02-01 2001-11-12 基弘 栗須 TV on-screen display device
JPH10207446A (en) * 1997-01-23 1998-08-07 Sharp Corp Programmable display device
JP3169848B2 (en) * 1997-02-12 2001-05-28 日本電気アイシーマイコンシステム株式会社 Graphic display device and graphic display method
US6181300B1 (en) * 1998-09-09 2001-01-30 Ati Technologies Display format conversion circuit with resynchronization of multiple display screens
US6570579B1 (en) * 1998-11-09 2003-05-27 Broadcom Corporation Graphics display system
US6943783B1 (en) * 2001-12-05 2005-09-13 Etron Technology Inc. LCD controller which supports a no-scaling image without a frame buffer


Also Published As

Publication number Publication date
EP1738349A1 (en) 2007-01-03
US20070211082A1 (en) 2007-09-13
WO2005104086A1 (en) 2005-11-03
FR2868865B1 (en) 2007-01-19
FR2868865A1 (en) 2005-10-14

Similar Documents

Publication Publication Date Title
CN108600781B (en) Video cover generation method and server
EP1527599B1 (en) Method and system enabling real time mixing of synthetic images and video images by a user
FR2750231A1 (en) APPARATUS AND METHOD FOR SEARCHING AND RETRIEVING MOBILE IMAGE INFORMATION
FR2594241A1 (en) DATA DISPLAY PROCESSOR ON DISPLAY SCREEN AND DATA DISPLAY METHOD USING THE DEVICE
FR2554256A1 (en) APPARATUS AND METHOD FOR REGENERATING A HIGH-SPEED WORKING RANGE BUFFER
FR2585867A1 (en) GRAPHIC VIEWING CONTROL SYSTEM.
WO2007016318A2 (en) Real-time preview for panoramic images
US20140022405A1 (en) Fill with camera ink
EP1738349B1 (en) Method and system for volatilely building an image displaceable of a display system from a plurality of objects
CN104918044B (en) Image processing method and device
EP0055167B1 (en) Method and apparatus for displaying messages on a raster-scanned display system, e.g. a crt screen, using a segmented memory
KR101155564B1 (en) System for cooperative digital image production
EP1515566B1 (en) Apparatus and Method to process video and grafic data
FR3083950A1 (en) METHOD FOR VIEWING GRAPHIC ELEMENTS FROM AN ENCODE COMPOSITE VIDEO STREAM
FR2458863A1 (en) VIDEO DISPLAY TERMINAL AND MIXED GRAPHIC AND ALPHANUMERIC DISPLAY METHOD
GB2391658A (en) Visual media viewing system and method
EP3239826A1 (en) Method for screenshot execution
WO2021214395A1 (en) Methods and devices for coding and decoding a multi-view video sequence
CN111787397A (en) Method for rendering multiple paths of videos on same canvas based on D3D
KR101970787B1 (en) Video decoding apparatus and method based on android platform using dual memory
EP0055168B1 (en) Method and apparatus for displaying messages containing pages on a raster-scanned display system, e.g. a c.r.t. screen
EP0056207A1 (en) Method and apparatus for displaying on a raster-scanned display system, e.g. a CRT screen, messages transmitted by a television-like signal and comprising repeating elements
CN112995711B (en) Frame segmentation and picture processing synthesis method and system for web front-end video
CN111835957B (en) Video processing method, video processing device and video processing equipment
EP0973145A1 (en) Method and system for processing digital images resulting from auxiliary graphical elements blended in main images

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20061103

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20090814

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20151214

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

INTG Intention to grant announced

Effective date: 20160513

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 809648

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160715

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602005049634

Country of ref document: DE

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20160629

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160930

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 809648

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160629

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161029

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161031

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602005049634

Country of ref document: DE

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

26N No opposition filed

Effective date: 20170330

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160929

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170430

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170408

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170430

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20170430

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170430

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20180420

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20180426

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20180418

Year of fee payment: 14

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20050408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160629

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602005049634

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20190408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190408

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160629