WO2003013146A1 - Method and device for coding a scene - Google Patents
Method and device for coding a scene
- Publication number
- WO2003013146A1 (application PCT/FR2002/002640, FR0202640W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- scene
- composition
- textures
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- the invention relates to a method and a device for coding and decoding a scene composed of objects whose textures come from different video sources.
- Multimedia broadcasting systems are generally based on the transmission of video information, either via separate elementary streams, or via a transport stream multiplexing the different elementary streams, or a combination of the two.
- This video information is received by a terminal or receiver made up of a set of elementary decoders simultaneously performing the decoding of each of the elementary streams received or demultiplexed.
- the final image is composed from the decoded information. This is for example the case of the transmission of streams of MPEG 4 coded video data.
- This type of advanced multimedia system attempts to offer great flexibility to the end user by offering them possibilities for composing several flows and interactivity at the terminal level.
- the extra processing is actually quite significant when the complete chain is considered, from the generation of the elementary streams to the rendering of a final image. It concerns every level of the chain: coding, addition of inter-stream synchronization elements and packetization, multiplexing, demultiplexing, handling of the inter-stream synchronization elements and depacketization, decoding.
- a composition system, upon reception, produces the final image of the scene to be viewed according to the information defined by the content creator.
- a great complexity of management at the system level or at the processing level is therefore generated.
- such systems therefore require the management of numerous data streams, both at transmission and at reception. A local composition or "scene" built from several videos cannot be achieved in a simple way: expensive devices such as decoders, and complex management of these decoders, must be put in place to exploit these streams.
- the number of decoders can depend on the different types of coding used for the data received on each of the streams, but also on the number of video objects that can compose the scene.
- the processing time of the received signals, due to the centralized management of the decoders, is not optimized. The management and processing of the images obtained, because of their number, are complex.
- the invention aims to overcome the aforementioned drawbacks.
- Its subject is a method of coding a scene made up of objects whose textures are defined from images or parts of images from different video sources, characterized in that it comprises the steps of:
- defining a composed image by dimensioning and positioning, on this image, images or parts of images originating from the different video sources,
- coding the composed image,
- coding auxiliary data comprising information relating to the composition of the composed image and information relating to the textures of the objects.
- the composite image is obtained by spatial multiplexing of the images or parts of images.
- the video sources from which the images or parts of images composing the same composed image are selected use the same coding standard.
- the composite image may also include a still image not from a video source.
- the dimensioning is a reduction in size obtained by subsampling.
- the composed image is coded according to the MPEG-4 standard and the information relating to the composition of the image consists of the texture coordinates.
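- as a rough illustration of the spatial multiplexing and subsampling described above, the following sketch builds a composite image (mosaic) from several source frames and records, for each element, its bounding rectangle, i.e. the composition information to be carried as auxiliary data. The NumPy representation, the regular tile layout and the (x, y, w, h) rectangle convention are assumptions made for the example, not taken from the description.

```python
import numpy as np

def subsample(frame, factor):
    """Size reduction by simple decimation (one form of subsampling)."""
    return frame[::factor, ::factor]

def compose_mosaic(frames, factor=2, cols=2):
    """Spatially multiplex subsampled frames onto a single composite image.
    Returns the composite plus, per element, its rectangle (x, y, w, h)."""
    tiles = [subsample(f, factor) for f in frames]
    th, tw = tiles[0].shape[:2]
    rows = -(-len(tiles) // cols)                      # ceiling division
    mosaic = np.zeros((rows * th, cols * tw, 3), dtype=np.uint8)
    rects = []
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
        rects.append((c * tw, r * th, tw, th))          # composition information
    return mosaic, rects

# Four CIF-sized (288x352) sources reduced to QCIF tiles fill one CIF mosaic.
sources = [np.random.randint(0, 256, (288, 352, 3), dtype=np.uint8)
           for _ in range(4)]
composite, rects = compose_mosaic(sources)
print(composite.shape)   # (288, 352, 3)
print(rects)             # [(0, 0, 176, 144), (176, 0, 176, 144), ...]
```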
- the invention also relates to a method for decoding a scene composed of objects, coded from a composite video image grouping images or parts of images from different video sources and from auxiliary data consisting of composition information for the composite video image and of information relating to the textures of the objects, characterized in that it performs the steps of:
- decoding the composite video image,
- decoding the auxiliary data,
- extracting textures from the decoded image on the basis of the auxiliary data for composing the image,
- applying the textures to the objects of the scene on the basis of the auxiliary data relating to the textures.
- the extraction of the textures is carried out by spatial demultiplexing of the decoded image.
- a texture can be processed by oversampling and spatial interpolation to obtain the texture to be displayed in the final image showing the scene.
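- purely as an illustration of this decoding step, the sketch below extracts one texture from the decoded composite image by spatial demultiplexing and brings it back to display size by bilinear interpolation. The rectangle convention matches the encoder sketch above, and colour (3-channel) frames are assumed.

```python
import numpy as np

def extract_texture(composite, rect):
    """Spatial demultiplexing: cut one element out of the decoded composite."""
    x, y, w, h = rect
    return composite[y:y + h, x:x + w]

def upsample_bilinear(texture, out_h, out_w):
    """Oversampling with spatial (bilinear) interpolation to display size."""
    in_h, in_w = texture.shape[:2]
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    t = texture.astype(float)
    top = t[y0][:, x0] * (1 - wx) + t[y0][:, x1] * wx
    bottom = t[y1][:, x0] * (1 - wx) + t[y1][:, x1] * wx
    return ((1 - wy) * top + wy * bottom).astype(np.uint8)

# A QCIF tile extracted from the mosaic is brought back to CIF for display.
tile = extract_texture(np.zeros((288, 352, 3), np.uint8), (0, 0, 176, 144))
full = upsample_bilinear(tile, 288, 352)
print(full.shape)   # (288, 352, 3)
```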
- the invention also relates to a device for coding a scene composed of objects whose textures are defined from images or parts of images from different video sources, characterized in that it comprises:
- a video editing circuit receiving the different video sources, for dimensioning and positioning, on an image, images or parts of images originating from these video sources, so as to produce a composite image,
- an auxiliary data generation circuit connected to the video editing circuit to supply information relating to the composition of the composed image and information relating to the textures of the objects,
- a coding circuit for the composed image,
- a coding circuit for the auxiliary data.
- the invention also relates to a device for decoding a scene composed of objects, coded from a composite video image grouping together images or parts of images from different video sources and from auxiliary data consisting of composition information for the composite video image and of information relating to the textures of the objects, characterized in that it comprises:
- a circuit for decoding the composite video image,
- a circuit for decoding the auxiliary data,
- a processing circuit receiving the auxiliary data and the decoded image, for extracting textures from the decoded image on the basis of the auxiliary data for composing the image and for applying the textures to objects of the scene on the basis of the auxiliary data relating to the textures.
- the idea of the invention is to group, on one image, elements or texture elements which are images or parts of images coming from different video sources and necessary for the construction of the scene to be visualized, so as to "transport" this video information on a single image or a limited number of images.
- a spatial composition of these elements is therefore produced, and it is the overall composite image obtained which is coded, instead of coding each video image from the video sources separately.
- an overall scene, the construction of which usually requires several video streams, can thus be constructed from a more limited number of video streams, and even from a single video stream transmitting the composed image.
- the decoding circuits are simplified and the construction of the scene is carried out in a more flexible manner.
- QCIF format (Quarter Common Intermediate Format)
- CIF (Common Intermediate Format)
- on reception, the image is not simply presented as such: it is recomposed using the transmitted composition information. This makes it possible to present the user with a less static image, potentially including an animation resulting from the composition, and to offer further interactivity, each recomposed object being able to be active.
- management at the receiver is simplified, the data to be transmitted can be compressed more efficiently owing to the grouping of the video data on one image, and the number of circuits necessary for decoding is reduced. Optimizing the number of streams minimizes the resources required in relation to the content transmitted.
- FIG. 1, a coding device according to the invention,
- FIG. 2, a receiver of the coded data stream,
- FIG. 3, an example of a scene composed from elements of a composite image.
- FIG. 1 represents a coding device according to the invention.
- the circuits referenced 1 to n symbolize the generation of the various video signals available to the coder for the coding of a scene to be viewed at the receiver. These signals are transmitted to a composition circuit 2 whose function is to compose an overall image from the images corresponding to the signals received. The overall image obtained is called the composite image or mosaic.
- This composition is defined on the basis of information exchanged with an auxiliary data generation circuit 4.
- the auxiliary data comprise composition information making it possible to define the composed image and thus to extract, at the receiver, the various elements or sub-images composing this image: for example, position and shape information in the image, such as the coordinates of the vertices of rectangles if the elements constituting the transmitted image are rectangular, or shape descriptors.
- This composition information makes it possible to extract textures and it is thus possible to define a library of textures for the composition of the final scene.
- the auxiliary data relate to the image composed by the circuit 2 but also to the final image representing the scene to be viewed at the receiver.
- they also comprise graphic information, for example relating to geometric shapes, appearance and the composition of the scene, making it possible to configure the scene represented by the final image.
- this information defines the elements to be associated with the graphic objects for the mapping of textures. It also defines the possible interactivities making it possible to reconfigure the final image on the basis of these interactivities.
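- one possible, purely illustrative way of representing these two kinds of auxiliary data (composition information for the composed image, and scene information for texture mapping and interactivity) is sketched below; all class and field names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CompositionInfo:
    """Where an element (sub-image) sits in the composed image."""
    element_id: str
    rect: Tuple[int, int, int, int]        # (x, y, w, h) in the composite
    shape: str = "rectangle"               # or a more general shape descriptor

@dataclass
class SceneInfo:
    """How a texture is mapped onto a graphic object of the final scene."""
    element_id: str                        # texture taken from the library
    object_id: str                         # graphic object receiving it
    position: Tuple[int, int]              # placement in the final image
    interactive: bool = False              # recomposed object can be active

@dataclass
class AuxiliaryData:
    composition: List[CompositionInfo] = field(default_factory=list)
    scene: List[SceneInfo] = field(default_factory=list)
```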
- the composition of the image to be transmitted can be optimized according to the textures necessary for the construction of the final scene.
- the composite image generated by the composition circuit 2 is transmitted to a coding circuit 3 which performs coding of this image.
- the auxiliary data from circuit 4 are transmitted to a coding circuit 5 which performs the coding of these data.
- the outputs of the coding circuits 3 and 5 are transmitted to the inputs of a multiplexing circuit 6 which multiplexes the received data, i.e. the video data relating to the composed image and the auxiliary data.
- the output of the multiplexing circuit is transmitted to the input of a transmission circuit 7 for the transmission of the multiplexed data.
- the composite image is produced from images or parts of images of any shape extracted from video sources but may also contain still images or, in general, any type of representation. Depending on the number of sub-images to be transmitted, one or more composed images can be produced for the same instant, that is to say for a final image of the scene. In the case where the video signals use different standards, these signals can be grouped by standard of the same type for the composition of a composite image.
- a first composition is made from all the elements to be coded according to the MPEG-2 standard, a second composition from all the elements to be coded according to the MPEG-4 standard, another from the elements to be coded according to the JPEG or GIF standard, and so on, so that a single stream is emitted per type of coding and/or per media type.
- the composed image may be a regular mosaic consisting, for example, of rectangles or sub-images of the same size, or else an irregular mosaic.
- the auxiliary flow transmits the data corresponding to the composition of the mosaic.
- the composition circuit can perform the composition of the overall image from enclosing rectangles or bounding windows defining the elements.
- a choice of the elements necessary for the final scene is made by the composer.
- These elements are extracted from images available to the composer from different video streams.
- a spatial composition is then produced from the selected elements by "placing" them on a global image constituting a single video.
- the information about the positioning of these various elements (coordinates, dimensions, etc.) is transmitted to the auxiliary data generation circuit, which processes it for transmission on the stream.
- the composition circuit is of a known type; it is for example a professional video editing tool of the "Adobe Premiere" type (Adobe Premiere is a registered trademark). With such a tool, objects can be extracted from the video sources, for example by selecting parts of images; the images of these objects can be resized and positioned on a global image. Spatial multiplexing is for example carried out to obtain the composite image.
- the means of constructing a scene, from which a part of the auxiliary data is generated, are also in the known field.
- the MPEG-4 standard uses the VRML language (Virtual Reality Modeling Language), or more precisely the binary language BIFS (BInary Format for Scenes), which makes it possible to define the presentation of a scene, to modify it and to update it.
- the BIFS description of a scene makes it possible to modify the properties of objects and to define their conditional behaviour. It follows a hierarchical structure, i.e. a tree description.
- the data necessary for the description of a scene concern, among other things, the construction rules, the animation rules for an object, interactivity rules for another object ... They describe the final scenario. Some or all of this data constitutes the auxiliary data for the construction of the scene.
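- to make the hierarchical (tree) structure concrete, here is a toy scene description written as a nested Python structure. It only mimics the general shape of a BIFS/VRML tree; every node and field name is an assumption made for the example, not the standard's actual node set.

```python
# Toy, BIFS-like hierarchical scene description.  Real BIFS is a binary format
# derived from VRML; the node and field names below are simplified stand-ins.
scene_tree = {
    "type": "Group",
    "children": [
        {
            "type": "Shape",
            "geometry": {"type": "Rectangle", "size": (176, 144)},
            "appearance": {
                "texture": {"source": "composite_video_stream"},
                # texture coordinates pointing into the composite image
                "texCoord": [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (0.0, 1.0)],
            },
            # conditional behaviour / interactivity rule for this object
            "behaviour": {"onClick": "switch_to_program_1"},
        },
    ],
}
```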
- FIG. 2 represents a receiver for such a coded data stream.
- the signal received at the input of the receiver 8 is transmitted to a demultiplexer 9 which separates the video stream from the auxiliary data.
- the video stream is transmitted to a video decoding circuit 10 which decodes the overall image as it was composed at the level of the coder.
- the auxiliary data at the output of the demultiplexer 9 are transmitted to a decoding circuit 11 which performs decoding of the auxiliary data.
- a processing circuit 12 processes the video data and the auxiliary data coming respectively from the circuits 10 and 11 in order to extract, on the basis of the recomposition information, the elements and textures necessary for the construction of the final scene, extracting only these elements from the composed image.
- the elements are extracted, for example, by spatial demultiplexing.
- the construction information therefore makes it possible to select only a part of the elements constituting the composed image. They also allow the user to "navigate" in the constructed scene in order to view objects of interest.
- the navigation information from the user is for example transmitted to an input of the circuit 12 (not shown in the figure) which modifies the composition of the scene accordingly.
- the textures transported by the composed image need not be used directly in the scene. They can, for example, be stored by the receiver for deferred use or for building up a library used for the construction of the scene.
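- a minimal sketch of such a receiver-side texture library, keyed by element identifier, is given below; the class and method names are assumptions made for the example.

```python
class TextureLibrary:
    """Minimal receiver-side store for textures extracted from composed
    images, so they can be reused later (deferred display, recomposition
    after user navigation)."""

    def __init__(self):
        self._store = {}

    def add(self, element_id, texture):
        self._store[element_id] = texture      # keep for later compositions

    def get(self, element_id):
        return self._store.get(element_id)     # None if not (yet) received
```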
- An application of the invention relates to the transmission of video data in the MPEG-4 standard corresponding to several programs over a single video stream, or more generally the optimization of the number of streams in an MPEG-4 configuration, for example for a program guide application. Whereas, in a classic MPEG-4 configuration, it is necessary to transmit as many streams as there are videos that can be viewed at the terminal, the method described makes it possible to send a global image containing several videos and to use texture coordinates to build a new scene on reception.
- FIG. 3 represents an example of a composite scene constructed from elements of a composite image.
- the global image 14, also called the composite texture, is composed of several sub-images or elements or sub-textures 15, 16, 17, 18, 19.
- the image 20, at the bottom of the figure, corresponds to the scene to be viewed.
- the positioning of the objects to construct this scene corresponds to the graphic image 21, which represents the graphic objects.
- with MPEG-4 coding according to the prior art, each video or still image corresponding to the elements 15 to 19 is transmitted in a video or still-image stream, and the graphic data are transmitted in the graphic stream.
- a global image is composed from the images relating to the different videos or still images to form the composite image 14 represented at the top of the figure. This global image is coded.
- Auxiliary data relating to the composition of the overall image and defining the geometric shapes are transmitted in parallel allowing the elements to be separated. The texture coordinates at the vertices, when these fields are used, allow these shapes to be textured from the composite image.
- Auxiliary data relating to the construction of the scene and defining the graphic image 21 are transmitted.
- the composite texture image is transmitted over the video stream.
- the elements are coded as video objects and their geometric shapes 22, 23 and texture coordinates at the vertices (in the composite image or the composite texture) are transmitted over the graphic stream.
- the texture coordinates are the composition information of the composed image.
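- as a numerical illustration of these texture coordinates, the sketch below converts an element's bounding rectangle in the composite image into normalized (u, v) coordinates at the four vertices. The [0, 1] range and the bottom-up v axis are assumptions of the example, not a statement of the exact MPEG-4 convention.

```python
def rect_to_texcoords(rect, composite_w, composite_h):
    """Normalized texture coordinates of a rectangular element's vertices,
    given its (x, y, w, h) position in the composite image."""
    x, y, w, h = rect
    u0, u1 = x / composite_w, (x + w) / composite_w
    v0, v1 = 1 - (y + h) / composite_h, 1 - y / composite_h
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]

# Hypothetical numbers: a 176x144 tile at the origin of a 352x288 composite.
print(rect_to_texcoords((0, 0, 176, 144), 352, 288))
# [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (0.0, 1.0)]
```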
- the stream which is transmitted can be coded to the MPEG-2 standard and in this case, it is possible to exploit the functionalities of the circuits of existing platforms integrating the receivers.
- elements supplementing the main programs can be transmitted on an additional MPEG-2 or MPEG-4 video stream. This stream can contain several visual elements such as logos or advertising banners, animated or not, which can be combined with one or other of the broadcast programs, at the choice of the broadcaster. These items can also be displayed on the basis of user preferences or a user profile. An associated interaction can be provided.
- two decoding circuits are used, one for the program and one for the composite image and the auxiliary data. Spatial multiplexing of the program being broadcast with additional information coming from the composed image is then possible.
- a single auxiliary video stream can be used for a program package, to supplement several programs or serve several user profiles.
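- a minimal sketch of this receiver-side combination is given below, assuming both decoded images are available as arrays and that the element is simply copied onto the program frame (no keying or blending); positions and sizes are invented for the example.

```python
import numpy as np

def overlay_element(program, composite, rect, dest_xy):
    """Write one element taken from the auxiliary composite image (e.g. a
    logo) at a chosen position of the decoded program frame."""
    x, y, w, h = rect          # where the element sits in the composite
    dx, dy = dest_xy           # where to place it in the program frame
    out = program.copy()
    out[dy:dy + h, dx:dx + w] = composite[y:y + h, x:x + w]
    return out

# Hypothetical sizes: a 64x64 logo from the composite, placed near the
# top-left corner of a 576x720 program frame.
program = np.zeros((576, 720, 3), np.uint8)
composite = np.zeros((288, 352, 3), np.uint8)
print(overlay_element(program, composite, (0, 0, 64, 64), (16, 16)).shape)
```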
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003518188A JP2004537931A (ja) | 2001-07-27 | 2002-07-24 | シーンを符号化する方法及び装置 |
EP02791510A EP1433333A1 (fr) | 2001-07-27 | 2002-07-24 | Procede et dispositif de codage d'une scene |
US10/484,891 US20040258148A1 (en) | 2001-07-27 | 2002-07-24 | Method and device for coding a scene |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0110086A FR2828054B1 (fr) | 2001-07-27 | 2001-07-27 | Procede et dispositif de codage d'une scene |
FR0110086 | 2001-07-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003013146A1 true WO2003013146A1 (fr) | 2003-02-13 |
Family
ID=8866006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FR2002/002640 WO2003013146A1 (fr) | 2001-07-27 | 2002-07-24 | Procede et dispositif de codage d'une scene |
Country Status (5)
Country | Link |
---|---|
US (1) | US20040258148A1 (fr) |
EP (1) | EP1433333A1 (fr) |
JP (1) | JP2004537931A (fr) |
FR (1) | FR2828054B1 (fr) |
WO (1) | WO2003013146A1 (fr) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2438004B (en) | 2006-05-08 | 2011-08-24 | Snell & Wilcox Ltd | Creation and compression of video data |
JP2008131569A (ja) * | 2006-11-24 | 2008-06-05 | Sony Corp | 画像情報伝送システム、画像情報送信装置、画像情報受信装置、画像情報伝送方法、画像情報送信方法、画像情報受信方法 |
TWI382358B (zh) * | 2008-07-08 | 2013-01-11 | Nat Univ Chung Hsing | 虛擬實境資料指示方法 |
KR101791919B1 (ko) | 2010-01-22 | 2017-11-02 | 톰슨 라이센싱 | 예시-기반의 초 해상도를 이용하여 비디오 압축을 위한 데이터 프루닝 |
CN102823242B (zh) | 2010-01-22 | 2016-08-10 | 汤姆森特许公司 | 基于取样超分辨率视频编码和解码的方法和装置 |
WO2012033972A1 (fr) | 2010-09-10 | 2012-03-15 | Thomson Licensing | Procédés et appareil destinés à l'optimisation des décisions d'élagage dans la compression par élagage de données basée sur des exemples |
US20130170564A1 (en) * | 2010-09-10 | 2013-07-04 | Thomson Licensing | Encoding of a picture in a video sequence by example-based data pruning using intra-frame patch similarity |
US8724696B2 (en) * | 2010-09-23 | 2014-05-13 | Vmware, Inc. | System and method for transmitting video and user interface elements |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5325449A (en) * | 1992-05-15 | 1994-06-28 | David Sarnoff Research Center, Inc. | Method for fusing images and apparatus therefor |
US6405095B1 (en) * | 1999-05-25 | 2002-06-11 | Nanotek Instruments, Inc. | Rapid prototyping and tooling system |
US7015954B1 (en) * | 1999-08-09 | 2006-03-21 | Fuji Xerox Co., Ltd. | Automatic video system using multiple cameras |
US6791574B2 (en) * | 2000-08-29 | 2004-09-14 | Sony Electronics Inc. | Method and apparatus for optimized distortion correction for add-on graphics for real time video |
US7827488B2 (en) * | 2000-11-27 | 2010-11-02 | Sitrick David H | Image tracking and substitution system and methodology for audio-visual presentations |
US7027655B2 (en) * | 2001-03-29 | 2006-04-11 | Electronics For Imaging, Inc. | Digital image compression with spatially varying quality levels determined by identifying areas of interest |
IL159537A0 (en) * | 2001-06-28 | 2004-06-01 | Omnivee Inc | Method and apparatus for control and processing of video images |
2001
- 2001-07-27 FR FR0110086A patent/FR2828054B1/fr not_active Expired - Fee Related
2002
- 2002-07-24 US US10/484,891 patent/US20040258148A1/en not_active Abandoned
- 2002-07-24 WO PCT/FR2002/002640 patent/WO2003013146A1/fr active Application Filing
- 2002-07-24 EP EP02791510A patent/EP1433333A1/fr not_active Withdrawn
- 2002-07-24 JP JP2003518188A patent/JP2004537931A/ja active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1996024219A1 (fr) * | 1995-02-02 | 1996-08-08 | Digi-Media Vision Limited | Systeme de transmission |
US5657096A (en) * | 1995-05-03 | 1997-08-12 | Lukacs; Michael Edward | Real time video conferencing system and method with multilayer keying of multiple video images |
US6075567A (en) * | 1996-02-08 | 2000-06-13 | Nec Corporation | Image code transform system for separating coded sequences of small screen moving image signals of large screen from coded sequence corresponding to data compression of large screen moving image signal |
JPH1040357A (ja) * | 1996-07-24 | 1998-02-13 | Nippon Telegr & Teleph Corp <Ntt> | 映像作成方法 |
FR2786353A1 (fr) * | 1998-11-25 | 2000-05-26 | Thomson Multimedia Sa | Procede et dispositif de codage d'images selon la norme mpeg pour l'incrustation d'imagettes |
EP1107605A2 (fr) * | 1999-12-02 | 2001-06-13 | Canon Kabushiki Kaisha | Procédé de codage d'animation dans un fichier d'image |
Non-Patent Citations (4)
Title |
---|
BOYER D G ET AL: "Multimedia information associations in the Personal Presence System", BELLCORE, 311 NEWMAN SPRINGS RD, RED BANK, NJ 07701 USA, XP010232363 * |
LOUI A ET AL: "VIDEO COMBINING FOR MULTIPOINT VIDEOCONFERENCING", PROCEEDINGS OF IS&T ANNUAL CONFERENCE, XX, XX, 7 May 1995 (1995-05-07), pages 48 - 50, XP000791051 * |
MON-SONG CHEN ET AL: "Multiparty talks", IMAGE PROCESSING, EUROPEAN TECHNOLOGY PUBLISHING, LONDON, GB, vol. 5, no. 3, 1993, pages 23 - 25, XP002101200, ISSN: 1464-1089 * |
PATENT ABSTRACTS OF JAPAN vol. 1998, no. 06 30 April 1998 (1998-04-30) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007143981A2 (fr) * | 2006-06-12 | 2007-12-21 | Attag Gmbh | Procédé et dispositif de production d'un flux de transport numérique pour un programme vidéo |
WO2007143981A3 (fr) * | 2006-06-12 | 2008-02-28 | Attag Gmbh | Procédé et dispositif de production d'un flux de transport numérique pour un programme vidéo |
Also Published As
Publication number | Publication date |
---|---|
JP2004537931A (ja) | 2004-12-16 |
EP1433333A1 (fr) | 2004-06-30 |
US20040258148A1 (en) | 2004-12-23 |
FR2828054B1 (fr) | 2003-11-28 |
FR2828054A1 (fr) | 2003-01-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VN YU ZA ZM Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2003518188 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2002791510 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 2002791510 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10484891 Country of ref document: US |