EP1180307A2 - Method and apparatus for reducing false positives in cut detection - Google Patents

Method and apparatus for reducing false positives in cut detection

Info

Publication number
EP1180307A2
Authority
EP
European Patent Office
Prior art keywords
frames
luminance values
luminance
change
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00991976A
Other languages
English (en)
French (fr)
Inventor
Thomas McGee
Nevenka Dimitrova
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Publication of EP1180307A2 publication Critical patent/EP1180307A2/de

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/11Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G06F16/785Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using colour or luminescence
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G06F16/7864Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using domain-transform features, e.g. DCT or wavelet transform coefficients
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/142Detection of scene cut or scene change
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/179Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/87Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/147Scene change detection
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00Record carriers by type
    • G11B2220/60Solid state media
    • G11B2220/65Solid state media wherein solid state memory is used for storing indexing information or metadata

Definitions

  • the present invention is in general related to an apparatus that detects significant scenes of a source video and selects representative keyframes therefrom.
  • the present invention in particular relates to determining whether a detected scene change is really a scene change or merely a uniform change in intensity of the image such as when camera flashes occur during a news broadcast etc.
  • Video content analysis uses automatic and semi-automatic methods to extract information that describes contents of the recorded material.
  • Video content indexing and analysis extracts structure and meaning from visual cues in the video.
  • a video clip is taken from a TV program or a home video by selecting frames which reflect the different scenes in a video.
  • Zhang's method may produce skewed results if the differences between respective blocks of two frames are approximately the same with respect to color or intensity. In such a case the system may detect a scene change when in fact the change is due only to camera flashes, such as those occurring during a news broadcast.
  • a system which will create a visual index for a video source that was previously recorded or while being recorded, which is useable and more accurate in selecting significant keyframes, while providing a useable amount of information for a user.
  • This system will detect scene changes and select a key frame from each scene but ignore the detection of scene changes and the selection of key frames where the changes between two frames result from only a substantially uniform change in luminance of substantially all blocks or macroblocks within the frame.
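The selection rule described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the patent's implementation: `significant_change` and `uniform_luminance_change` are hypothetical placeholders for the detectors described later in the document.

```python
def select_keyframes(frames, significant_change, uniform_luminance_change):
    """Return indices of frames chosen as keyframes: one for the first
    frame, then one at each detected cut that is not merely a uniform
    change in luminance (e.g. a camera flash)."""
    keyframes = [0] if frames else []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        if significant_change(prev, cur) and not uniform_luminance_change(prev, cur):
            keyframes.append(i)
    return keyframes

# Toy demo: each frame is reduced to a single brightness number.
frames = [10, 10, 200, 11, 12]                 # frame 2 is a camera flash
sig = lambda a, b: abs(a - b) > 50             # "significantly different"
uniform = lambda a, b: abs(a - b) > 150        # crude stand-in for a flash
print(select_keyframes(frames, sig, uniform))  # [0]  -- the flash is ignored
```

Note that neither the flash frame nor the frame returning to normal brightness produces a keyframe, matching the behaviour described later for uniform luminance changes.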
  • Figure 1 illustrates a video archival process
  • Figures 2A and 2B are block diagrams of devices used in creating a visual index in accordance with a preferred embodiment of the invention;
  • Figure 3 illustrates a frame, a macroblock, and several blocks
  • Figure 4 illustrates several DCT coefficients of a block
  • Figure 5 illustrates a macroblock and several blocks with DCT coefficients
  • Figure 6 illustrates a stream of video where a change in luminance has occurred.
  • Two phases exist in the video content indexing process: archival and retrieval.
  • video content is analyzed during a video analysis process and a visual index is created.
  • automatic significant scene detection is a process of identifying scene changes, i.e., "cuts" (video cut detection or segmentation detection) and identifying static scenes (static scene detection).
  • a particular representative frame called a keyframe is extracted. It is therefore important that scene changes be identified correctly; otherwise too many keyframes will be chosen for a single scene, or not enough keyframes for multiple scene changes.
  • Uniform luminance detection is the process of identifying a change in luminance between two frames and is explained in further detail below.
  • a video archival process is shown in Figure 1 for a source tape with previously recorded source video, which may include audio and/or text, although a similar process may be followed for other storage devices with previously saved visual information, such as an MPEG file.
  • a visual index is created based on the source video.
  • Figure 1 illustrates an example of the first process (for previously recorded source tape) for a videotape.
  • the source video is rewound, if required, by a playback/recording device such as a VCR.
  • the source video is played back. Signals from the source video are received by a television, a VCR or other processing device.
  • a media processor in the processing device or an external processor receives the video signals and formats the video signals into frames representing pixel data (frame grabbing).
  • a host processor separates each frame into blocks, and transforms the blocks and their associated data to create DCT (discrete cosine transform) coefficients; performs significant scene detection, uniform change in luminance detection and keyframe selection; and builds and stores keyframes as a data structure in a memory, disk or other storage medium.
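The block/DCT step can be sketched as follows. This NumPy version is illustrative (not the patent's implementation): it splits a grayscale frame into 8 x 8 blocks and computes the orthonormal 2-D DCT-II of each block, of which only the DC coefficient is needed by the luminance comparison described later.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block, built from the 1-D basis."""
    N = block.shape[0]
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2.0 / N)
    C[0, :] = np.sqrt(1.0 / N)          # DC basis row
    return C @ block @ C.T

def dc_values(frame):
    """DC coefficient of every 8x8 block of a grayscale frame, raster order."""
    h, w = frame.shape
    return np.array([dct2(frame[y:y + 8, x:x + 8])[0, 0]
                     for y in range(0, h, 8)
                     for x in range(0, w, 8)])

# For an orthonormal 8x8 DCT, the DC value is 8 * mean(block):
print(round(dct2(np.full((8, 8), 100.0))[0, 0], 6))   # 800.0
```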
  • the source tape is rewound to its beginning and in step 106, the source tape is set to record information.
  • the data structure is transferred from the memory to the source tape, creating the visual index. The tape may then be rewound to view the visual index. (Instead of a tape, any storage medium can be used or the index could be stored and/or created at the server.)
  • in step 112 of Figure 1, the frame grabbing process of step 103 occurs as the video (film, etc.) is being recorded.
  • Steps 103 and 104 are more specifically illustrated in Figures 2A and 2B.
  • Video exists either in analog (continuous data) or digital (discrete data) form.
  • the present example operates in the digital domain and thus uses digital form for processing.
  • the source video or video signal is a series of individual images or video frames displayed at a rate high enough (in this example 30 frames per second) so the displayed sequence of images appears as a continuous picture stream.
  • These video frames may be uncompressed (NTSC or raw video) or compressed data in a format such as MPEG, MPEG-2, MPEG-4, or Motion JPEG.
  • the information in an uncompressed video is first segmented into frames in a media processor 202, using a frame grabbing technique 204 such as present on the Intel Smart Video Recorder III.
  • a frame 302 represents one television, video, or other visual image and includes 352 x 240 pixels.
  • the frames 302 are each broken into blocks 304 of, in this example, 8 x 8 pixels in the host processor 210 ( Figure 2A).
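For the 352 x 240 frame size given above, the block bookkeeping works out as follows (a trivial check, not from the patent):

```python
# Number of 8x8 blocks and 16x16 macroblocks in a 352 x 240 frame.
W, H = 352, 240
blocks = (W // 8) * (H // 8)          # 44 * 30
macroblocks = (W // 16) * (H // 16)   # 22 * 15
print(blocks, macroblocks)            # 1320 330
```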
  • a macroblock creator 206 (Figure 2A) combines the luminance and chrominance blocks to form a macroblock 308.
  • 4:2:0 is being used although other formats such as 4:1:1 and 4:2:2 could easily be used by one skilled in the art.
  • a macroblock 308 has six blocks: four luminance, Y1, Y2, Y3, and Y4; and two chrominance, Cr and Cb; each block within a macroblock being 8x8 pixels.
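A minimal sketch of assembling such a six-block 4:2:0 macroblock, assuming 16 x 16 input tiles and simple 2 x 2 averaging for the chroma subsampling (the averaging choice is an assumption; the text does not specify the subsampling filter):

```python
import numpy as np

def subsample_420(c):
    """2x2 average subsampling: 16x16 chroma tile -> one 8x8 block."""
    return c.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def macroblock(y, cb, cr):
    """Six 8x8 blocks of a 4:2:0 macroblock: Y1..Y4 at full resolution,
    Cb and Cr subsampled (cf. the six-block layout described above)."""
    return {"Y1": y[:8, :8], "Y2": y[:8, 8:],
            "Y3": y[8:, :8], "Y4": y[8:, 8:],
            "Cb": subsample_420(cb), "Cr": subsample_420(cr)}

mb = macroblock(np.ones((16, 16)), np.zeros((16, 16)), np.zeros((16, 16)))
print(sorted(mb))   # ['Cb', 'Cr', 'Y1', 'Y2', 'Y3', 'Y4']
```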
  • the video signal may also represent a compressed image using a compression standard such as Motion JPEG (Joint Photographic Experts Group) and MPEG (Motion Pictures Experts Group).
  • the MPEG signal is broken into frames using a frame or bitstream parsing technique by a frame parser 205.
  • the frames are then sent to an entropy decoder 214 in the media processor 203 and to a table specifier 216.
  • the entropy decoder 214 decodes the MPEG signal using data from the table specifier 216, using, for example, Huffman decoding, or another decoding technique.
  • the decoded signal is next supplied to a dequantizer 218 which dequantizes the decoded signal using data from the table specifier 216. Although shown as occurring in the media processor 203, these steps (steps 214-218) may occur in either the media processor 203, host processor 211 or even another external device depending upon the devices used.
  • the DCT coefficients could be delivered directly to the host processor. In all these approaches, processing may be performed in real time.
  • the host processor 210, which may be, for example, an Intel® Pentium chip or other processor or multiprocessor, a Philips® TriMedia chip or any other multimedia processor; a computer; an enhanced VCR, record/playback device, or television; or any other processor, performs significant scene detection, key frame selection, and building and storing a data structure in an index memory, such as, for example, a hard disk, file, tape, DVD, or other storage medium.
  • the present invention attempts to detect when a scene of a video has changed or a static scene has occurred.
  • a scene may represent one or more related images.
  • in significant scene detection, two consecutive frames are compared and, if the frames are determined to be significantly different, a scene change is determined to have occurred between the two frames; if they are determined to be significantly alike, processing is performed to determine if a static scene has occurred.
  • in uniform luminance change detection, if a scene change has been detected, the luminance values of the two frames are compared; if a uniform change in luminance is the only major change between the two frames, it is determined that a scene change has not occurred between the two frames.
  • Fig. 2A shows an example of a host processor 210 with luminance change detector 240.
  • the DCT blocks are provided by macroblock creator 206 and DCT transformer 220.
  • Fig. 2B shows an example of host processor 211 with significant scene detector 230 and luminance change detector 240.
  • the DCT blocks are provided by dequantizer 218.
  • the significant scene processor 230 detects scene changes between two frames and then the luminance detector 240 determines whether in fact a scene change has occurred or whether the differences between the two frames are due to a uniform change in luminance. If a scene change occurred, a keyframe is selected and provided to frame memory 234 and then provided to the index memory 260. If a uniform change in luminance is detected, another keyframe is not selected from this same scene.
  • the present invention addresses the case where two frames are compared and a substantial difference is detected between them. There are many reasons why this substantial difference may not be due to a scene change.
  • the video may be a news broadcast where the videographer is taping a press briefing. During this press briefing many camera flashes occur, which cause the luminance between two frames to change. Instead of this being detected as a scene change and another keyframe chosen, the present invention detects the uniform change in luminance and treats the frame as an image from the same scene. Similarly, if the lights are turned on in a room, or the lights flash in a disco, a scene change should not be detected, as the difference between the two frames is merely a uniform change in luminance.
  • the present method and device use comparisons of DCT (Discrete Cosine Transform) coefficients.
  • each received frame 302 is processed individually in the host processor 210 to create 8 x 8 blocks 440.
  • the host processor 210 processes each 8 x 8 block which contains spatial information, using a discrete cosine transformer 220 to extract DCT coefficients and create the macroblock 308.
  • the DCT coefficients may be extracted after dequantization and need not be processed by a discrete cosine transformer. Additionally, as previously discussed, DCT coefficients may be automatically extracted depending upon the devices used.
  • the DCT transformer provides each of the blocks 440 (Figure 4), Y1, Y2, Y3, and Y4, Cr and Cb, with DCT coefficients.
  • the first (upper-left) DCT coefficient of each block contains the DC information (DC value) and the remaining DCT coefficients contain AC information (AC values).
  • the AC values increase in frequency in a zig-zag order from the right of the DC value, to the DCT coefficient just beneath the DC value, as partially shown in Figure 4.
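The zig-zag ordering mentioned here can be generated programmatically. This small helper is illustrative, using the standard JPEG/MPEG anti-diagonal traversal; it lists the (row, column) coordinates of an 8 x 8 block starting from the DC value:

```python
def zigzag_order(n=8):
    """(row, col) pairs of an n x n block in zig-zag order: the DC value
    first, then AC coefficients along anti-diagonals of rising frequency."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],                       # anti-diagonal
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

print(zigzag_order()[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```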
  • the Y values are the luminance values.
  • processing is limited to detecting the change in DC values between corresponding blocks of two frames, to produce results more quickly and limit processing without a significant loss in efficiency; however, one skilled in the art could instead compare the luminance of corresponding macroblocks, or use any other method which detects a change in luminance.
  • the method and device in accordance with a preferred embodiment of the instant invention compares the DC values of respective blocks of two frames to determine whether a substantially uniform change in luminance has occurred.
  • n is the number of blocks within a frame.
  • F1 is the first frame and F2 is the second frame, where F1[i] is the ith block in the first frame and F2[i] is the ith block in the second frame.
  • the above computation takes the absolute value of the difference between the DC coefficient of each block in the first frame and the respective DC coefficient in the second frame. Each difference is compared to diffmin and diffmax to track the minimum and maximum differences between corresponding DC coefficients of the two frames. If the difference between the maximum difference (diffmax) and the minimum difference (diffmin) is less than a certain threshold, then all DC values have changed by approximately the same amount, indicating a substantially uniform change in luminance rather than a scene change.
  • the threshold value is chosen anywhere between 0 and 10% of the final diffmax value, but depending on the application this threshold may vary.
  • a keyframe is not chosen from both frame sequences when such a uniform change is detected. It should be noted that other methods of detecting changes in luminance can be used, such as histograms or wavelets, and the invention is not limited to the embodiment described above.
  • the ratios of the luminance changes compared to the ratios of the chrominance changes could be used to determine the change in luminance, or any other formula for determining luminance change.
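Putting the diffmin/diffmax test above into code gives a short sketch. The frame contents and the exact threshold policy are illustrative; the text only says the threshold may be anywhere up to 10% of the final diffmax value.

```python
def uniform_luminance_change(dc1, dc2, rel_threshold=0.10):
    """True when two frames differ only by a substantially uniform
    luminance shift.  dc1/dc2 are DC coefficients of corresponding
    luminance blocks; the spread between the largest and smallest
    per-block |difference| is compared to rel_threshold * diffmax."""
    diffs = [abs(a - b) for a, b in zip(dc1, dc2)]
    diffmax, diffmin = max(diffs), min(diffs)
    return diffmax - diffmin <= rel_threshold * diffmax

frame1 = [100, 120, 90, 110]          # per-block DC values of frame 1
flash  = [151, 170, 141, 160]         # every block brightened by ~50
cut    = [10, 200, 95, 250]           # block differences vary wildly
print(uniform_luminance_change(frame1, flash))   # True  (flash, not a cut)
print(uniform_luminance_change(frame1, cut))     # False (genuine cut)
```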
  • Figs. 6A-D illustrate two scenarios where a scene change is detected but the difference between the two frames is merely a change in luminance.
  • Fig. 6A is an example of an image during a camera flash.
  • Fig. 6B shows this same image after the camera flash.
  • a top view of a disco scene is shown in Fig. 6C during a time period when the lights are off.
  • Fig. 6D shows this same scene when the lights are on.
  • the present invention is shown using DCT coefficients; however, one may instead use representative values such as wavelet coefficients or histograms, or a function which operates on a sub-area of the image to give representative values for that sub-area.
  • the present invention has been described with reference to a video indexing system; however, it pertains in general to detecting a uniform change in luminance between two frames, and can therefore be used as a search device to detect scenes where there are camera flashes, or alternatively as an archival method to pick representative frames.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Closed-Circuit Television Systems (AREA)
EP00991976A 1999-12-30 2000-12-15 Method and apparatus for reducing false positives in cut detection Withdrawn EP1180307A2 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US47708599A 1999-12-30 1999-12-30
US477085 1999-12-30
PCT/EP2000/012864 WO2001050737A2 (en) 1999-12-30 2000-12-15 Method and apparatus for reducing false positives in cut detection

Publications (1)

Publication Number Publication Date
EP1180307A2 (de) 2002-02-20

Family

ID=23894478

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00991976A Withdrawn EP1180307A2 (de) Method and apparatus for reducing false positives in cut detection

Country Status (4)

Country Link
EP (1) EP1180307A2 (de)
JP (1) JP2003519971A (de)
CN (1) CN1252982C (de)
WO (1) WO2001050737A2 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005091050A1 (en) 2004-03-12 2005-09-29 Koninklijke Philips Electronics N.V. Multiview display device

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
US6766098B1 (en) 1999-12-30 2004-07-20 Koninklijke Philips Electronics N.V. Method and apparatus for detecting fast motion scenes
US7333712B2 (en) 2002-02-14 2008-02-19 Koninklijke Philips Electronics N.V. Visual summary for scanning forwards and backwards in video content
CA2540575C (en) 2003-09-12 2013-12-17 Kevin Deng Digital video signature apparatus and methods for use with video program identification systems
CN100496112C (zh) * 2004-01-13 2009-06-03 英业达股份有限公司 多层次影片浏览系统及其方法
CN100496113C (zh) * 2004-01-14 2009-06-03 英业达股份有限公司 影片的特征影像撷取系统及其方法
KR100825737B1 (ko) * 2005-10-11 2008-04-29 한국전자통신연구원 스케일러블 비디오 코딩 방법 및 그 코딩 방법을 이용하는코덱
CN100428801C (zh) * 2005-11-18 2008-10-22 清华大学 一种视频场景切换检测方法
CN102724385B (zh) * 2012-06-21 2016-05-11 浙江宇视科技有限公司 一种视频智能分析方法及装置
CN108769458A (zh) * 2018-05-08 2018-11-06 东北师范大学 一种深度视频场景分析方法

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
GB2231746B (en) * 1989-04-27 1993-07-07 Sony Corp Motion dependent video signal processing
US5969755A (en) * 1996-02-05 1999-10-19 Texas Instruments Incorporated Motion based event detection system and method
US5767922A (en) * 1996-04-05 1998-06-16 Cornell Research Foundation, Inc. Apparatus and process for detecting scene breaks in a sequence of video frames
US5920360A (en) * 1996-06-07 1999-07-06 Electronic Data Systems Corporation Method and system for detecting fade transitions in a video signal
US6137544A (en) * 1997-06-02 2000-10-24 Philips Electronics North America Corporation Significant scene detection and frame filtering for a visual indexing system

Non-Patent Citations (1)

Title
See references of WO0150737A2 *

Also Published As

Publication number Publication date
WO2001050737A2 (en) 2001-07-12
WO2001050737A3 (en) 2001-11-15
CN1349711A (zh) 2002-05-15
JP2003519971A (ja) 2003-06-24
CN1252982C (zh) 2006-04-19

Similar Documents

Publication Publication Date Title
US6496228B1 (en) Significant scene detection and frame filtering for a visual indexing system using dynamic thresholds
EP0944874B1 (de) Significant scene detection and frame filtering for a visual indexing system
US6125229A (en) Visual indexing system
US6766098B1 (en) Method and apparatus for detecting fast motion scenes
JP4942883B2 (ja) Method for summarizing a video using motion descriptors and color descriptors
CN100544416C (zh) Commercial detection in audio-visual content based on scene change distance
KR100915847B1 (ko) Streaming video bookmarks
US6469749B1 (en) Automatic signature-based spotting, learning and extracting of commercials and other video content
US5719643A (en) Scene cut frame detector and scene cut frame group detector
CN1240218C (zh) Method and apparatus for replacing the video content of undesired commercial breaks or other video sequences
KR20030026529A (ko) Keyframe-based video summary system
Fernando et al. Scene change detection algorithms for content-based video indexing and retrieval
EP1180307A2 (de) Method and apparatus for reducing false positives in cut detection
Lie et al. News video summarization based on spatial and motion feature analysis
KR100812041B1 (ko) Automatic indexing method using an improved scene change detection method
Lee et al. Automatic video summarizing tool using MPEG-7 descriptors for STB

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20020515

RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20070629