WO2006129622A1 - Device for detecting a cut point of a moving image based on feature value prediction error - Google Patents

Device for detecting a cut point of a moving image based on feature value prediction error Download PDF

Info

Publication number
WO2006129622A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
feature
cut point
current frame
prediction error
Prior art date
Application number
PCT/JP2006/310706
Other languages
English (en)
Japanese (ja)
Inventor
Kouta Iwamoto
Original Assignee
Nec Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nec Corporation filed Critical Nec Corporation
Priority to JP2007518981A priority Critical patent/JP4924423B2/ja
Publication of WO2006129622A1 publication Critical patent/WO2006129622A1/fr

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content

Definitions

  • The present invention relates to an apparatus for detecting a cut point of a moving image.
  • A cut point is a boundary at which one shot (a video section captured continuously with a single camera) instantaneously switches to the next.
  • A conventional moving image cut point detection device detects cut points of a moving image based on a comparison of feature values between frames.
  • This conventional apparatus includes a frame feature extraction unit 11, an inter-frame difference value calculation unit 12, and a cut point determination unit 13.
  • The frame feature extraction unit 11 extracts a feature value from each frame of the moving image and outputs it to the inter-frame difference value calculation unit 12.
  • The inter-frame difference value calculation unit 12 compares the feature values between frames, calculates a difference value (or similarity), and outputs it to the cut point determination unit 13.
  • The cut point determination unit 13 determines that a frame is a cut point when the inter-frame difference value is large (or the similarity is small), as illustrated in the sketch below.
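  • For illustration, the following is a minimal sketch (not part of the patent) of this conventional scheme in Python, assuming grayscale frames supplied as NumPy arrays; the normalized intensity histogram, L1 difference, and threshold value are arbitrary stand-ins for whichever feature, difference measure, and threshold a concrete implementation uses:

        import numpy as np

        def frame_histogram(frame, bins=64):
            # Feature extraction (unit 11): a normalized intensity histogram.
            hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
            return hist / max(hist.sum(), 1)

        def detect_cuts_conventional(frames, threshold=0.4):
            # Inter-frame difference (unit 12) and thresholding (unit 13).
            cuts, prev = [], None
            for n, frame in enumerate(frames):
                feat = frame_histogram(frame)
                if prev is not None and np.abs(feat - prev).sum() > threshold:
                    cuts.append(n)  # large difference -> declare a cut point
                prev = feat
            return cuts

  • A flash or a momentary disturbance inflates exactly this single inter-frame difference, which is why such a detector over-reports cut points; the prediction-based scheme of the present invention addresses this.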
  • Patent Document 1 describes a cut point detection method using pixel values (luminance/color information) as feature values.
  • Patent Document 2 describes a cut point detection method using a histogram as the feature value.
  • Patent Document 3 describes a cut point detection method using motion vectors as feature values,
  • Patent Document 4 describes one using frequency information as the feature value, and
  • Patent Document 5 describes one using edge information as the feature value.
  • For encoded moving images, methods have also been proposed that use encoding information as the feature value or as the inter-frame difference value.
  • Patent Document 6 describes a cut point detection method that uses the motion vectors in the encoded stream as feature values.
  • Patent Document 7 describes a cut point detection method that uses coding mode information as the inter-frame difference value.
  • The conventional cut point detection methods of Patent Documents 1 to 7, all based on comparing feature values between frames, share the following problem: a temporary fluctuation of the video, such as a flash, produces a large inter-frame difference and causes cut points to be detected excessively.
  • Patent Document 8 describes a method that detects a flash using the brightness information of frames and eliminates the excessive cut point detection caused by the flash.
  • However, this method cannot cope with temporary fluctuations of the video other than flashes, such as momentary video disturbances.
  • Patent Document 9 describes a method in which a frame detected as a cut point is compared with the frames before and after it, and when the similarity is high, the cut point is judged to be an excessive detection caused by a temporary fluctuation of the video.
  • With this method, however, when the video changes continuously (for example, when flashes occur in succession) or in a moving scene, it is very difficult to distinguish correct cut points from excessively detected ones.
  • Patent Document 1: Japanese Patent Laid-Open No. 5-37853 (paragraphs 0011-0016)
  • Patent Document 2: Japanese Patent Laid-Open No. 2000-36966 (paragraphs 0028-0034)
  • Patent Document 3: Japanese Patent Laid-Open No. 2003-196662 (paragraphs 0034-0041)
  • Patent Document 4: Japanese Patent Laid-Open No. 2002-133420 (paragraphs 0021-0031)
  • Patent Document 5: Japanese Patent Laid-Open No. 6-237414 (paragraphs 0017-0018)
  • Patent Document 6: Japanese Patent Laid-Open No. 2002-281505 (paragraphs 0019-0025)
  • Patent Document 7: Japanese Patent Laid-Open No. 11-252509 (paragraphs 0026-0028)
  • Patent Document 8: Japanese Patent Laid-Open No. 2002-101337 (paragraphs 0031-0034)
  • Patent Document 9: Japanese Patent Laid-Open No. 11-252509 (paragraphs 0023-0024)
  • An object of the present invention is to provide a moving image cut point detection apparatus and method that detect cut points of a moving image with high accuracy, without excessive detection, even in moving scenes or when a temporary fluctuation of the video occurs.
  • A moving image cut point detection apparatus according to the present invention includes: frame feature extraction means for extracting a feature value of each frame constituting an input moving image; prediction means for selecting each frame in turn as the determination target and calculating predicted feature values for the current frame (the frame currently under determination) and for each subsequent frame up to a predetermined number, using the feature values, extracted by the frame feature extraction means, of a predetermined number of frames preceding the current frame;
  • prediction error calculation means for comparing the feature value of each frame at and after the current frame, extracted by the frame feature extraction means, with the predicted value for the corresponding frame calculated by the prediction means, and calculating a prediction error between the feature value and the predicted value for each frame at and after the current frame;
  • and cut point determination means for determining whether the prediction error of each frame at and after the current frame satisfies a predetermined criterion, and determining the current frame to be a cut point when every prediction error satisfies the criterion.
  • Because the predicted feature values of the current frame and the subsequent frames are calculated from the feature values of a group of frames preceding the current frame, that is, from the feature values of a plurality of past frames,
  • the prediction takes the transition of the feature values into account.
  • Since the cut point is determined based on the prediction error between the calculated predicted value and the actual feature value, the apparatus can follow changes of the feature value in a moving scene (that is, it absorbs inter-frame feature differences within a scene that contains motion). Cut points of the moving image can therefore be detected with high accuracy, without excessive detection, in moving scenes.
  • Furthermore, whether the current frame is a cut point is determined based on the prediction errors of a plurality of frames at and after the current frame.
  • Consequently, even if a temporary fluctuation of the video causes a large change of the feature value in the current frame or in some of the frames at and after it,
  • the current frame is not determined to be a cut point. Cut points of the moving image can therefore be detected with high accuracy, without excessive detection, even when a temporary fluctuation of the video occurs, such as a flash or a momentary disturbance of the image.
  • The cut point determination means may compare the prediction error of each frame at and after the current frame, calculated by the prediction error calculation means, with a threshold given as input,
  • and determine the current frame to be a cut point when every prediction error is larger than the threshold.
  • The threshold may be given as a different value for each frame offset from the current frame.
  • The threshold may also be supplied by threshold determination means that receives, as input, a probability distribution of prediction errors observed in advance and a rejection rate, determines the threshold that realizes the rejection rate under that probability distribution, and outputs it. With this configuration, a desired rejection rate can be specified.
  • The feature value may be brightness information, color information, variance information, edge information, texture information, shape information, or motion information of the frame image, or a visual feature defined in the international standard ISO/IEC 15938-3.
  • The prediction means may calculate the predicted value of the feature value by a linear prediction method based on an autoregressive model.
  • Alternatively, the prediction means may calculate the predicted value of the feature value with a Kalman filter.
  • The prediction error calculation means may calculate the Euclidean distance between the feature value and its predicted value as the prediction error.
  • Alternatively, the prediction error calculation means may obtain a prediction error vector, that is, the per-dimension difference between the feature value and its predicted value,
  • and calculate, as the prediction error, the Mahalanobis distance of that vector with respect to the mean vector of prediction error vectors obtained by learning in advance.
  • FIG. 1 is a block diagram of a moving image cut point detection device of a conventional example.
  • FIG. 2 is a block diagram of the moving image cut point detection device according to the first embodiment of the present invention.
  • FIG. 3 is a flowchart showing the operation of the moving image cut point detection apparatus according to the first embodiment of the present invention.
  • FIG. 4 is a block diagram of a moving image cut point detection device according to a second embodiment of the present invention.
  • The moving image cut point detection apparatus according to the first embodiment includes a frame feature extraction unit 21, a prediction unit 22, a prediction error calculation unit 23, and a cut point determination unit 24.
  • The frame feature extraction unit 21 extracts a feature value from each frame of the moving image given as input.
  • The feature value is at least one of: feature information of a type that can be extracted by image processing, and feature information described in a predetermined format as information accompanying the frame.
  • A feature value extracted by image processing may be obtained by processing only the frame itself, or by processing a plurality of frames including neighboring frames (for example, a motion vector obtained from the frame and an adjacent frame); the choice is arbitrary.
  • The number of dimensions of the feature value is also arbitrary.
  • Examples of feature values extracted by the frame feature extraction unit 21 include brightness information, color information, variance information, histogram information, edge information, texture information, shape information, and motion information of the frame image, but the feature values are not limited to these.
  • Features specified in the international standard ISO/IEC 15938-3, such as Dominant Color, Color Layout, Scalable Color, Color Structure, Edge Histogram, Homogeneous Texture, Texture Browsing, Contour Shape, Shape 3D, Camera Motion, and Motion Activity, may also be used. A combination of two or more of these feature values may be used as the feature value extracted by the frame feature extraction unit 21.
  • The feature value may be extracted from the entire image,
  • or the image may be divided into a plurality of small regions, a feature value extracted from each small region, and the set of these used as the feature value extracted by the frame feature extraction unit 21.
  • Principal component analysis may also be applied to the various extracted feature values, and the principal component features obtained by projecting them onto the eigenspace given by the analysis
  • may be used as the feature value extracted by the frame feature extraction unit 21.
  • In this way, the frame feature extraction unit 21 synthesizes, from the multiple types of extracted feature information, a small number of aggregate features in which correlated components are combined,
  • and this small set of principal component features can be used for cut point determination, as sketched below.
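  • A minimal sketch of such a projection, assuming the raw feature vectors of the frames are stacked row-wise in a matrix; the component count is an arbitrary choice:

        import numpy as np

        def pca_projection(feature_matrix, n_components=8):
            # Rows are frames, columns are concatenated raw feature values.
            mean = feature_matrix.mean(axis=0)
            centered = feature_matrix - mean
            # Principal axes via SVD of the centered data (the eigenspace).
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]
            # Projected principal component features, plus the projection data
            # needed to map new frames into the same eigenspace.
            return centered @ basis.T, mean, basis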
  • The prediction unit 22 receives from the frame feature extraction unit 21 the feature values of a group of frames (that is, a plurality of frames) preceding the current frame, and uses them to calculate the predicted feature value of each frame at and after the current frame, including the current frame itself.
  • In the following, the frame number of the current frame is represented by N,
  • the past frames used for calculating the predicted values are frames N-1 to N-M (where M is an integer constant greater than or equal to 2), and the frames for which predicted values are calculated are frames N to N+T (where T is an integer constant greater than or equal to 0).
  • The prediction unit 22 receives the feature values of frame N-1, frame N-2, ..., frame N-M from the frame feature extraction unit 21,
  • and uses them to calculate the predicted feature values from the current frame N to frame N+T.
  • T may be 0, in which case the only frame for which a predicted value is calculated is the current frame N.
  • One example of the prediction method is a linear prediction method based on an autoregressive model.
  • In this case, the feature values of frame N-1, frame N-2, ..., frame N-M are weighted with autoregressive coefficients obtained by learning,
  • and their sum gives the predicted value.
  • Expressing the autoregressive coefficients as A1, A2, A3, ..., AM and the feature value of frame k as F(k),
  • the predicted value of the feature value of the current frame N can be calculated as F^(N) = A1·F(N-1) + A2·F(N-2) + ... + AM·F(N-M). A sketch of this calculation follows.
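  • A minimal sketch of this weighted sum, assuming per-frame feature vectors as NumPy arrays and autoregressive coefficients A1..AM already obtained by learning (here simply passed in):

        import numpy as np

        def predict_feature(past_features, ar_coeffs):
            # past_features: [F(N-1), F(N-2), ..., F(N-M)] feature vectors.
            # ar_coeffs:     [A1, A2, ..., AM] learned AR coefficients.
            assert len(past_features) == len(ar_coeffs)
            pred = np.zeros_like(past_features[0], dtype=float)
            for a, feat in zip(ar_coeffs, past_features):
                pred += a * feat  # weighted sum of past feature values
            return pred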
  • A Kalman filter can also be used as the prediction method, as sketched below.
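  • The patent does not fix the state model, so the following sketch assumes a simple constant-velocity model per feature dimension, which is one possible choice rather than the prescribed one:

        import numpy as np

        class FeatureKalmanPredictor:
            # State per dimension: (feature value, per-frame velocity).
            def __init__(self, dim, process_var=1e-3, meas_var=1e-2):
                self.x = np.zeros((dim, 2))               # states
                self.P = np.tile(np.eye(2), (dim, 1, 1))  # covariances
                self.F = np.array([[1.0, 1.0], [0.0, 1.0]])
                self.Q = process_var * np.eye(2)
                self.R = meas_var

            def predict(self):
                # Predicted feature value of the next frame, per dimension.
                return (self.F @ self.x[..., None])[:, 0, 0]

            def update(self, z):
                # Fold in the observed feature vector z of the newest frame.
                H = np.array([1.0, 0.0])
                for i, zi in enumerate(z):
                    x = self.F @ self.x[i]
                    P = self.F @ self.P[i] @ self.F.T + self.Q
                    S = H @ P @ H + self.R
                    K = (P @ H) / S
                    self.x[i] = x + K * (zi - H @ x)
                    self.P[i] = P - np.outer(K, H) @ P

  • Feeding each past frame's feature vector to update() and then calling predict() yields the predicted value used in place of the autoregressive sum.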
  • The prediction error calculation unit 23 receives the feature values from the current frame N to frame N+T from the frame feature extraction unit 21 and the corresponding predicted values from the prediction unit 22, compares the feature value of each frame with its predicted value, and calculates the prediction errors from the current frame N to frame N+T.
  • The prediction error is a numerical value indicating how far the predicted value input from the prediction unit 22 deviates from the actual feature value input from the frame feature extraction unit 21.
  • The prediction error can be obtained by calculating a distance between the actual feature value input from the frame feature extraction unit 21 and the predicted value input from the prediction unit 22.
  • For example, the Euclidean distance between the actual feature value and the predicted value may be used as the prediction error.
  • Alternatively, a prediction error vector, that is, the per-dimension difference between the actual feature value and the predicted value, may be obtained,
  • and the Mahalanobis distance of that vector with respect to the mean vector of prediction error vectors given by learning in advance may be used as the prediction error. Both measures are sketched below.
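  • Both measures in a short sketch; for the Mahalanobis variant, the mean vector and (inverse) covariance of prediction error vectors are assumed to have been estimated from training video beforehand:

        import numpy as np

        def euclidean_error(actual, predicted):
            return float(np.linalg.norm(actual - predicted))

        def mahalanobis_error(actual, predicted, mean_vec, cov_inv):
            # Per-dimension prediction error vector, measured against the
            # learned distribution of prediction error vectors.
            e = (actual - predicted) - mean_vec
            return float(np.sqrt(e @ cov_inv @ e))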
  • The cut point determination unit 24 receives the prediction errors from the current frame N to frame N+T from the prediction error calculation unit 23, compares each prediction error with a threshold given as input, and determines the current frame N to be a cut point if every prediction error from the current frame N to frame N+T is larger than the threshold. When it determines that the current frame is a cut point, the cut point determination unit 24 outputs the frame number of the current frame as the cut point detection result.
  • The threshold may be given as a different value for each frame offset from the current frame N.
  • For example, the threshold may be increased for frames further from the current frame. In this way, different thresholds can be set according to the probability that a prediction error occurs, as in the sketch below.
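  • For instance, a hypothetical schedule (the base value and growth factor are arbitrary assumptions, not from the patent):

        # Larger thresholds for frames further from the current frame N,
        # reflecting the growing uncertainty of multi-step prediction.
        T = 2
        base_threshold = 0.5
        thresholds = [base_threshold * (1.0 + 0.5 * t) for t in range(T + 1)]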
  • In the description of the operation, the frame number of the current frame that is the target of cut determination is represented as N,
  • the past frames used to calculate the predicted values are frames N-1 to N-M (where M is an integer constant greater than or equal to 2),
  • and the frames at and after the current frame N for which predicted values are calculated are frames up to N+T (where T is an integer constant greater than or equal to 0).
  • In step 102, it is determined whether N+T exceeds the end frame number of the moving image; if it does, the process is terminated.
  • If N+T does not exceed the end frame number in step 102, the process proceeds to step 103, where, as initial processing for a new shot, the frame feature extraction unit 21 extracts feature values from each frame from frame N-M to frame N+T-1.
  • In step 104, the frame feature extraction unit 21 extracts a feature value from frame N+T, the latest frame.
  • In step 105, the prediction unit 22 receives the feature values of frame N-1, frame N-2, ..., frame N-M from the frame feature extraction unit 21 and uses them to calculate the predicted feature values from the current frame N to frame N+T.
  • In step 106, the prediction error calculation unit 23 receives the feature values from the current frame N to frame N+T from the frame feature extraction unit 21 and the corresponding predicted values from the prediction unit 22,
  • compares the feature value of each frame with its predicted value, and calculates the prediction errors from the current frame N to frame N+T.
  • In step 107, the cut point determination unit 24 receives the prediction errors from the current frame N to frame N+T from the prediction error calculation unit 23, compares each prediction error with a threshold given as input, and determines whether each prediction error from the current frame N to frame N+T is larger than the threshold. If all the prediction errors from the current frame N to frame N+T are larger than the threshold, the cut point determination unit 24 determines in step 108 that the current frame N is a cut point and outputs its frame number as the cut point detection result.
  • If any of the prediction errors from the current frame N to frame N+T is not larger than the threshold in step 107, the cut point determination unit 24 proceeds to step 110, determines that the current frame N is not a cut point, and the current frame N is updated to the next frame.
  • In step 111, it is determined whether N+T exceeds the end frame number of the moving image; if it does not, the processing from step 104 onward is repeated. The whole loop is sketched below.
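  • Putting the flowchart together, a sketch of the whole loop using the helper functions from the earlier sketches; treating multi-step prediction as feeding predictions back into the past-frame window is one possible reading of step 105, not the only one:

        def detect_cut_points(features, ar_coeffs, thresholds, T=2):
            # features: per-frame feature vectors; M = len(ar_coeffs);
            # thresholds[t] is the threshold applied at frame N+t.
            M = len(ar_coeffs)
            cuts = []
            N = M
            while N + T < len(features):
                past = [features[N - k] for k in range(1, M + 1)]
                is_cut = True
                for t in range(T + 1):
                    pred = predict_feature(past, ar_coeffs)        # step 105
                    err = euclidean_error(features[N + t], pred)   # step 106
                    if err <= thresholds[t]:                       # step 107
                        is_cut = False
                        break
                    past = [pred] + past[:-1]  # roll the window forward
                if is_cut:
                    cuts.append(N)                                 # step 108
                N += 1                                             # step 110
            return cuts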
  • As described above, this embodiment can detect cut points of a moving image with high accuracy, without excessive detection, even in scenes containing motion.
  • It also has the advantage that cut points can be detected with high accuracy, without excessive detection, even when a temporary fluctuation of the video occurs.
  • The moving image cut point detection apparatus according to the second embodiment differs from the apparatus according to the first embodiment of the present invention shown in Fig. 2 in that it further includes a threshold determination unit 25.
  • The threshold determination unit 25 determines the threshold such that, under the probability distribution of prediction errors given as input, the probability that a prediction error larger than the threshold occurs matches the rejection rate, and supplies the determined threshold to the cut point determination unit 24. Expressing the prediction error as E (E ≥ 0), the probability distribution of the prediction error as P(E), the rejection rate as R (0 ≤ R ≤ 1), and the threshold as Th, the threshold Th is determined so as to satisfy ∫[Th, ∞) P(E) dE = R, as sketched below.
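  • A sketch of this determination using an empirical distribution of prediction errors observed on training video; the quantile stands in for solving the integral over P(E):

        import numpy as np

        def determine_threshold(observed_errors, rejection_rate):
            # Choose Th so that P(E > Th) equals the rejection rate R,
            # i.e. the (1 - R) quantile of the observed errors.
            return float(np.quantile(np.asarray(observed_errors),
                                     1.0 - rejection_rate))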
  • In addition to the advantages of the apparatus according to the first embodiment, the moving image cut point detection apparatus according to the second embodiment has the advantage that a desired rejection rate can be specified.
  • The moving image cut point detection apparatus of the present invention may be realized by recording a program that implements its functions on a computer-readable recording medium
  • and having a computer read and execute the program recorded on the recording medium.
  • Here, a computer-readable recording medium refers to a recording medium such as a flexible disk, a magneto-optical disk, or a CD-ROM, or to a storage device such as a hard disk drive built into a computer system.
  • It further includes media that hold the program dynamically for a short time, such as the transmission medium or carrier wave used when the program is transmitted via the Internet, and media that hold the program for a certain period of time, such as the volatile memory inside the server computer in that case.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention concerns a moving image cut point detection device comprising a frame feature value extraction unit (21), a prediction unit (22), a prediction error calculation unit (23), and a cut point determination unit (24). The prediction unit (22) calculates a predicted value for the feature value of each frame at and after the current frame under determination, including the current frame itself, using the feature values of a predetermined number of frames preceding the current frame extracted by the frame feature value extraction unit (21). The prediction error calculation unit (23) compares the feature value of each frame at and after the current frame with the predicted value for the corresponding frame calculated by the prediction unit (22), and calculates the prediction error between the feature value and the predicted value. The cut point determination unit (24) determines whether or not each calculated prediction error satisfies a predetermined determination criterion, establishes the current frame as a cut point if the prediction errors all satisfy the criterion, and outputs the frame number.
PCT/JP2006/310706 2005-06-01 2006-05-30 Device for detecting a cut point of a moving image based on feature value prediction error WO2006129622A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007518981A JP4924423B2 (ja) 2005-06-01 2006-05-30 Device for detecting a cut point of a moving image based on feature value prediction error

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-161875 2005-06-01
JP2005161875 2005-06-01

Publications (1)

Publication Number Publication Date
WO2006129622A1 true WO2006129622A1 (fr) 2006-12-07

Family

ID=37481551

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/310706 WO2006129622A1 (fr) 2005-06-01 2006-05-30 Device for detecting a cut point of a moving image based on feature value prediction error

Country Status (2)

Country Link
JP (1) JP4924423B2 (fr)
WO (1) WO2006129622A1 (fr)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06153155A (ja) * 1992-11-09 1994-05-31 Matsushita Electric Ind Co Ltd Moving image browsing device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Akiyama M. et al.: "Gazo Kaiseki Handbook", 1st ed., 17 January 1991, pages 40, 524 *
Sato T. et al.: "MPEG2 Eizo kara no Cut-ten to Telop no Konoritsu Kenshutsuho", IEICE Technical Report, Pattern Ninshiki / Media Rikai PRMU96-93 to 103, vol. 96, no. 385, 22 November 1996 (1996-11-22), pages 47-54 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010246103A (ja) * 2009-04-01 2010-10-28 Nhn Corp Moving image duplication detection method and system
CN112735164A (zh) * 2020-12-25 2021-04-30 北京智能车联产业创新中心有限公司 Test data construction method and test method
CN112735164B (zh) 2020-12-25 2022-08-05 北京智能车联产业创新中心有限公司 Test data construction method and test method

Also Published As

Publication number Publication date
JP4924423B2 (ja) 2012-04-25
JPWO2006129622A1 (ja) 2009-01-08

Similar Documents

Publication Publication Date Title
CN112990191B (zh) Shot boundary detection and key frame extraction method based on subtitle video
JP4725690B2 (ja) Video identifier extraction device
JP5573131B2 (ja) Video identifier extraction device and method, video identifier matching device and method, and program
CN106937114B (zh) Method and device for detecting video scene changes
EP2071514A2 (fr) Background modeling for video data compression
JP5445467B2 (ja) Credit information section detection method, credit information section detection device, and credit information section detection program
EP2165525A1 (fr) Method and apparatus for processing moving images
KR101281850B1 (ko) Video descriptor generation device
JP5644505B2 (ja) Matching weight information extraction device
JP2014110020A (ja) Image processing device, image processing method, and image processing program
US20050002569A1 (en) Method and apparatus for processing images
JP4620126B2 (ja) Video identification device
Chen et al. Modelling of content-aware indicators for effective determination of shot boundaries in compressed MPEG videos
KR101667011B1 (ko) Apparatus and method for detecting scene changes in stereoscopic video
WO2006129622A1 (fr) Device for detecting a cut point of a moving image based on feature value prediction error
JP4167245B2 (ja) Digital video processing method and apparatus
US20090268822A1 (en) Motion vector detection by stepwise search
JP2009049667A (ja) Information processing apparatus, processing method thereof, and program
JP4979029B2 (ja) Scene division device for moving image data
JP4662169B2 (ja) Program, detection method, and detection device
JP2006260237A (ja) Method and apparatus for extracting specific scenes by comprehensive determination using Mahalanobis distance

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007518981

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06756702

Country of ref document: EP

Kind code of ref document: A1