WO2003053231A2 - Creation of an interest profile of a person with the aid of a neurocognitive unit - Google Patents
Creation of an interest profile of a person with the aid of a neurocognitive unit
- Publication number
- WO2003053231A2 (application PCT/DE2002/004604)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- person
- arrangement according
- neurocognitive
- recognition unit
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
Definitions
- the gaze direction of the person is determined using a saccade tracker.
- the objects located in the detected viewing direction, for example text passages or images, are assigned to the viewing direction of the person.
- the object of the invention is to expand the possibilities for the automated creation of a person's interest profile.
- the Internet and e-business activities should also be taken into account.
- the arrangement for creating an interest profile of a person accordingly has gaze direction detection means for detecting the gaze direction of the person. Furthermore, it has a neurocognitive unit that analyzes the object located in the detected gaze direction of the person on the basis of visual information about the object.
- the visual information is supplied to the neurocognitive unit for the analysis.
- Analyzing includes, for example, recognizing what the object is and where it is. Recognizing what the object is corresponds to a classification in which the object is assigned to one of several classes, such as cars, trees, etc.
- Intentions of the person can be estimated directly from their viewing behavior on websites.
- the interest profile is thus obtained from a much more primary data pool than mere surfing behavior.
- An iterative estimation process is used to determine and save the person's preferences or current focus of interest from the content of the sections of the website viewed.
- the neurocognitive unit preferably contains an object representation subunit in which the object is represented.
- the object representation subunit is designed to use an adaptive multi-scale feature representation for the object. This is preferably done by a wavelet transformation, in particular a Gabor wavelet transformation. In a bottom-up analysis, the object representation subunit builds the multi-scale representation of typical image contents from a large number of objects with typical contents, for example images extracted randomly from the Internet.
- the representation combines the advantages of a sparse, distributed code with those of a compact code.
- a sparse, distributed code represents the important features at its length scale with the help of specialized nodes in a high-dimensional space and can thus capture complex statistical structures in the data.
- a compact code strives for representation in as few dimensions as possible, i.e. data compression.
- the training of the object representation subunit of the neurocognitive unit is carried out by means of a learning rule which maximizes the sparseness of the code over the objects under the condition of an optimal object reconstruction.
- the object representation subunit has nodes; compactness is achieved by neglecting the least active nodes at each level.
- the next more global representation is obtained from the previous representation using the same principle.
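- As an illustration of this kind of training (a minimal sketch under assumptions, not the patent's exact learning rule), the following sparse-coding step encourages sparseness through an L1 penalty under a reconstruction constraint and enforces compactness by discarding the least active coefficients; the dictionary `D`, penalty weight `lam` and step sizes are illustrative choices.

```python
import numpy as np

def sparse_code_step(X, D, lam=0.1, lr_a=0.01, lr_d=0.001, n_inner=50, k=16):
    """One training step: infer sparse activations A for image patches X
    (rows = flattened patches), then update the dictionary D so that X ~ A @ D.
    Sparseness is encouraged by an L1 penalty; compactness by keeping only
    the k most active coefficients per patch ("neglect the least active nodes")."""
    A = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_inner):                          # infer codes by gradient descent
        grad_A = -(X - A @ D) @ D.T + lam * np.sign(A)
        A -= lr_a * grad_A
    drop = np.argsort(np.abs(A), axis=1)[:, :-k]      # indices of the least active nodes
    np.put_along_axis(A, drop, 0.0, axis=1)
    D += lr_d * A.T @ (X - A @ D)                     # improve reconstruction
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-9
    return A, D

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))                    # stands in for 8x8 image patches
D = rng.standard_normal((128, 64))                    # over-complete feature dictionary
for _ in range(10):
    A, D = sparse_code_step(X, D)
```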
- the neurocognitive unit preferably has a recognition subunit with an object recognition unit that analyzes what the object is.
- the object recognition unit is preferably designed in such a way that it uses prototypical feature vectors for classes of typical objects.
- the recognition subunit also has a location recognition unit that analyzes where the object is.
- the recognition subunit of the neurocognitive unit is based on the recurrent, multiareal visual signal processing in the mammalian visual system. Accordingly, the object recognition unit and the location recognition unit of the recognition subunit are preferably designed separately and interact only with the object representation subunit. During a training phase, neural connections, which later operate in the feed-forward direction, are trained from the object representation subunit to the object recognition unit and the location recognition unit. They then store prototypical feature vectors for classes of typical objects and typical locations in a recurrent network.
- the trained connections are subsequently used as feed-back connections, i.e. in the opposite direction to the training phase, and can, starting from a given representation of the object in the object representation subunit, determine its most likely location and the confidence of its identity through an iterative calculation process in the recurrent network.
- the visual information about the object can be obtained either by evaluating a display on which the object is displayed or by taking a picture in the direction of the person's view.
- the object can be displayed on a display and the visual information about the object is determined by evaluating the display. This procedure is particularly recommended if a person's interest profile is to be created with regard to the websites viewed.
- the objects are, in particular, images displayed on the Internet pages or elements of the images displayed on the Internet pages.
- the arrangement has recording means, in particular in the form of a camera, with which the visual information about the object can be determined by an optical recording. For this purpose, the recording means take a picture in the line of sight of the person.
- This variant is technically more complex to implement, since the recording means cannot generally be arranged directly at the person's eye, so that a parallax compensation must be carried out.
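- As a rough illustration of the parallax issue (an assumption, not taken from the patent text): for recording means mounted at a lateral offset $b$ from the eye and an object at distance $D$, the angular correction to be applied to the recording direction is approximately

$$\Delta\theta \;=\; \arctan\frac{b}{D} \;\approx\; \frac{b}{D} \quad (D \gg b),$$

so the compensation requires at least a rough estimate of the object distance.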
- an interest profile of the person can finally be created.
- the analysis preferably determines both what the object is and where it is located.
- the direction of the person's gaze is detected and an object located in the detected direction of the person's gaze is analyzed with the aid of visual information about the object with the aid of a neurocognitive unit.
- the method can be designed in accordance with the advantageous embodiments specified for the arrangement.
- a program product for a data processing system, which contains software code sections with which one of the described methods can be carried out on the data processing system, can be implemented in a suitable manner.
- a program product is understood to mean the program as a tradable product. It can exist in any form, for example on paper, on a computer-readable data medium, or distributed over a network.
- Figure 1 shows an arrangement for creating an interest profile of a person
- Figure 2 shows the structure of a neurocognitive unit
- FIG. 1 shows an arrangement for creating an interest profile for a person.
- This arrangement contains virtual reality glasses 11.
- the virtual reality glasses 11 have a display 12 on which, for example, Internet pages, films or any other content can be displayed in two and / or three dimensions.
- the person's head coordinate system is thereby fixed relatively well, and the viewing direction can be detected by gaze direction detection means 13 in the form of pupil trackers together with a miniaturized CCD camera.
- the use of gaze direction detection means 13 for only one pupil is usually sufficient.
- a movement sensor, for example in the form of a gyro sensor, is preferably arranged on the glasses 11 or otherwise on the head of the person; it measures the movements of the person's head so that, together with the pupil trackers 13, the person's line of sight is detected.
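- One plausible way to combine the head movements measured by such a gyro sensor with the eye-in-head direction from the pupil trackers 13 into a gaze direction in room coordinates is sketched below; the angle conventions and function names are illustrative assumptions, not the patent's method.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Head orientation from the gyro sensor as a rotation matrix (angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def gaze_in_world(head_yaw, head_pitch, head_roll, eye_azimuth, eye_elevation):
    """Compose head pose and eye-in-head direction into a unit gaze vector in room coordinates."""
    ce, se = np.cos(eye_elevation), np.sin(eye_elevation)
    ca, sa = np.cos(eye_azimuth), np.sin(eye_azimuth)
    eye_dir_head = np.array([ce * ca, ce * sa, se])   # eye direction in head coordinates
    return rotation_matrix(head_yaw, head_pitch, head_roll) @ eye_dir_head

print(gaze_in_world(0.1, -0.05, 0.0, 0.2, 0.0))
```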
- in other embodiments, the display takes place externally on a monitor, or the person looks at real objects in the room.
- the arrangement preferably also contains optical recording means in the form of a camera, not shown, with which visual information about the object can be determined by an optical recording.
- the optical recording means are preferably arranged in the immediate vicinity of the pupils of the person in order to be able to control the recording direction of the optical recording means directly as a function of the detected viewing direction of the person and to avoid parallax errors.
- the arrangement has an electronic display 12 which can be evaluated directly in order to determine the visual information about an object displayed on the display 12 and viewed by the person.
- the display 12, like the detection of the viewing direction, is controlled and evaluated by a data processing system 14.
- the data processing system 14 is connected to the virtual reality glasses 11 via cables or a radio link.
- the data processing system 14 thus has the viewing direction of the person and the visual information about the object located in the detected viewing direction of the person.
- the data processing system 14 has a neurocognitive unit, which analyzes the object located in the detected line of sight of the person on the basis of visual information about the object.
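- A minimal sketch of the processing loop such a data processing system 14 might run is given below; the interfaces `gaze_tracker`, `display` and `neurocognitive_unit` are hypothetical placeholders, not components specified by the patent.

```python
from collections import Counter

def run_session(gaze_tracker, display, neurocognitive_unit, steps=1000):
    """Illustrative main loop: detect the gaze, extract the viewed region of the
    display, let the neurocognitive unit analyse it, and count viewed objects."""
    interest_counts = Counter()
    for _ in range(steps):
        x, y = gaze_tracker.current_gaze()              # detected viewing direction on the display
        patch = display.region_around(x, y, size=256)   # visual information about the viewed object
        label, location, confidence = neurocognitive_unit.analyse(patch)
        if confidence > 0.5:                            # assumed acceptance threshold
            interest_counts[label] += 1
    return interest_counts.most_common()                # raw material for the interest profile
```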
- the structure of the neurocognitive unit will now be described in detail with reference to Figure 2, which is a schematic diagram of the unit.
- the neurocognitive unit 20 has an object representation subunit 21 that conceptually mimics the early visual areas of mammals, such as V1 and V2.
- the neurocognitive unit 20 has a recognition subunit, which in turn has an object recognition unit 22 and a location recognition unit 23.
- the recognition subunit is based on the recurrent, multiareal visual signal processing in the mammalian visual system.
- the object recognition unit 22 analyzes what the object is and simulates the ventral stream in a simplified form.
- the location recognition unit 23 analyzes where the object is and simulates the dorsal stream in a simplified form.
- the object representation subunit 21 contains orientation-selective complex cells and hypercolumns, as can also be found in the primary visual cortex.
- the object recognition unit 22 contains neural pools which represent specific classes of objects, as happens in the inferotemporal cortex.
- the location recognition unit 23 contains a map which reproduces the positions in retinotopic coordinates.
- the object representation subunit 21 and the object recognition unit 22 are connected by symmetric connections 24, which are formed by Hebbian learning.
- the object representation subunit 21 and the location recognition unit 23 are connected by symmetric, localized connections 25, which are modeled by Gaussian weights.
- a competitive interaction within each unit is mediated by inhibitory pools.
- the connections between the units are excitatory; a preset (bias) is applied to shape the competitive dynamics in each module.
- the concentration of neural activities in an individual pool in the object recognition unit 22 corresponds to the recognition of an object.
- Concentration of neural activities in a small number of neighboring pools in the location recognition unit 23 corresponds to a localization of the object.
- the object representation subunit 21 provides a buffer on which the object recognition unit 22 and the location recognition unit 23 interact.
- the object representation subunit 21 receives visual information about the object, comparable to a retinal input, and carries out a Gabor wavelet transformation of the input visual information.
- Each pool of neurons encodes a specific spatial frequency and orientation at a specific location of the retinotopic map.
- the responses of the neurons are modeled using complex cell responses.
- the excitatory neuronal pools inhibit one another via a competitive interaction, i.e. lateral inhibition.
- Competitive dynamics are mediated by a number of inhibitory neural pools in each unit.
- the object recognition unit 22 receives a top-down preset that specifies the object class.
- the location recognition unit 23 receives a top-down preset that specifies the spatial localization.
- the object representation subunit 21 is connected to the object recognition unit 22 and the location recognition unit 23 via feed-forward and feed-back connections.
- the feed-forward connections introduce bottom-up inputs into each unit, while the feed-back connections provide top-down presets for each excitatory neural pool in the object representation subunit 21.
- the competition in the object representation subunit is carried out among neurons that encode both location and object information.
- the location recognition unit 23 abstracts location information and mediates competition on the spatial level.
- the object recognition unit 22 abstracts information from classes of objects and mediates a competition at the level of the classes of objects.
- the activities of the neural pools are modeled using the mean-field approximation. Many areas of the brain organize groups of neurons with similar properties in columns or fields, such as the orientation columns in the primary visual cortex and in the somatosensory cortex. These groups of neurons, called pools, are composed of a large and homogeneous population of neurons that receive similar external input, are mutually coupled, and are likely to function together as a unit. Such pools can form a more robust processing and coding unit because of their pooled (population-averaged) activity.
- each pool is modeled by an element.
- the activity of each pool i is characterized by two variables: its activation or instantaneous mean firing rate x_i and an input current I_i, which is characteristic of all cells in the pool and which fulfills the following input/output relationship:
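- The input/output relationship itself is not reproduced in this text; in comparable mean-field models of pools of integrate-and-fire-like neurons it typically takes the following form (an assumption, with the firing threshold normalized to one), where $T_r$ is the refractory period and $\tau$ the membrane time constant:

$$x_i \;=\; F(I_i) \;=\; \frac{1}{T_r - \tau\,\ln\!\left(1 - \dfrac{1}{\tau\, I_i}\right)}.$$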
- the excitatory cell pools in each unit compete with one another; this competition is mediated by an inhibitory pool that receives excitatory input from all excitatory pools and returns uniform inhibitory feedback to all of them.
- the temporal development of the activity of one of the excitatory pools as a function of the inhibitory and excitatory inputs to the pool is given by the following dynamic equation:
- the first term is a decay term due to habituation.
- the second term describes the recurrent self-excitation that maintains the activity of the neural pool. It mediates the cooperative, excitatory interaction within the pool.
- the third term is the inhibitory input from the inhibitory pool.
- I_i^E is the specific excitatory bottom-up input to pool i from a lower cortical unit, and I_i^A is the specific excitatory top-down preset input to the pool from a higher cortical module.
- I_0 and ν are the diffuse spontaneous background input and an additive noise with Gaussian mean zero.
- the inhibitory pool integrates information from all the excitatory pools in the unit and returns non-specific inhibition uniformly to all the excitatory pools. It mediates the normalization of the lateral inhibition, i.e. the competitive interaction between the excitatory pools in the module. Its dynamics are given by:
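- The dynamic equations are likewise not reproduced in this text. A generic form consistent with the term-by-term description above (decay, recurrent self-excitation, inhibitory input, bottom-up input $I_i^{E}$, top-down preset $I_i^{A}$, background input $I_0$ and noise $\nu$) would be, as an assumption with illustrative coupling constants $a, b, c, d$:

$$\tau\,\frac{\partial I_i(t)}{\partial t} \;=\; -I_i(t) \;+\; a\,F\!\big(I_i(t)\big) \;-\; b\,F\!\big(I^{I}(t)\big) \;+\; I_i^{E}(t) \;+\; I_i^{A}(t) \;+\; I_0 \;+\; \nu,$$

and, for the inhibitory pool that integrates the activity of all excitatory pools of the unit and feeds back uniform inhibition,

$$\tau^{I}\,\frac{\partial I^{I}(t)}{\partial t} \;=\; -I^{I}(t) \;+\; c\,\sum_{j} F\!\big(I_j(t)\big) \;-\; d\,F\!\big(I^{I}(t)\big).$$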
- the object representation subunit 21 contains a 33 × 33 grid of hypercolumns. Each of the hypercolumns contains 24 elements that represent 24 complex-cell pools with 8 different orientations and three scales. The complex pools are modeled by Gabor wavelet filters. The wavelengths of the Gabor filters of the three scales are 8 pixels, 16 pixels and 32 pixels.
- the retinotopic map of the object representation subunit 21 covers a visual area of 256 × 256 pixels.
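- A minimal sketch of such a Gabor-based complex-cell front end with the parameters just stated (a 33 × 33 grid of hypercolumn centres, 8 orientations, wavelengths of 8, 16 and 32 pixels, a 256 × 256 pixel input) is given below; the kernel size and envelope width are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(wavelength, theta, size=None, sigma_factor=0.5):
    """Quadrature pair of Gabor kernels (even and odd) for one scale and orientation."""
    size = size or int(2 * wavelength)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    sigma = sigma_factor * wavelength
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (envelope * np.cos(2 * np.pi * xr / wavelength),
            envelope * np.sin(2 * np.pi * xr / wavelength))

def complex_cell_responses(image, wavelengths=(8, 16, 32), n_orient=8, grid=33):
    """Sample |even + i*odd| Gabor responses on a grid x grid lattice of
    hypercolumn centres, giving 24 pool activations per hypercolumn."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    responses = np.zeros((grid, grid, len(wavelengths), n_orient))
    for si, lam in enumerate(wavelengths):
        for oi in range(n_orient):
            even, odd = gabor_kernel(lam, np.pi * oi / n_orient)
            half = even.shape[0] // 2
            padded = np.pad(image, half, mode="reflect")
            for gy, cy in enumerate(ys):
                for gx, cx in enumerate(xs):
                    patch = padded[cy:cy + even.shape[0], cx:cx + even.shape[1]]
                    responses[gy, gx, si, oi] = np.hypot((patch * even).sum(),
                                                         (patch * odd).sum())
    return responses

image = np.random.rand(256, 256)              # stands in for the retinal input
print(complex_cell_responses(image).shape)    # (33, 33, 3, 8) -> 24 pools per hypercolumn
```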
- the fourth term is the bottom-up input
- the fifth term is the top-down feed-back from the location recognition unit 23
- the sixth term is the top-down feed-back from the object recognition unit 22.
- these feed-back terms are described together with the object recognition unit 22 and the location recognition unit 23.
- I_0 and ν are spontaneous input and noise with Gaussian mean zero. In the implementation presented here there is an excitatory pool for every spatial location; the associated inhibition, however, is organized per scale, as described below.
- here, n denotes the scale index.
- the first term is a decay term
- the second term mediates self-excitation among the members of the pool
- the third term is a function of the sum of the activities of all excitatory pools of a given scale in the whole unit.
- the inhibitory pool receives input from neurons on a particular scale and inhibits neurons on the same scale.
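- The corresponding equation is not reproduced here; under the same assumptions as the generic pool dynamics sketched above, a per-scale inhibitory pool consistent with the three terms just described would be (signs and constants $a_I$, $c$ are assumptions):

$$\tau^{I}\,\frac{\partial I^{I,n}(t)}{\partial t} \;=\; -\,I^{I,n}(t) \;+\; a_I\,F\!\big(I^{I,n}(t)\big) \;+\; c \sum_{i\,\in\,\text{scale } n} F\!\big(I_i(t)\big).$$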
- the location recognition unit 23 reproduces the spatial localization and provides the spatial attention selection. It is implemented by a grid of 33 × 33 pools, each of which receives an input from the object representation subunit 21.
- the connection between a pool I_ij^DM in the location recognition unit 23 and a pool I_mlpq^EM of the object representation subunit 21 is symmetric and is modeled by Gaussian weights:
- the output current I_ij of a specific pool is also used to denote the pool itself.
- the dynamic equation describing the output-current activity of the excitatory pools in the location recognition unit 23 is defined in practically the same way as in the object representation subunit 21:
- the corresponding attention term is the top-down attention preset that is applied to the pool of the location recognition unit 23.
- the feed-forward input I_ij^DM-EM(t) from the object representation subunit 21 to the pool of the location recognition unit 23 at location (i, j) is given by:

$$I_{ij}^{DM\text{-}EM}(t) \;=\; \sum_{m,l,p,q} W_{ij,mlpq}\, F\!\big(I^{EM}_{mlpq}(t)\big).$$
- the feed-back from the location recognition unit 23 to the object representation subunit 21 is likewise given by the following Gaussian connection:
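- The Gaussian connection formula itself is not reproduced in this text. Assuming that $(p, q)$ denote the retinotopic position of the hypercolumn and $(m, l)$ its scale and orientation, a symmetric weight of the described kind would be

$$W_{ij,mlpq} \;=\; A\,\exp\!\left(-\,\frac{(i-p)^2 + (j-q)^2}{2\sigma^2}\right),$$

used in both directions because the connection is symmetric; the amplitude $A$ and width $\sigma$ are free parameters of this sketch.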
- the object recognition unit 22 contains only 5 pools of neurons in the present implementation. Each pool of the object recognition unit 22 is fully connected with each pool in the object representation subunit 21. Each of these pools represents a specific object.
- the memory of a specific object class c is stored in the connection weights w_cmlpq between the pool I_c of the object recognition unit 22 and the pools I_mlpq of the object representation subunit 21 and is trained by supervised Hebbian learning in the following manner: a top-down object attention preset is imposed on the pool c of the object recognition unit 22, and a top-down attention preset is imposed on the pool of the location recognition unit 23 that indicates the retinotopic location at which the object appears in the overall scene shown on the display 12.
- the active pool of the location recognition unit 23 highlights the corresponding hypercolumns in the object representation subunit 21.
- the co-activation of the corresponding part of the object representation subunit 21 and of the object recognition unit 22 reinforces the association between the pool c of the object recognition unit and the image pattern represented in the object representation subunit 21. With each presentation of the stimulus and the top-down preset signal, the system is allowed to settle into a stable state. After convergence, all relevant EM-VM connections are updated using the following Hebbian learning rule:
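- The learning rule itself is not reproduced in this text. A supervised Hebbian update of the described kind would be, as an assumed form in which the weight grows with the coincident post-convergence activity of the object pool c and the representation pool (m, l, p, q):

$$\Delta w_{c,mlpq} \;=\; \eta\,F\!\big(I_c\big)\,F\!\big(I^{EM}_{mlpq}\big),$$

with a learning rate $\eta$, typically followed by a normalization of the weight vector of pool $c$.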
- the object recognition unit 22 is similar to the other units in that it has a number of excitatory pools and an inhibitory pool that mediates the competition between the excitatory pools.
- the dynamics of a pool of the object recognition unit 22 are given by:
- the feed-forward input from the object representation subunit 21 to pool c of the object recognition unit 22 is given by:
- the feedback from the object recognition unit 22 to the object representation subunit 21 is also established by symmetrical, reciprocal connections:
- the dynamics of the inhibitory pool are given by:
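- These formulas are likewise not reproduced here; under the notation used above, plausible forms (assumptions of this sketch) are a feed-forward input through the trained weights, a feed-back through the same symmetric weights, and an inhibitory pool of the generic form already given:

$$I_c^{\,VM\text{-}EM}(t) \;=\; \sum_{m,l,p,q} w_{c,mlpq}\,F\!\big(I^{EM}_{mlpq}(t)\big), \qquad I_{mlpq}^{\,EM\text{-}VM}(t) \;=\; \sum_{c} w_{c,mlpq}\,F\!\big(I_c(t)\big).$$

The superscripts EM and VM follow the "EM-VM connections" wording used above and are otherwise assumed labels for the object representation subunit 21 and the object recognition unit 22.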
- the interest profile consists of a list of objects that the person viewed particularly often, sorted by and annotated with the viewing frequency. If objects are viewed in the form of images or parts of images, the stored object consists of a feature vector of the neurocognitive unit that characterizes the visual content of the viewed image part, for example in order to identify, i.e. recognize, the image in question.
- codebook vectors, i.e. prototypical representatives of a class of similar viewed content, are created by a cluster analysis and annotated with the frequency of the cluster members.
- the frequency of viewing is also stored for entire images together with their complete representation, i.e. their feature vector.
- the list and the codebook vectors are updated throughout the session. A change of interest of the person can be detected from the increased appearance of new codebook vectors, for example when the person looks at scientific line drawings instead of landscape images, and, if necessary, used to create a new interest profile.
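- A minimal sketch of how such a continuously updated codebook could be maintained (an illustrative online clustering under assumptions; the distance threshold and running-mean update are choices of this sketch, not the patent's cluster analysis):

```python
import numpy as np

class InterestProfile:
    """Maintain codebook vectors (prototypes of viewed content) and viewing
    frequencies by simple online clustering of the feature vectors produced
    by the neurocognitive unit."""

    def __init__(self, distance_threshold=1.0):
        self.codebook = []                    # list of prototype feature vectors
        self.counts = []                      # viewing frequency per prototype
        self.threshold = distance_threshold   # assumed clustering radius

    def observe(self, feature_vector):
        v = np.asarray(feature_vector, dtype=float)
        if self.codebook:
            dists = [np.linalg.norm(v - c) for c in self.codebook]
            k = int(np.argmin(dists))
            if dists[k] < self.threshold:
                # running mean keeps the prototype representative of its cluster
                self.counts[k] += 1
                self.codebook[k] += (v - self.codebook[k]) / self.counts[k]
                return k
        self.codebook.append(v.copy())        # new kind of content -> new codebook vector
        self.counts.append(1)
        return len(self.codebook) - 1

    def ranked(self):
        """Codebook vectors sorted by viewing frequency, most viewed first."""
        order = np.argsort(self.counts)[::-1]
        return [(self.counts[i], self.codebook[i]) for i in order]

profile = InterestProfile(distance_threshold=0.8)
for vec in np.random.rand(20, 16):            # stands in for feature vectors of viewed images
    profile.observe(vec)
print(len(profile.codebook), profile.ranked()[0][0])
```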
- the person's behavior can be used directly to estimate his current focus of interest and to predict his immediate intentions.
- the use of a neurocognitive unit based on the principles of biological signal processing enables a generalization of the interest profile of the person to image content.
- a keyword search with a conventional search engine can be used to search for websites with similar content, store them in a memory for the pages and make them available to the person.
- the presentation of existing products in the e-commerce area can be optimized by evaluating the interest profile.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Physics & Mathematics (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Public Health (AREA)
- Biophysics (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Biomedical Technology (AREA)
- Ophthalmology & Optometry (AREA)
- Entrepreneurship & Innovation (AREA)
- Molecular Biology (AREA)
- Game Theory and Decision Science (AREA)
- Human Computer Interaction (AREA)
- Economics (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Percussion Or Vibration Massage (AREA)
Abstract
The invention relates to the creation of an interest profile of a person with the aid of a neurocognitive unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10163002.6 | 2001-12-20 | ||
DE2001163002 DE10163002A1 (de) | 2001-12-20 | 2001-12-20 | Erstellen eines Interessenprofils einer Person mit Hilfe einer neurokognitiven Einheit |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2003053231A2 true WO2003053231A2 (fr) | 2003-07-03 |
WO2003053231A3 WO2003053231A3 (fr) | 2003-09-25 |
Family
ID=7710197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DE2002/004604 WO2003053231A2 (fr) | 2001-12-20 | 2002-12-16 | Etablissement du profil d'interet d'une personne a l'aide d'une unite neurocognitive |
Country Status (2)
Country | Link |
---|---|
DE (1) | DE10163002A1 (fr) |
WO (1) | WO2003053231A2 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102019000201A1 (de) | 2019-01-15 | 2020-07-16 | Michael Skopnik | System zur Regelung von Abläufen als Funktion einer Intensität, bevorzugt einer Interaktionsintensität oder Zeitintensität und ein jeweiliges darauf gerichtetes computerimplementiertes Verfahren |
DE202019000168U1 (de) | 2019-01-15 | 2019-06-06 | Michael Skopnik | Vorrichtung für ein System zur Regelung von Abläufen als Funktion einer lntensität, bevorzugt einer Interaktionsintensität oder Zeitintensität |
EP3912122A1 (fr) | 2019-01-15 | 2021-11-24 | Clickle GmbH | Système de régulation de déroulement en tant que fonction d'une intensité, de manière préférée d'une intensité d'interaction ou d'intensité de temps, et procédé respectif mis en oeuvre par ordinateur s'y conformant |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6293904B1 (en) * | 1998-02-26 | 2001-09-25 | Eastman Kodak Company | Management of physiological and psychological state of an individual using images personal image profiler |
WO2001086585A1 (fr) * | 2000-05-09 | 2001-11-15 | Siemens Aktiengesellschaft | Procede et dispositif de determination d'un objet dans une image |
-
2001
- 2001-12-20 DE DE2001163002 patent/DE10163002A1/de not_active Ceased
-
2002
- 2002-12-16 WO PCT/DE2002/004604 patent/WO2003053231A2/fr not_active Application Discontinuation
Also Published As
Publication number | Publication date |
---|---|
WO2003053231A3 (fr) | 2003-09-25 |
DE10163002A1 (de) | 2003-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE112017002799B4 (de) | Verfahren und system zum generieren multimodaler digitaler bilder | |
DE112012001984B4 (de) | Integrieren von Video-Metadaten in 3D-Modelle | |
DE69217047T2 (de) | Verbesserungen in neuronalnetzen | |
DE10306294B4 (de) | Evaluierung von Benutzerfreundlichkeitskenngrößen für ein Dialog-Anzeigegerät | |
DE102017220307B4 (de) | Vorrichtung und Verfahren zum Erkennen von Verkehrszeichen | |
DE102019008142A1 (de) | Lernen von Darstellungen unter Nutzung gemeinsamer semantischer Vektoren | |
DE69033681T2 (de) | Kategorisierungsautomatismus, der neuronale gruppenauswahl mit wiedereingabe benutzt | |
DE69730811T2 (de) | Anlage zur Bilderkennung | |
DE112016001796T5 (de) | Feinkörnige bildklassifizierung durch erforschen von etiketten von einem bipartiten graphen | |
DE112005000569T5 (de) | System und Verfahren zur Patientenidentifikation für klinische Untersuchungen unter Verwendung von inhaltsbasiertem Erlangen und Lernen | |
EP3332284A1 (fr) | Procédé et dispositif d'acquisition de données et d'évaluation de données d'environnement | |
EP3847578A1 (fr) | Procédé et dispositif de classification d'objets | |
EP3557487B1 (fr) | Génération de données de validation au moyen de réseaux génératifs contradictoires | |
DE102019209644A1 (de) | Verfahren zum Trainieren eines neuronalen Netzes | |
DE102017219282A1 (de) | Verfahren und Vorrichtung zum automatischen Erzeugen eines künstlichen neuronalen Netzes | |
DE102020129018A1 (de) | Tiefe benutzermodellierung durch verhalten | |
DE102019107064A1 (de) | Anzeigeverfahren, elektronische Vorrichtung und Speichermedium damit | |
EP2679147A2 (fr) | Procédé et dispositif de codage de données de suivi de l'oeil et du regard | |
DE102018100315A1 (de) | Erzeugen von Eingabedaten für ein konvolutionelles neuronales Netzwerk | |
DE10306304B4 (de) | Vorrichtung zur Unterstützung der Benutzerfreundlichkeits-Evaluierung | |
DE102020122979A1 (de) | Verfahren zum Bereitstellen eines komprimierten, robusten neuronalen Netzes und Assistenzeinrichtung | |
WO2003053231A2 (fr) | Etablissement du profil d'interet d'une personne a l'aide d'une unite neurocognitive | |
EP1359539A2 (fr) | Modèle neurodynamique de traitement d'informations visuelles | |
DE102020213253A1 (de) | Computerimplementierte konsistente klassifikationsverfahren | |
CN112749797A (zh) | 一种神经网络模型的剪枝方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): CN JP US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |