WO2005004084A1 - Knowledge acquisition system, apparatus and processes - Google Patents
- Publication number
- WO2005004084A1 (PCT/AU2003/000876)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- user
- ear
- signal
- presented
- Prior art date
- 2003-07-08
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/04—Electrically-operated educational appliances with audible presentation of the material to be studied
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor; Earphones; Monophonic headphones
Definitions
- The present invention relates to apparatus, systems and processes for enhancing specific forms of learning.
- Background art. Many devices and processes have been proposed over the past 50 years to enhance or improve learning processes.
- One line of such processes purports to rely on neurophysiology, and in particular on certain aspects of the division of functions between the left and right hemispheres of the brain.
- An example of this is so-called phonics and similar systems, in which the retention of intellectual content is asserted to be enhanced by the simultaneous playing of certain types of music to both ears while learning.
- One aspect of the present invention relates to presenting information via a headset or similar arrangement to a user, in which the left and right ears receive entirely distinct information.
- The discrete left and right ear signals are not in the form of stereo sound, nor intended to create some common auditory effect.
- The right ear receives preselected intellectual content, whilst the left ear receives non-intellectual content, for example music.
- The left ear content may be mixed with aural tags or labels, or include some intellectual content.
- In one form, the left side is fed only with aural tags arranged in a patterned way.
- The left and right ear signals are in each implementation distinct signals.
- In another form, the left ear is subjected to predetermined intellectual content, whilst the right ear is connected to a microphone, so that the user hears the left ear content, repeats it aloud, and this is fed to the right ear.
- The content for each ear is generated by processing the desired information to produce the two distinct sound channels. It is theorised by the inventor that all intellectual information is processed by the brain's auditory systems, whether it is read or heard aloud. The brain processes, for example, a visually read word into a series of sounds, which are then recognised. It is well established that the different hemispheres of the brain process information in different and in some respects complementary ways.
- Logical intellectual content is generally processed by the left brain, and intuitive, creative and emotional content by the right brain. It is further theorised by the inventor that, when acquiring information to be learned by reading or hearing, the right and left brains become distracted and so effectively unable to function cooperatively when the content, particularly audible content, is boring, linear, monologic or monotonous. It is the present inventor's contention that applying the proper sound stimulation to each hemisphere can assist in the acquisition of discrete information.
- The right ear is functionally connected to the left brain, so that intellectual information in the first instance (for example the names of the countries in South America) is supplied to the right ear.
- The right brain may become distracted or, more generally, act to trigger a process that seeks more interesting input, and therefore detract from primary acquisition processing and effective recall of the information. It is further believed that the timing and pace of the stimulation should be varied to assist in this process. Accordingly, by providing a suitable discrete and appropriate stimulus to each ear, especially non-linear or varied input, the distraction impulse is reduced, and so neural information processing and recall are improved. It is important that the ears receive the intended content, and not a mixture of left and right ear content delivered over, say, a speaker system in a room. The use of headphones or similar devices is preferred, in order to achieve the desired separate content.
- Figure 1 is a general block diagram of one form of the inventive system.
- Figure 2 is a more detailed block diagram of the processing operations.
- Figure 3 is a block diagram illustrating signal synthesis.
- Figure 4 is a block diagram showing a second implementation.
- Figure 5 is a timing graph.
Detailed Description of the Drawings
- The present invention will be described with reference to various practical implementations.
- Figure 1 illustrates the general arrangement of one embodiment of the present invention.
- A personal computer, generally designated 20, includes a display 22 and keyboard 23. This allows the desired intellectual content to be input.
- The data may be text, for example a list of the names of the countries of South America. The data will be explained in more detail below.
- The data is then converted to speech, using a text-to-speech converter TTS 24.
- The speech data is then sent to a designated website for processing.
- The website returns a coaural data set which includes non-linear stimulus audio 28, intended as a left ear signal 26, and intellectual content 29, intended as an audio stimulus 27 for the right ear.
- This coaural data 21 is then sent back to the PC 20. This may be a real-time or delayed process.
- The audio data may be in any suitable form. For example, it may be in the MP3 format widely used for portable music players, or any suitable analogue or digital format.
- The coaural data 21 is preferably downloaded onto a medium suitable for an audio player 13.
- The audio player then reproduces the coaural signal as discrete signals to the left and right headphones 12, 11.
- Alternatively, the coaural signal could be directly output to speakers from PC 20.
- The PC 20 could, in a suitable implementation, contain all the software necessary to compile the coaural signal.
- A dedicated computer could be used to carry out the required processing and produce an audio signal on suitable media.
- Essentially all functionality could be carried out at a website or on a networked remote server, with no substantial local software being required. It is also contemplated that, in addition to fully user-defined content as described above, suitable pre-defined data could be made available for known subject matter.
- In that case, the step of producing the coaural data from the subject matter input would already have been performed when the user selects the desired data.
- The pre-defined data may, for example, be stored on a website or on storage media, and "State geography syllabus year 8" may be selected.
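By way of illustration only, and not forming part of the disclosure, the twin discrete channel dataset can be sketched in Python using only the standard library. The tone generator stands in for real content (speech on the right, music on the left); the sample rate, duration and frequencies are illustrative assumptions.

```python
import math
import struct
import wave

RATE = 8000  # sample rate in Hz (illustrative)

def tone(freq_hz, seconds):
    """Generate a mono sine tone as a list of 16-bit sample values."""
    n = int(RATE * seconds)
    return [int(20000 * math.sin(2 * math.pi * freq_hz * i / RATE)) for i in range(n)]

def write_coaural(path, left, right):
    """Interleave two independent mono streams into one stereo WAV file.

    Unlike a stereo recording, the left and right channels here carry
    unrelated content: they are two separate signals, not one sound field.
    """
    n = min(len(left), len(right))
    frames = b"".join(struct.pack("<hh", left[i], right[i]) for i in range(n))
    with wave.open(path, "wb") as w:
        w.setnchannels(2)   # discrete left and right channels
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(RATE)
        w.writeframes(frames)
    return n

# Left ear: a low "non-intellectual" stimulus tone standing in for music;
# right ear: a distinct tone standing in for the spoken content track.
n = write_coaural("coaural.wav", tone(220, 0.5), tone(660, 0.5))
```

Played through headphones, such a file delivers entirely unrelated material to each ear, which is the essential property of the coaural signal.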
- Figure 2 describes in more detail the process by which the coaural data is produced.
- Content 30 is input to PC 20. This is then sent via network 32 to server 33. This may be via any suitable network, for example the internet, a dial-up connection, or even an offline mechanism.
- The content is preferably input as text into PC 20. However, in alternative implementations the content could be a spoken audio signal, or any other input which the server 33 is adapted to process. In this implementation, the text is converted to speech by TTS 24 at the server.
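The division of labour between PC 20 and server 33 might be sketched as follows; `text_to_speech` and `pick_left_stimulus` are hypothetical stand-ins for TTS 24 and the audio bank 35, not functions disclosed in the specification.

```python
def text_to_speech(text):
    """Placeholder for TTS 24: a real system would return synthesized
    audio; here we return a token list standing in for spoken units."""
    return ["spoken:" + w for w in text.split()]

def pick_left_stimulus(n_units):
    """Placeholder for the bank of preselected audio 35: one
    non-intellectual unit (music or tag) per intellectual unit."""
    return ["music"] * n_units

def server_build_coaural(text):
    """Server 33 (sketch): convert the text, pair it with a left-ear
    stimulus, and return the coaural dataset as two discrete channels."""
    right = text_to_speech(text)           # intellectual content, right ear
    left = pick_left_stimulus(len(right))  # non-intellectual content, left ear
    return {"left": left, "right": right}

coaural = server_build_coaural("Brazil Chile Peru")
```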
- A voice modelling system 38 is used to enhance, modulate and add expression and variety to the TTS signal, or to other human or computer-generated inputs passing through server 33, as a means of increasing the attention and engagement of the left brain and/or inhibiting boredom or preventing distraction of the right brain.
- A content assembly processor 42 may algorithmically select the intellectual content 37, as pre-processed by modelling system 38, and assemble it with silences, audible tags, null signals, or other features intended to add variety to the signal, as a further means of inhibiting boredom or distraction of both the right and left brains.
- The above audible tags may, in one embodiment, link sets or subsets of audible data content to other previous or later sets or subsets of audible data in the same content assembly, as a means of aiding co-location in the brain.
- Audible tags may also link sets or subsets of audible data content to visual user-interface alphanumeric or visually coded tags on the screen of PC 20 or elsewhere, whereby both aural and visual data may be identified as connected by the brain, as an aid to neurological processing and subsequent co-location in the brain.
- A bank of preselected audio material 35 is used as the basis for the left ear signal. This material may be pre-prepared spoken content, music, rhythmic sounds, or other data, as will be described below in more detail.
- A suitable clock 39 and time base algorithm 40 provide a signal to ensure that the timing of the assembled signal is appropriate to the desired user outcome.
- The assembler 42 prepares the separate left and right ear signals as a composite but twin discrete channel dataset.
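A minimal sketch of the kind of assembly performed by processor 42 under the time base: content units are interleaved with tags and variable silences to reduce predictability. The one-second units, quarter-second tags, and the 0.1 to 5 second silence range (taken from the example given later in the specification) are illustrative assumptions.

```python
import random

def assemble_channel(units, tags, gap_range=(0.1, 5.0), seed=0):
    """Content assembly processor 42 (sketch): interleave content units
    with audible tags and variable silences.

    Returns a list of (kind, payload, duration_secs) events.
    """
    rng = random.Random(seed)  # seeded, so the assembly is reproducible
    events = []
    for i, unit in enumerate(units):
        events.append(("content", unit, 1.0))
        events.append(("tag", tags[i % len(tags)], 0.25))
        # variable silence between series of units, per the varied time base
        events.append(("silence", None, round(rng.uniform(*gap_range), 2)))
    return events

schedule = assemble_channel(["Brazil", "Chile"], ["beep", "chime"])
```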
- The output signal 39 is then output to the user 40, via the mechanisms discussed above. It is emphasised that the coaural audio signal is entirely different from conventional audio signals delivered via headphones or the like. It is not a stereo or other signal which seeks to produce an illusion of depth or sound space in the user.
- The intention in general is that the signals for each ear be monaural, and that the content be quite distinct. It is not the same mono channel content in each ear.
- The nature of the signal will be more apparent from the example below; however, the separateness of the channels (that they are in fact two signals, not two aspects of one signal) is important to understanding the present invention.
- Figure 3 describes in more detail one implementation of the audio processing system.
- The required content is supplied to server 33.
- The TTS 24 processes the text content as previously discussed. However, the output is also processed to detect phonemes at detector 25.
- Audio source 44 provides a basic human voice signal, a text-converted signal, or a computer-generated voice signal, which is further converted and combined with the voice data. The purpose of this step is to enhance, modulate and add expression and variety to the voice signal, as a means of increasing attention and engagement of the left brain and/or inhibiting boredom or preventing distraction of the right brain.
- A voice tempo and pitch controller 36 inputs a rhythmic or arrhythmic time base into the digital voice stream and, in some versions, balances this with decoded voice phonemes, feeding this stream to a music compiler 37, which establishes composite voice formats and digital base tracks in preparation for voice modelling in a DSP voice processor 38.
- The voice modeller 38 modifies the digital voice stream by imposing tone, modulation, voice style, voice gender, variations in pace and delivery, and tonal and pitch variation, to enhance the voice tracks fed to it and make them more engaging to users, adding interest and variety to prevent boredom and maintain brain engagement.
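The kind of transformation applied by voice modeller 38 can be hinted at with a crude linear-interpolation resampler; a real DSP voice processor would use far more sophisticated pitch- and time-scaling methods, so this is a sketch of the principle only.

```python
def resample(samples, factor):
    """Crude pitch/tempo change by linear-interpolation resampling.

    factor > 1 shortens the signal (faster, and higher in pitch when
    played back at the original rate); factor < 1 lengthens it.
    """
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor          # fractional read position in the input
        j = int(pos)
        frac = pos - j
        a = samples[min(j, len(samples) - 1)]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + frac * b)  # linear interpolation
    return out
```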
- A further processor 45 may select from the several discrete streams of content, assemble these with silences, audible tags, null signals, or other features intended to add variety to the signal, and pass the result to processor 28.
- Figure 5 shows a representation of a time domain signal 12 of the type imposed by blocks 36 and 37 of Figure 3. The time base indicates a typical 4 beats to the bar, synchronised by MIDI time clock protocol on the data stream running between units 36 and 38 of Figure 3.
- The beat imposed is used to compile and insert melodic and/or staggered prose, song, words, numbers, null spaces and other content into a stream of content, to modulate delivery and content variation so as to enhance the track and make it more engaging to the user.
- Figure 5 further extracts section 13 as a representation of the oscilloscope screens shown at 14 and 15, where the magnified section 13 indicates subdivisions of beats and the assembly of phoneme-controlled voice as song, prose, words, numbers, null spaces and other content.
- The snap-to-grid system of MIDI phoneme assembly, represented at 14 and 15 of Figure 5 and controlled by units 36 and 37 of Figure 3, thereby assembles the mixed voice, space and related variety of content tracks.
- By "snap to grid" it is meant that the time domain signals are locked to the beat structure.
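The snap-to-grid behaviour can be illustrated as follows, assuming an illustrative tempo of 120 beats per minute and four subdivisions per beat (neither value is specified in the disclosure):

```python
def snap_to_grid(event_times, bpm=120, subdivisions=4):
    """Snap event onset times (in seconds) to the nearest beat
    subdivision, as in the MIDI-clocked assembly of units 36 and 37.

    At 120 bpm a beat lasts 0.5 s; with 4 subdivisions the grid step
    is 0.125 s.
    """
    step = 60.0 / bpm / subdivisions
    return [round(round(t / step) * step, 6) for t in event_times]

# Slightly off-grid onsets are pulled onto the beat structure.
grid = snap_to_grid([0.06, 0.52, 1.31])
```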
- Figure 4 shows an implementation using an intellectual content feed for the right ear 11 which comes from a microphone 16.
- The concept is that the user hears a spoken prompt in the left ear 12, repeats it aloud, and that spoken signal is detected by the microphone 16. The detected signal is then passed to the PC 20, and either fed directly, or with modification, to the right ear speaker 11. The signal may be modified in speed, delayed, or otherwise altered if desired.
- This is a deliberate, technology-aided form of the self-talk which is part of the neurophysiologic mechanism for information processing and consciousness. Forcing the same information to be processed through both the hearing and speaking pathways is believed to enhance recall, and the brain's perception of the relevance of the information so presented. It will be appreciated that the software and hardware requirements may be met in large part using conventional modules and packages. The actual content to be provided in various implementations will now be described with reference to the following tables.
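The delayed feedback path of Figure 4 might be sketched as a simple delay line applied to the captured microphone samples; the delay length is an illustrative assumption.

```python
from collections import deque

def delayed_feed(samples, delay_samples):
    """Feed the user's own repeated speech back to the right ear after
    a fixed delay, as in the Figure 4 microphone loop. The leading
    output is silence while the delay line fills.
    """
    line = deque([0] * delay_samples)  # delay buffer pre-filled with silence
    out = []
    for s in samples:
        line.append(s)          # newest microphone sample in
        out.append(line.popleft())  # oldest (delayed) sample out
    return out

echoed = delayed_feed([1, 2, 3, 4], delay_samples=2)
```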
- The timing of the stimuli may be presented in a variety of ways. In one form, differing or regular time periods between each series of units of intellectual and non-intellectual content may be composed and delivered, varying in spacing either randomly, pseudo-randomly or in a predetermined pattern. In another embodiment, regular spacings between each series of units of content may be used, or in other cases an irregular mixture of time spacing and signal insertion parameters.
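The three timing regimes just described (regular, pseudo-random, and patterned) might be sketched as a spacing generator; the base spacing, jitter and pattern values are illustrative assumptions.

```python
import random

def spacing_series(n, mode, base=1.0, jitter=0.5, seed=1):
    """Generate n inter-unit time spacings (in seconds) in one of the
    timing regimes described in the specification."""
    rng = random.Random(seed)  # seeded, hence pseudo-random and repeatable
    if mode == "regular":
        return [base] * n
    if mode == "random":
        return [round(base + rng.uniform(-jitter, jitter), 3) for _ in range(n)]
    if mode == "pattern":
        pattern = [base, base / 2, base * 2]  # a predetermined repeating pattern
        return [pattern[i % len(pattern)] for i in range(n)]
    raise ValueError("unknown mode: " + mode)

regular = spacing_series(4, "regular")
```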
- A beat signal is used in some of the examples below.
- A beat signal may be an audible code forming a series whereby the brain is enabled to recognise sets or sub-sets of related content elements. This is an aid to information uptake by the user, to encouraging the information to be sited in a related or linked brain locus, and so to assisting recall of knowledge in sets or subsets of related information.
- Each set of audio units may be vertically alternated within the same right or left channel field to provide variety, maintain interest and reduce the level of predictability, and so reduce boredom or distraction when listening to repeated content. For the avoidance of doubt, it is emphasised that some intellectual content may be provided on either or both channels. Some content may be best presented as a discrete list on the right ear side, with leading or trailing mnemonic labels on the left ear side.
- In one form, the left ear channel has a zero or null signal mixed with beats or random audible tags inserted.
- Some content is preferably delivered using a first, leading spoken list or assembly of information on the right channel, followed by a separately trailing or critique list on the left channel. This may be called the left critique signal format.
- In this format, the left channel is spatially configured with a varied time base, having a critique signal which replicates and so follows the right-hand signal, with beat signals or random tags interspersed. This approach is considered most suitable for content such as mathematical formulae, chemical formulae, geographic information, and complex arrangement listings such as biological organ mapping or aircraft instrument locations.
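The left critique signal format might be sketched as a schedule generator in which each item leads on the right channel and is replicated on the left channel shortly afterwards; the 3 second item slot and 1.5 second lag are illustrative assumptions.

```python
def left_critique_schedule(items, lead=0.0, lag=1.5):
    """Build a two-channel schedule in the 'left critique signal format':
    each item is spoken first on the right channel, then replicated on
    the left channel lag seconds later.

    Returns (channel, item, start_time_secs) tuples, one 3 s slot per item.
    """
    events = []
    for i, item in enumerate(items):
        t = lead + i * 3.0
        events.append(("right", item, t))       # leading spoken list
        events.append(("left", item, t + lag))  # trailing critique copy
    return sorted(events, key=lambda e: e[2])

sched = left_critique_schedule(["H2O", "CO2"])
```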
- Table 2
- Table 3 shows a typical step table for the composition and compilation of a two-discrete-channel split monaural knowledge acquisition method. As in other implementations, a varied time base is preferred. Table 3
- In another form, aural marker codes or "mnemonic aural labels" on the left ear channel are followed by a discrete, normally compiled or aurally-diverse trailing or reprised version of the same list or other information assemblage on the right channel, interspersed with zero signal feed on one or both sides, occurring at (pseudo)random spacing at time periods predetermined by experiment according to content type, but typically between 0.1 secs and 5 secs.
- This method is outlined in Table 4. This example illustrates some additional techniques.
- A space or silence occurs simultaneously in the left channel and right channel units, as exemplified by lines 2 to 6 inclusive and lines 10 to 13 inclusive of Table 4.
- This example has silent space inserted to allow reflecto-referencing, mixed with beats or random tags.
- The left and right channels are spatially configured with a varied time base, having zero signals interspersed with other left and right signals.
- Left channel (right brain) audible content either leads or critiques the right channel content.
- Non-audible content, or "silent space", allows brain reflection or referencing.
- A mixture of both audible and non-audible right and left channel content may be employed.
- Regular or irregular cadence, rhythm, beat, or musical or tonal variations may be employed in composing audible content in the left channel.
- Other variations and possibilities for timing and content are possible within the general scope of the present invention.
- In some users, all or parts of the functions of the normal right and left brain are transposed. There is a conventional, simple, user-administered test which allows this to be established, and thus the headset channels reversed.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2003243822A AU2003243822A1 (en) | 2003-07-08 | 2003-07-08 | Knowledge acquisition system, apparatus and processes |
CN03826757.8A CN1802679A (en) | 2003-07-08 | 2003-07-08 | Knowledge acquisition system, apparatus and course |
CA002531622A CA2531622A1 (en) | 2003-07-08 | 2003-07-08 | Knowledge acquisition system, apparatus and processes |
PCT/AU2003/000876 WO2005004084A1 (en) | 2003-07-08 | 2003-07-08 | Knowledge acquisition system, apparatus and processes |
EP03817307A EP1649437A1 (en) | 2003-07-08 | 2003-07-08 | Knowledge acquisition system, apparatus and processes |
US11/327,635 US20060177072A1 (en) | 2003-07-08 | 2006-01-06 | Knowledge acquisition system, apparatus and process |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/327,635 Continuation-In-Part US20060177072A1 (en) | 2003-07-08 | 2006-01-06 | Knowledge acquisition system, apparatus and process |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005004084A1 true WO2005004084A1 (en) | 2005-01-13 |
Family
ID=33556912
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2930671A1 (en) * | 2008-04-28 | 2009-10-30 | Jacques Feldman | DEVICE AND METHOD FOR VOICE REPRODUCTION WITH CONTROLLED MULTI-SENSORY PERCEPTION |
WO2009133324A1 (en) * | 2008-04-28 | 2009-11-05 | Jacques Feldmar | Device and method for vocal reproduction with controlled multi-sensorial perception |
EP2373062A3 (en) * | 2010-03-31 | 2015-01-14 | Siemens Medical Instruments Pte. Ltd. | Dual adjustment method for a hearing system |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060053132A1 (en) * | 2004-09-07 | 2006-03-09 | Steve Litzow | System and method for dynamic price setting and facilitation of commercial transactions |
CN103680231B (en) * | 2013-12-17 | 2015-12-30 | 深圳环球维尔安科技有限公司 | Multi information synchronous coding learning device and method |
CN115294990B (en) * | 2022-10-08 | 2023-01-03 | 杭州艾力特数字科技有限公司 | Sound amplification system detection method, system, terminal and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4710130A (en) * | 1986-12-24 | 1987-12-01 | Louis Aarons | Dichotic-diotic paired-association for learning of verbal materials |
US5061185A (en) * | 1990-02-20 | 1991-10-29 | American Business Seminars, Inc. | Tactile enhancement method for progressively optimized reading |
US5895220A (en) * | 1992-01-21 | 1999-04-20 | Beller; Isi | Audio frequency converter for audio-phonatory training |
US20010046659A1 (en) * | 2000-05-16 | 2001-11-29 | William Oster | System for improving reading & speaking |
KR20030005922A (en) * | 2001-07-10 | 2003-01-23 | 류두모 | Head Set System of music control method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3415966A1 (en) * | 1984-04-28 | 1985-10-31 | Therapy Products Müller oHG, 2401 Ratekau | PLAN FOR LEARNING BY THE SUPERLEARNING METHOD |
EP0349599B2 (en) * | 1987-05-11 | 1995-12-06 | Jay Management Trust | Paradoxical hearing aid |
US6199076B1 (en) * | 1996-10-02 | 2001-03-06 | James Logan | Audio program player including a dynamic program selection controller |
2003
- 2003-07-08 AU AU2003243822A patent/AU2003243822A1/en not_active Abandoned
- 2003-07-08 EP EP03817307A patent/EP1649437A1/en not_active Withdrawn
- 2003-07-08 WO PCT/AU2003/000876 patent/WO2005004084A1/en active Application Filing
- 2003-07-08 CN CN03826757.8A patent/CN1802679A/en active Pending
- 2003-07-08 CA CA002531622A patent/CA2531622A1/en not_active Abandoned
2006
- 2006-01-06 US US11/327,635 patent/US20060177072A1/en not_active Abandoned
Non-Patent Citations (1)
- DATABASE WPI Derwent World Patents Index; AN 2003-378541, XP002984332
Also Published As
Publication number | Publication date |
---|---|
US20060177072A1 (en) | 2006-08-10 |
CA2531622A1 (en) | 2005-01-13 |
EP1649437A1 (en) | 2006-04-26 |
AU2003243822A1 (en) | 2005-01-21 |
AU2003243822A2 (en) | 2005-01-21 |
CN1802679A (en) | 2006-07-12 |
Legal Events
- WWE | WIPO information: entry into national phase | Ref document number: 03826757.8; Country of ref document: CN
- AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW
- AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
- 121 | EP: the EPO has been informed by WIPO that EP was designated in this application
- WWE | WIPO information: entry into national phase | Ref document number: 544517; Country of ref document: NZ
- ENP | Entry into the national phase | Ref document number: 2531622; Country of ref document: CA
- WWE | WIPO information: entry into national phase | Ref document number: 2003243822; Country of ref document: AU; Ref document number: 11327635; Country of ref document: US
- WWE | WIPO information: entry into national phase | Ref document number: 2003817307; Country of ref document: EP
- WWE | WIPO information: entry into national phase | Ref document number: 142/MUMNP/2006; Country of ref document: IN
- WWP | WIPO information: published in national office | Ref document number: 2003817307; Country of ref document: EP
- WWP | WIPO information: published in national office | Ref document number: 11327635; Country of ref document: US
- NENP | Non-entry into the national phase | Ref country code: JP