WO2013021322A1 - Apparatus for performing relaxing activities and solving psychological problems - Google Patents

Apparatus for performing relaxing activities and solving psychological problems

Info

Publication number
WO2013021322A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
playing
acoustic
voice signal
user
Prior art date
Application number
PCT/IB2012/053961
Other languages
English (en)
French (fr)
Inventor
Felice SAGRILLO
Original Assignee
Cavalli, Manuele
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.) 2011-08-05
Filing date 2012-08-02
Publication date 2013-02-14
Application filed by Cavalli, Manuele filed Critical Cavalli, Manuele
Publication of WO2013021322A1 publication Critical patent/WO2013021322A1/en

Links

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M - DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M21/02 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M - DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/0005 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
    • A61M2021/0027 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M - DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M2205/00 - General characteristics of the apparatus
    • A61M2205/33 - Controlling, regulating or measuring
    • A61M2205/3375 - Acoustical, e.g. ultrasonic, measuring means
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M - DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M2205/00 - General characteristics of the apparatus
    • A61M2205/80 - General characteristics of the apparatus voice-operated command

Definitions

  • the present invention relates to an apparatus for performing relaxing activities and solving psychological problems.
  • NLP is an acronym for Neuro-Linguistic Programming.
  • NLP was created in the US in the 1970s by a linguist and a mathematician, and its purpose is to help individuals modify the state of their undesired emotions.
  • the assumption at the basis of NLP is that there are automatic behaviour procedures based on a sequence of sensory accesses (visual, auditory or based on proprioceptive sensations), and that such procedures can be modelled in order to help people give up their limitations and undesired behaviour (phobias, negative emotions, depressions, etc.).
  • the technique is based on the ability to dissociate the person from an undesired situation and to "re-teach" his/her own brain a new procedure to face and solve such a problem.
  • in NLP there are some techniques that use structures that lie beneath the sensory perceptions.
  • the memory of a specific event referred to a single individual may be formed by a substantially static image or by a sort of filmed sequence.
  • the image that the individual can see in his/her mind may be big or small.
  • the image or the filmed sequence may be visualized, in his/her mind, in colour or in black and white. The definition of all these factors identifies a part of the structure of the syntax used by the individual to mentally represent the event.
  • the elaboration, in NLP, does not change the content of the thought, but it changes the type of emotion that is associated to such thought.
  • the resource is in general represented by an event whose nature is different from the source of the problem and wherein the person felt well.
  • the apparatus that is object of the invention provides a very important contribution to facing and solving the problem of holding the representation: as will become clearer below, the voice is positioned at the right place of the perceptive space, thus becoming a sort of modern "mantra", which guides the user to hold the problem and the resource for such problem in order to carry out the desired change.
  • the individual may improve his/her action and boost his/her performance.
  • the invention facilitates the application of the process and, in particular, it allows any individual to intervene on himself/herself in full autonomy, without necessarily needing the direct intervention of a specialist.
  • One of the purposes of the apparatus that is object of the invention is to amplify the effects produced by the thought process of the individual: in practice, the system helps the mind to localize in space the source of a previously recorded sound of a positive sentence (containing a "positive representation/resource") or of a negative sentence. The orientation in space of such sounds facilitates the application of therapeutic or relaxing mental techniques, deriving in general from NLP. If needed, the tool allows the recorded sound of several voices that represent a positive resource to be added.
  • the aim of the present invention is to provide an apparatus for performing relaxing activities and solving psychological problems that allows an NLP technique to be applied in a simple and versatile way.
  • a further aim of the present invention is to allow a single individual to apply on himself/herself psychotherapeutic and/or relaxing techniques, without a direct guidance from a specialist in the field.
  • Another aim of the present invention is to provide an apparatus for performing the aforementioned activities which has a simple structure and moderate costs.
  • - figure 1 shows a block diagram of the apparatus according to the present invention
  • - figure 2 schematically shows some parts of a control panel that can be used in the apparatus of figure 1.
  • number 1 refers to the apparatus for performing relaxing activities and solving psychological problems as a whole, according to the present invention.
  • the apparatus 1 comprises first of all a receiving device 10 for receiving voice signals.
  • the device 10 may be, for example, a microphone, or another similar device, through which it is possible to detect and to record the voice of a user.
  • the apparatus 1 further comprises a first memory register 20 associated to the receiving device and configured for storing a first voice signal S1.
  • the first voice signal S1 is representative of a voice of a user in a first situation.
  • the first voice signal S1 is representative of a voice of the user in a first situation in which said user is in a positive mood.
  • the user expresses, by means of his/her own voice, a positive emotion, represented, for example, by the content and/or by the tone of the vocal expression.
  • a positive emotion represented, for example, by the content and/or by the tone of the vocal expression.
  • the receiving device 10 and the first memory register 20 allow to detect and to store what is expressed by the user.
  • the first voice signal S1 is a monophonic signal, namely it is detected in monophonic mode.
  • the apparatus 1 is provided with a data request module (not shown), appropriately set to ask the user to insert a first voice signal S1, representative of a positive sensation of the user himself/herself.
  • the request module may ask the user, by an audio and/or a visual signal, to express, by means of his/her voice, a sentence, a message, an exclamation that may represent and recall a positive sensation for the user himself/herself, such as, for example, success, satisfaction, happiness, etc.
  • the user may express himself/herself, so that the receiving device 10 may detect what is said, and the first memory register 20 may store it.
  • the apparatus 1 comprises a processing unit 30 for determining a first main virtual position P1main of a first virtual acoustic source AS1 for playing a first determinate audio signal.
  • the processing unit 30 may be implemented as a conventional computer, appropriately programmed in order to perform the operations that are described and claimed herein.
  • the apparatus 1 further comprises an acoustic player 40, associated to at least the first memory register 20 and to the processing unit 30 in order to play the first voice signal S1.
  • the first voice signal S1 is played simulating the first virtual acoustic source AS1 playing the same first voice signal S1, namely as if the first voice signal S1 were played from an actual acoustic source physically placed in the first main virtual position P1main.
  • the acoustic player 40 comprises 8-way headphones.
  • Each half-part comprises a plurality of small loudspeakers; by appropriately regulating the sound played by each loudspeaker it is possible to simulate that the sound comes from different points in space (a minimal gain-panning sketch of this idea is given after this list).
  • the apparatus 1 further comprises a user interface 50, in order to allow the user to interact with the apparatus 1 itself.
  • the aforementioned data request module may be part of the user interface 50.
  • the user interface 50 may comprise a set of hardware/software components which allow the user to insert data/commands in the apparatus 1 , and allow the apparatus 1 , in its turn, to provide data/information/feedback to the user.
  • the user interface 50 may be partly made by the aforementioned computer, coupled with one or more peripherals (monitor, mouse, etc.) which are directly employed by the user.
  • the user interface 50 is advantageously configured in order to allow the user to define the aforementioned first main virtual position P1main with the highest possible precision.
  • the user interface 50 is configured to perform a series of operations: a) cooperating with the processing unit 30 for sending to the acoustic player 40 a command for playing the first voice signal S1 simulating the first virtual acoustic source AS1 positioned in a first initial virtual position P1start; in practice, a first initial virtual position P1start is defined, wherein the first virtual acoustic source AS1 is initially positioned in a virtual way.
  • the user can thus listen to the first voice signal S1 as if the latter were emitted by an actual acoustic source positioned in the physical position corresponding to the first initial virtual position P1start;
  • b) receiving first modifying signals MS1 for modifying the first initial virtual position P1start; such signals MS1 may be inserted by the user after listening to the first voice signal S1, played by the first virtual source AS1 in the first initial virtual position P1start.
  • the modifying signals MS1 thus allow one or more first modified virtual positions P1changed to be obtained for the first virtual acoustic source AS1.
  • the user, through the user interface 50, can modify the position from which he/she hears the first voice signal S1 coming; in particular, the user will identify, by one or more attempts, the position that gives him/her the strongest feelings when the first voice signal S1 is played (a minimal position-adjustment sketch is given after this list).
  • Figure 2 schematically shows a possible control panel that can be used by the user to modify the position of the first virtual acoustic source AS1. It can be noted how the user, by means, for example, of a conventional mouse, may define the first main virtual position P1main in terms of "front/rear", "right/left" and "up/down" with respect to his/her listening point.
  • the real activity can start.
  • Such activity consists in listening repeatedly to the first determinate audio signal, according to modalities defined by the user himself/herself.
  • the first determinate audio signal coincides with the first voice signal S1.
  • the first voice signal S1 is thus listened to from said first main virtual position P1main, according to the modalities defined by the user.
  • the first determinate audio signal can also be different from the first voice signal S1: it may be, for example, a piece of music previously selected and stored by the user.
  • the modalities defined by the user comprise, above all, the identification of the first main virtual position P1main.
  • the user interface 50 is configured for receiving one or more set-up commands Y1, for modifying playing tones, volume and/or speed of said first determined voice signal S1 (a minimal sketch of such set-up adjustments is given after this list).
  • the user may define, according to different points of view, the playing characteristics of the first voice signal S1 , so that the latter may provoke the most intense sensations in the user himself/herself.
  • the user interface 50 is also configured to allow the user to insert a plurality of first activation signals X1.
  • Each activation signal X1 starts a cooperation between the user interface 50 and the processing unit 30 for sending a command to the acoustic player 40.
  • the first voice signal S1 is thus played simulating the first acoustic source AS1 positioned in the first main virtual position P1main.
  • the user can press a key on a keyboard or exert a pressure on a pad in order to start playing the first voice signal S1.
  • the first voice signal is automatically repeated in a continuous way; the user, through the user interface 50, can adjust the frequency of such repetition, increasing or decreasing the latter depending on the emotions provoked in the user himself/herself (a minimal repetition-scheduling sketch is given after this list).
  • the activity performed is based not only on listening to an expression connected to a positive sensation, but also on listening to a message connected to a negative sensation.
  • the "positive" signal may be advantageously alternated with the "negative" signal, so that the positive sensation recalled by the first signal can reduce or even nullify the negative sensation recalled by the second signal.
  • the apparatus 1 further comprises a second memory register 22 associated with the receiving device 10 and configured for storing a second voice signal S2, representative of a voice of said user in a second situation.
  • the second voice signal S2 is representative of a voice of the user in a second situation in which said user is in a negative mood (for example sadness, disappointment, fear due to an event or to a phobia, etc.).
  • the second voice signal S2 may be detected by the first receiving device 10 and stored in the aforementioned memory register 22.
  • the second voice signal S2 is a monophonic signal, namely it is detected in monophonic mode.
  • the second voice signal S2 may be acquired by the aforementioned data request module, which can be appropriately set to ask the user to insert also the second voice signal S2, representative of a negative sensation of the user himself/herself.
  • the request module may ask the user, by an audio and/or a visual signal, to express, by means of his/her voice, a sentence, a message, an exclamation that may represent and recall a negative sensation for the user himself/herself, such as, for example, failure, discontent, sadness, etc.
  • the user may express himself/herself, so that the receiving device 10 may detect what is said, and the second memory register 22 may store it.
  • the processing unit 30, indeed, is configured for determining a second main virtual position P2main of a second virtual acoustic source AS2 for playing a second determined voice signal.
  • the acoustic player 40 is operatively associated to the second memory register 22 and to the processing unit 30 for playing said second voice signal S2 simulating said second virtual acoustic source AS2 playing said second voice signal S2.
  • the user can define the second main virtual position P2main, by performing a series of attempts.
  • the user interface 50 is thus configured for performing, with reference to the second voice signal S2, operations analogous to those described above for the first voice signal S1: playing the second voice signal S2 simulating the second virtual acoustic source AS2 positioned in a second initial virtual position, and receiving second modifying signals for modifying such position until the second main virtual position P2main is identified.
  • the second main virtual position P2main is then used for subsequently playing the second determinate audio signal through the acoustic player 40 for performing the activity of relaxing and/or of solving psychological problems.
  • the second determinate audio signal coincides with the second voice signal S2.
  • the second voice signal S2 is thus listened to from said second main virtual position P2main, according to the modalities defined by the user.
  • the second determinate audio signal can also be different from the second voice signal S2: it may be, for example, a piece of music previously selected and stored by the user.
  • the user can insert one or more second set-up commands Y2, for modifying playing tones, volume and/or speed of playing of the second determined voice signal S2. Also these modifications are performed depending on the sensations perceived by the user after listening to the second voice signal S2.
  • the user interface 50 is also configured for receiving a plurality of second activation signals X2, following each of which it cooperates with the processing unit 30 for sending to said acoustic player 40 a command for playing the second determined voice signal S2 simulating the second acoustic virtual source AS2 positioned in the second main virtual position P2main.
  • the user can press a key on a keyboard or exert a pressure on a pad in order to start playing the second voice signal S2.
  • the playing of the first voice signal S1 is automatically alternated with the playing of the second voice signal S2.
  • the user interface 50 is conveniently configured for receiving from said user an alternation command Z for regulating the alternated playing of the first voice signal S1, simulating the first acoustic virtual source AS1 positioned in the first main virtual position P1main, and of the second voice signal S2, simulating the second acoustic virtual source AS2 positioned in the second main virtual position P2main.
  • the alternation command Z may refer to the playing frequency of the first and of the second voice signal S1, S2.
  • the alternation command Z may be simply inserted by the user by pressing a key of a keyboard or an appropriate pad, in order to start playing the subsequent signal (the first or the second, depending on which was the last signal played, respectively the second or the first one).
  • the processing unit 30 is configured to control the acoustic player 40, after having received the alternation command Z from the user interface 50, in order to alternately play the first voice signal S1 from the first main virtual position P1main and the second voice signal S2 from the second main virtual position P2main (a minimal alternation sketch is given after this list).
  • one or more auxiliary voice signals Saux may be provided in addition to the first voice signal S1, preferably stored in the aforementioned first memory register 20.
  • the auxiliary voice signals Saux are stored similarly to the first voice signal S1: each auxiliary voice signal corresponds to a respective positive situation of the user.
  • the user may not only alternate the first voice signal S1 with the second voice signal S2 (as described above), but he/she can also listen to one or more auxiliary voice signals Saux, in addition to the first voice signal S1 , before switching to the second voice signal S2.
  • the listening to the second voice signal S2 may be repeated until the user needs a "positive" support; in this circumstance, the user may then start listening again to the first voice signal S1 and to the auxiliary signals Saux.
  • the processing unit 30 is configured to process the harmonic components having a frequency higher than 5 kHz of the second determinate audio signal, in particular of the second voice signal S2, so that such processed signal is played from the second main virtual position P2main (a spectral-matching sketch, under stated assumptions, is given after this list).
  • in this way, the second determinate audio signal, and in particular the second voice signal S2, has, for frequencies higher than 5 kHz, a profile substantially equal to the one of the first determinate signal, in particular of the first voice signal S1.
  • the apparatus 1 may be used, as said, for performing relaxing activities and solving psychological problems.
  • the first main virtual position P1main is identified.
  • the second voice signal S2 is recorded, and the second main virtual position P2main is also defined.
  • the user may command the playing of the first voice signal S1, possibly alternating the latter with the second voice signal S2, according to the modalities (volume, frequency, etc.) he/she prefers.
  • the apparatus 1 may allow remarkable results to be reached in terms of improvement of the user's condition, such as, for example, relaxation, overcoming of traumas or phobias, etc.
  • an important advantage of the invention is that the apparatus 1 has a simple and cost-effective structure.
  • the apparatus 1 also allows very good results to be obtained even when it is used by the user alone, without any supervision of an expert or of a therapist.

Landscapes

  • Health & Medical Sciences (AREA)
  • Anesthesiology (AREA)
  • Pain & Pain Management (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Psychology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Hematology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Stereophonic System (AREA)
  • Electrochromic Elements, Electrophoresis, Or Variable Reflection Or Absorption Elements (AREA)
PCT/IB2012/053961 2011-08-05 2012-08-02 Apparatus for performing relaxing activities and solving psychological problems WO2013021322A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ITMI2011A001509 2011-08-05
IT001509A ITMI20111509A1 (it) 2011-08-05 2011-08-05 Apparatus for performing relaxing activities and solving psychological problems

Publications (1)

Publication Number Publication Date
WO2013021322A1 true WO2013021322A1 (en) 2013-02-14

Family

ID=44584386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2012/053961 WO2013021322A1 (en) 2011-08-05 2012-08-02 Apparatus for performing relaxing activities and solving psychological problems

Country Status (2)

Country Link
IT (1) ITMI20111509A1 (it)
WO (1) WO2013021322A1 (it)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6702767B1 (en) * 2001-09-25 2004-03-09 Nelson R. Douglas Multisensory stimulation system and method
US20060252979A1 (en) * 2005-05-09 2006-11-09 Vesely Michael A Biofeedback eyewear system
US20080319252A1 (en) * 2005-10-17 2008-12-25 Diversionary Therapy Technologies Pty Ltd Diversionary Therapy Apparatus and Methods and Interactive Devices
WO2009052490A1 (en) * 2007-10-18 2009-04-23 Carnett John B Method and apparatus for soothing a baby
US20110152729A1 (en) * 2009-02-03 2011-06-23 Tsutomu Oohashi Vibration generating apparatus and method introducing hypersonic effect to activate fundamental brain network and heighten aesthetic sensibility

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3216480A3 (de) * 2016-03-09 2017-10-25 "Wortkampfkunst" - Kommunikationsschule für Dienstleistungen, Industrie und Handwerk e.K. Device for mental self-training

Also Published As

Publication number Publication date
ITMI20111509A1 (it) 2013-02-06

Similar Documents

Publication Publication Date Title
US11625994B2 (en) Vibrotactile control systems and methods
US8638966B2 (en) Haptic chair sound enhancing system with audiovisual display
JP7380775B2 (ja) Acoustic device, acoustic device control method, and control program
US9679546B2 (en) Sound vest
KR100941135B1 (ko) Brainwave induction apparatus and signal generation method
TW201820315A (zh) Improved audio headphone device, sound playback method thereof, and computer program
KR101540561B1 (ko) Active neurofeedback apparatus and control method thereof
Palmer et al. Vibrational Music Therapy with D/deaf clients
Fontana et al. An exploration on the influence of vibrotactile cues during digital piano playing
US11386920B2 (en) Interactive group session computing systems and related methods
WO2013021322A1 (en) Apparatus for performing relaxing activities and solving psychological problems
KR20150134561A (ko) White-noise generating headset for stress relief and improved concentration, and white-noise generating method using the same
Williams I’m not hearing what you’re hearing: The conflict and connection of headphone mixes and multiple audioscapes
JP5947438B1 (ja) Performance technique drawing evaluation system
JP2018064216A (ja) Haptic data generation device, electronic apparatus, haptic data generation method, and control program
KR102070300B1 (ko) Hearing aid tuning method, computer program and system
US20230253004A1 (en) Apparatus and a system for speech and/or hearing therapy and/or stimulation
Morita Sonic art for intersensory listening experience
WO2024080009A1 (ja) Acoustic device, acoustic control method, and acoustic control program
US20230351868A1 (en) Vibrotactile control systems and methods
US20240181201A1 (en) Methods and devices for hearing training
US12008892B2 (en) Vibrotactile control systems and methods
KR20130096339A (ko) Brainwave induction apparatus and signal generation method
JP2024517047A (ja) Method and device for hearing training
JP2009000248A (ja) Game machine

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12761671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12761671

Country of ref document: EP

Kind code of ref document: A1