WO2024014089A1 - Information processing device, information processing program, and information processing system - Google Patents

Information processing device, information processing program, and information processing system

Info

Publication number
WO2024014089A1
WO2024014089A1 (PCT/JP2023/016285)
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
pressure
information processing
audio data
sensor section
Prior art date
Application number
PCT/JP2023/016285
Other languages
French (fr)
Japanese (ja)
Inventor
隆俊 横山
Original Assignee
株式会社サイドピーク
Priority date
Filing date
Publication date
Application filed by 株式会社サイドピーク filed Critical 株式会社サイドピーク
Publication of WO2024014089A1

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G: HOUSEHOLD OR TABLE EQUIPMENT
    • A47G 9/00: Bed-covers; Counterpanes; Travelling rugs; Sleeping rugs; Sleeping bags; Pillows
    • A47G 9/10: Pillows
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H 33/00: Other toys
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H 5/00: Musical or noise-producing devices for additional toy effects other than acoustical
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output

Definitions

  • the present invention relates to an information processing device, an information processing program, and an information processing system.
  • the information processing device disclosed in Patent Document 1 includes a control section that controls sound output by the device based on the detected state of the device, and the control section continuously changes the output mode of the synthesized sound that the device can output in its normal state, according to the amount of change in the state. Further, a processor controls the output of sound by the device based on the detected state of the device, the controlling continuously changing the output mode of the synthesized sound that the device can output in its normal state according to the amount of change in the state.
  • in Patent Document 1, a pressure sensor is cited as an example of a sensor that detects the state of the device, and the audio output is varied continuously according to the amount of change in the pressure value detected by that pressure sensor.
  • however, because the state of the device is detected through a single one-dimensional parameter (the pressure value), the device cannot detect how it is being touched, for example whether it is being stroked, kneaded, or pinched.
  • An object of the present invention is to provide an information processing device, an information processing program, and an information processing system that emit sounds corresponding to how an object is touched.
  • one aspect of the present invention provides the following information processing device, information processing program, and information processing system.
  • [1] An information processing device including a control unit that acquires pressure data from a plurality of pressure sensors provided in a sensor section, selects a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors, and reproduces audio data corresponding to the selected way of touching.
  • [2] The information processing device according to [1], wherein the sensor section has pressure sensors in a surface layer and a deep layer of the sensor section, and the control unit selects from at least stroking, pinching, and kneading as the plurality of candidates for the way of touching, based on a combination of changes in the pressure data of the surface-layer pressure sensor and changes in the pressure data of the deep-layer pressure sensor.
  • [3] The information processing device according to [1], wherein the audio data is pre-stored audio data or audio data downloaded from an external source.
  • [4] An information processing program comprising: acquisition means for acquiring pressure data from a plurality of pressure sensors provided in a sensor section; selection means for selecting a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors; and transmission means for outputting audio data corresponding to the selected way of touching for external playback.
  • [5] An information processing system comprising: a sensor section provided with a plurality of pressure sensors; and a terminal that acquires pressure data from the plurality of pressure sensors provided in the sensor section, selects a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors, and reproduces audio data corresponding to the selected way of touching.
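The device, program, and system summarized in [1] to [5] share one pipeline: acquire pressure data, select a way of touching from predetermined candidates, and reproduce the matching audio data. A minimal sketch of that pipeline follows; every function name, threshold, and file name is an illustrative assumption, not something stated in the publication:

```python
# Hypothetical sketch of the claimed pipeline: changes in pressure data
# ("deltas") are matched against predetermined candidate touch types, and
# the corresponding audio data is looked up for playback.
def select_touch(deltas, candidates):
    """Return the first candidate whose predicate matches the deltas."""
    for touch, matches in candidates.items():
        if matches(deltas):
            return touch
    return None  # no predetermined candidate matched

# Candidate conditions combine the changes of several sensors (assumed values).
candidates = {
    "stroking": lambda d: d["surface"] > 5 and d["deep"] <= 5,
    "kneading": lambda d: d["surface"] > 5 and d["deep"] > 5,
}
audio = {"stroking": "stroke.wav", "kneading": "knead.wav"}  # audio data

touch = select_touch({"surface": 8, "deep": 2}, candidates)
assert audio[touch] == "stroke.wav"  # light surface-only change -> stroking
```

The essential point of the claims is that classification uses a *combination* of per-sensor changes, not a single pressure value, which is what each predicate above consumes.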
  • FIG. 1 is a schematic diagram showing an example of the configuration of an information processing system according to an embodiment.
  • FIG. 2 is a block diagram illustrating a configuration example of an information processing system according to an embodiment.
  • FIG. 3 is a block diagram showing an example of the configuration of a portable information terminal.
  • FIG. 4 is a schematic diagram showing an example of the configuration of the sensor section.
  • FIG. 5 is a schematic perspective view showing an example of the configuration of the sensor.
  • FIG. 6 is a schematic diagram showing an example of the configuration of selection conditions.
  • FIG. 7A is a schematic diagram showing how a user touches the sensor unit and the amount of change in the pressure sensor.
  • FIG. 7B is a schematic diagram showing how the user touches the sensor unit and the amount of change in the pressure sensor.
  • FIG. 7C is a schematic diagram showing how the user touches the sensor section and the amount of change in the pressure sensor.
  • FIG. 8 is a flowchart for explaining the operation of the information processing system.
  • FIG. 9 is a schematic diagram showing another example of the configuration of the sensor section.
  • FIG. 1 is a schematic diagram showing an example of the configuration of an information processing system according to an embodiment.
  • FIG. 2 is a block diagram showing a configuration example of an information processing system according to an embodiment.
  • This information processing system includes a body pillow 4 having an interface device 1, a sensor section 2, and a speaker 40, as well as a portable information terminal 3 that communicates with the interface device 1.
  • the sensor section 2 includes a plurality of pressure sensors, detects the touch of the user 5, and outputs a signal to the interface device 1.
  • the interface device 1 AD-converts the signal output from the sensor unit 2 in its control unit 10 and transmits it to the mobile information terminal 3 as pressure data; it also receives audio data returned by the mobile information terminal 3 in response to the transmitted pressure data, and the audio output unit 11 drives the speaker 40 to output the audio.
  • the mobile information terminal 3 determines the type of contact of the user 5 with the sensor unit 2 based on the pressure data received from the interface device 1, and determines audio information to be output based on the determination result.
  • the sensor unit 2 disposed in the body pillow is touched by the user 5, for example, by stroking, kneading, pinching, etc., but is not limited thereto.
  • the interface device 1 includes electronic components such as a CPU (Central Processing Unit) having a function for processing information, an AD/DA converter, an amplifier, and a flash memory.
  • the sensor unit 2 contains one or more sensors; as an example, each sensor is a capacitive pressure sensor, but a pressure sensor using a resistive-film method, an optical method, or the like may also be used, and piezoelectric elements, MEMS devices, etc. may be adopted instead.
  • the mobile information terminal 3 is an information processing device such as a smartphone, a tablet terminal, or a PC (Personal Computer), and includes electronic components such as a CPU and a flash memory that have a function for processing information in the main body.
  • connection between the mobile information terminal 3 and the interface device 1 uses wired communication such as USB (Universal Serial Bus) or wireless communication such as Bluetooth.
  • FIG. 3 is a block diagram showing an example of the configuration of the mobile information terminal 3.
  • the portable information terminal 3 includes a control unit 30 that is composed of a CPU and the like and controls each unit and executes various programs, and a storage unit 31 that is composed of a storage medium such as a flash memory and stores information.
  • the control unit 30 includes a communication unit that communicates with the outside via a network.
  • the control unit 30 functions as pressure data acquisition means 300, audio data selection means 301, audio data transmission means 302, billing operation means 303, etc. by executing an application 310 described later.
  • the pressure data acquisition means 300 acquires pressure data from the interface device 1.
  • the audio data selection means 301 acquires corresponding audio data from among the plurality of candidates for the audio data 311 based on the pressure data acquired by the pressure data acquisition means 300 and the selection condition 312.
  • the audio data transmitting means 302 transmits the audio data selected by the audio data selecting means 301 to the interface device 1.
  • the billing operation means 303 accesses the external audio data database 6 in response to an operation by the user 5 on the operation section of the mobile information terminal 3, purchases audio data or acquires it free of charge, and stores it in the storage unit 31.
  • the storage unit 31 stores an application 310 that causes the control unit 30 to operate as each of the above-mentioned means 300-303, audio data 311, selection conditions 312, and the like.
  • FIG. 4 is a schematic diagram showing an example of the configuration of the sensor section 2.
  • the sensor section 2 includes a surface sensor 20s, which is a pressure sensor provided on the surface of the sensor section 2, and a deep sensor 20i, which is a pressure sensor provided in the deep layer.
  • the "surface layer" refers to the part of the interior of the sensor section 2 that is close to the surface touched by the user 5, and the "deep layer" refers to the part that is farther from that surface.
  • FIG. 5 is a schematic perspective view showing an example of the configuration of the sensor.
  • the sensor 20 includes conductive cloths 200a and 200b, each having a terminal, and a nonconductor 201 such as a sponge provided between them; the distance between the conductive cloths 200a and 200b changes depending on the pressure applied between them, and the capacitance changes with that distance.
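Since the sensor 20 behaves as a parallel-plate capacitor whose gap shrinks under pressure, its response can be approximated with the standard C = ε0·εr·A/d model. The sketch below is illustrative only; the plate area, relative permittivity, and gap values are assumptions, not figures from the publication:

```python
# Parallel-plate model of the capacitive sensor 20: two conductive cloths
# (200a, 200b) separated by a compressible sponge (201). Values are made up.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=1.5):
    """C = eps0 * eps_r * A / d; pressing compresses the sponge, shrinking d."""
    return EPS0 * eps_r * area_m2 / gap_m

c_rest = capacitance(0.01, 5e-3)     # sensor at rest (assumed 5 mm gap)
c_pressed = capacitance(0.01, 2e-3)  # sponge compressed by a touch (2 mm)
assert c_pressed > c_rest            # capacitance rises as the gap closes
```

This is why the amount of change in capacitance can stand in for the amount of change in pressure throughout the embodiment.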
  • FIG. 6 is a schematic diagram showing a configuration example of the selection condition 312.
  • the selection conditions 312 associate each way the user 5 touches the sensor unit 2 with the amount of change in the pressure values of the surface sensor 20s and the deep sensor 20i (the amount of change in the capacitance of the sensor 20), and with the pattern of change in those pressure values (the pattern of change in the capacitance of the sensor 20).
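As one hedged illustration, table-like selection conditions 312 and the audio data 311 they point to could be encoded as plain lookup tables. Every descriptor, value, and file name below is invented for illustration; the publication describes only the structure (touch type, change amounts, change pattern), not concrete contents:

```python
# Hypothetical encoding of selection conditions 312 as table-like information:
# each touch type is paired with the expected change in the surface sensor 20s
# and the deep sensor 20i, plus a change pattern. All entries are assumptions.
SELECTION_CONDITIONS_312 = {
    "stroking": {"surface": "small", "deep": "none", "pattern": "repeated light peaks"},
    "pinching": {"surface": "large", "deep": "none", "pattern": "single sharp peak"},
    "kneading": {"surface": "large", "deep": "large", "pattern": "slow peaks in both layers"},
}
AUDIO_DATA_311 = {  # audio data 311: touch type -> audio file (illustrative)
    "stroking": "stroke_response.wav",
    "pinching": "pinch_response.wav",
    "kneading": "knead_response.wav",
}

def audio_for(touch):
    """Look up the audio data corresponding to a selected way of touching."""
    return AUDIO_DATA_311[touch] if touch in SELECTION_CONDITIONS_312 else None
```

Note that, per the embodiment, the conditions need not be a table at all; a trained model could replace `SELECTION_CONDITIONS_312` as long as touch type correlates with the pressure data.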
  • FIG. 8 is a flowchart for explaining the operation of the information processing system.
  • the user 5 turns on the power of the body pillow 4 (interface device 1 and sensor section 2) in order to use the body pillow 4, and also launches the application 310 of the mobile information terminal 3.
  • This operation starts the application 310 of the interface device 1 and the mobile information terminal 3 (S10).
  • the interface device 1 and the portable information terminal 3 communicate with each other wirelessly.
  • the control unit 10 of the interface device 1 may automatically establish communication with the mobile information terminal 3 when the interface device 1 is started, and start the application 310 (directly or in the background).
  • the user 5 uses the body pillow 4 by touching the sensor section 2, as shown in FIGS. 7A to 7C, which will be described later.
  • the sensor 20 of the sensor section 2 outputs a signal depending on the way the user 5 touches it.
  • the interface device 1 performs AD conversion or the like on the output signal of the sensor section 2 and transmits it to the mobile information terminal 3 as pressure data.
  • the pressure data acquisition means 300 of the mobile information terminal 3 receives pressure data from the interface device 1 and acquires a pressure value (S11).
  • the audio data selection means 301 of the mobile information terminal 3 refers to the selection conditions 312 (S12) and, based on the pressure data acquired by the pressure data acquisition means 300 and the selection conditions 312 shown in FIG. 6, determines the corresponding audio data from among the audio data 311 (S13). Specifically, as shown in FIGS. 7A to 7C below, when the user 5 touches the sensor 20, the amount of change in the output value of the pressure sensor 20 differs depending on how the user 5 touches it, and this difference is used to define the selection conditions 312.
  • 7A to 7C are schematic diagrams showing how the user 5 touches the sensor unit 2 and the amount of change in the pressure sensor 20.
  • as shown in FIG. 7A, when the amount of change in the pressure value of the surface sensor 20s exceeds a threshold set to a small value (small relative to the dynamic range of the pressure sensor 20) while the amount of change in the pressure value of the deep sensor 20i does not exceed its threshold, the audio data selection means 301 determines that the way of touching is "stroking" among the selection conditions 312.
  • as shown in FIG. 7B, when the amount of change in the pressure value of the surface sensor 20s exceeds a threshold set to a large value (large relative to the dynamic range of the pressure sensor 20) while the amount of change in the pressure value of the deep sensor 20i does not exceed its threshold, the audio data selection means 301 determines that the way of touching is "pinching" among the selection conditions 312.
  • as shown in FIG. 7C, when the amount of change in the pressure value of the surface sensor 20s exceeds its large threshold and the amount of change in the pressure value of the deep sensor 20i also exceeds its threshold, the audio data selection means 301 determines that the way of touching is "kneading" among the selection conditions 312.
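The three threshold rules above can be condensed into a small classifier. The numeric thresholds are assumptions chosen only to make the small/large distinction concrete; the publication does not give concrete values:

```python
# Illustrative classifier for the three touch types described above:
# "stroking" exceeds a small surface threshold, "pinching" a large one,
# and "kneading" exceeds thresholds on both the surface and deep sensors.
SURFACE_SMALL = 5   # small change threshold for surface sensor 20s (assumed)
SURFACE_LARGE = 40  # large change threshold for surface sensor 20s (assumed)
DEEP_LARGE = 40     # change threshold for deep sensor 20i (assumed)

def classify_touch(surface_delta, deep_delta):
    """Map pressure-change amounts to a touch type, or None if no match."""
    if surface_delta >= SURFACE_LARGE and deep_delta >= DEEP_LARGE:
        return "kneading"  # strong change in both layers (FIG. 7C case)
    if surface_delta >= SURFACE_LARGE:
        return "pinching"  # strong surface-only change (FIG. 7B case)
    if surface_delta >= SURFACE_SMALL:
        return "stroking"  # light surface-only change (FIG. 7A case)
    return None            # no predetermined candidate matched
```

The order of the checks matters: the most restrictive condition (both layers) is tested first, so a kneading touch is never misreported as pinching.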
  • selection conditions 312 can also be prepared in advance for ways of touching other than those described above, so that other touch types can be handled as well. Selection conditions 312 for various ways of touching may also be prepared using machine learning or the like. Furthermore, the selection conditions 312 may be any information for which there is a correlation between the way of touching and the pressure values; they are not limited to table-like information and, when obtained through machine learning or the like, may exist as learning-result information.
  • the audio data selection means 301 of the mobile information terminal 3 acquires the corresponding audio data determined by the above operation from among the audio data 311 (S14).
  • the audio data transmitting means 302 of the mobile information terminal 3 transmits the audio data selected by the audio data selecting means 301 to the interface device 1.
  • the interface device 1 drives the speaker 40 based on the received audio data to output audio (S15).
  • in addition to the audio data prepared in advance in the mobile information terminal 3, the user 5 can operate the mobile information terminal 3 to select the audio data 311 (and the set of selection conditions 312 used to select that audio data 311) to be reproduced as a reaction of the sensor unit 2.
  • the billing operation means 303 accesses the audio data database 6 in response to an operation by the user 5 on an operation section (not shown) of the mobile information terminal 3, executes processing such as payment, downloads the audio data, and stores it in the storage unit 31. Through these operations, the variety of sounds output from the speaker 40 via the interface device 1 can be increased.
  • as described above, the sensor unit 2 is provided with a plurality of sensors, and selection conditions 312 are prepared that define the amount and pattern of change in each sensor's output value for each way the user 5 touches it. Because the audio data 311 is selected and reproduced based on these selection conditions 312, a sound can be emitted that depends on the way of touching.
  • in the above embodiment, the interface device 1, the sensor unit 2, and the mobile information terminal 3 are shown as separate devices, but they may be integrated in any combination, some or all of the functions of each device may be included in another device, or the system may be divided into four or more devices.
  • the sensor section 2 described in the above embodiment may be configured as described below.
  • FIG. 9 is a schematic diagram showing another example of the configuration of the sensor section 2.
  • the sensor section 2A is another configuration example of the sensor section 2, and includes a surface sensor 20s, which is a pressure sensor provided in the surface layer, a deep sensor 20i, which is a pressure sensor provided in the deep layer, and a balloon-shaped adjustment member 21.
  • the adjustment member 21 has a structure whose internal amount of air can be increased or decreased; by changing the amount of air, the bulge, hardness, etc. of the sensor section 2A can be changed, and the distance between the surface sensor 20s and the deep sensor 20i, among other things, can be adjusted.
  • the body pillow 4 may have a plurality of sensor sections 2, and may reproduce different audio data depending on how each sensor section 2 is touched.
  • each sensor section 2 may be configured to be replaceable with another sensor section 2 having a different feel and size according to the user's 5 preference.
  • different audio data 311 may be selected in response to replacement of another sensor section 2.
  • in the above embodiment, the means 300 to 303 of the control unit 30 are realized by a program, but all or part of them may be realized by hardware such as an ASIC.
  • the programs used in the above embodiments can also be provided by being stored in a recording medium such as a CD-ROM.
  • the above steps explained in the above embodiments can be replaced, deleted, added, etc. without changing the gist of the present invention.

Abstract

[Problem] To provide an information processing device, an information processing program, and an information processing system that emit audio corresponding to the way of touching. [Solution] A mobile information terminal (3) serving as an information processing device has a control unit (30) that functions as: pressure data acquisition means (300) for acquiring pressure data from a plurality of pressure sensors provided in a sensor unit (2); audio data selection means (301) for selecting, from a plurality of predefined candidates, the way a user (5) touches the sensor unit (2) based on a combination of changes in the pressure data of the pressure sensors; and audio data transmission means (302) for outputting audio data corresponding to the selected way of touching so that it is played back via an interface device (1).

Description

Information processing device, information processing program, and information processing system
 The present invention relates to an information processing device, an information processing program, and an information processing system.
 As a conventional technique, an information processing device has been proposed that controls the output of sound based on the detected state of the device (see, for example, Patent Document 1).
 The information processing device disclosed in Patent Document 1 includes a control section that controls sound output by the device based on the detected state of the device, and the control section continuously changes the output mode of the synthesized sound that the device can output in its normal state, according to the amount of change in the state. Further, a processor controls the output of sound by the device based on the detected state of the device, the controlling continuously changing the output mode of the synthesized sound that the device can output in its normal state according to the amount of change in the state.
[Patent Document 1] International Publication No. 2020/129422
 According to the information processing device of Patent Document 1, a pressure sensor is cited as an example of a sensor that detects the state of the device, and the audio output is varied continuously according to the amount of change in the pressure value detected by that pressure sensor. However, because the state of the device is detected through a single one-dimensional parameter (the pressure value), the device cannot detect how it is being touched, for example whether it is being stroked, kneaded, or pinched.
 An object of the present invention is to provide an information processing device, an information processing program, and an information processing system that emit sounds corresponding to how an object is touched.
 In order to achieve the above object, one aspect of the present invention provides the following information processing device, information processing program, and information processing system.
[1] An information processing device including a control unit that acquires pressure data from a plurality of pressure sensors provided in a sensor section, selects a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors, and reproduces audio data corresponding to the selected way of touching.
[2] The information processing device according to [1], wherein the sensor section has pressure sensors in a surface layer and a deep layer of the sensor section, and the control unit selects from at least stroking, pinching, and kneading as the plurality of candidates for the way of touching, based on a combination of changes in the pressure data of the surface-layer pressure sensor and changes in the pressure data of the deep-layer pressure sensor.
[3] The information processing device according to [1], wherein the audio data is pre-stored audio data or audio data downloaded from an external source.
[4] An information processing program comprising: acquisition means for acquiring pressure data from a plurality of pressure sensors provided in a sensor section; selection means for selecting a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors; and transmission means for outputting audio data corresponding to the selected way of touching for external playback.
[5] An information processing system comprising: a sensor section provided with a plurality of pressure sensors; and a terminal that acquires pressure data from the plurality of pressure sensors provided in the sensor section, selects a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors, and reproduces audio data corresponding to the selected way of touching.
According to the inventions of claims 1, 4, and 5, a sound can be emitted that depends on the way of touching.
According to the invention of claim 2, at least stroking, pinching, and kneading can be selected as the plurality of candidates for the way of touching, based on a combination of changes in the pressure data of the surface-layer pressure sensor and changes in the pressure data of the deep-layer pressure sensor.
According to the invention of claim 3, pre-stored audio data or externally downloaded audio data can be used as the audio data.
FIG. 1 is a schematic diagram showing an example of the configuration of an information processing system according to an embodiment.
FIG. 2 is a block diagram showing a configuration example of the information processing system according to the embodiment.
FIG. 3 is a block diagram showing an example of the configuration of a portable information terminal.
FIG. 4 is a schematic diagram showing a configuration example of a sensor section.
FIG. 5 is a schematic perspective view showing a configuration example of a sensor.
FIG. 6 is a schematic diagram showing a configuration example of selection conditions.
FIG. 7A is a schematic diagram showing how a user touches the sensor section and the amount of change in a pressure sensor.
FIG. 7B is a schematic diagram showing how the user touches the sensor section and the amount of change in the pressure sensor.
FIG. 7C is a schematic diagram showing how the user touches the sensor section and the amount of change in the pressure sensor.
FIG. 8 is a flowchart for explaining the operation of the information processing system.
FIG. 9 is a schematic diagram showing another configuration example of the sensor section.
[Embodiment]
(Configuration of information processing system)
FIG. 1 is a schematic diagram showing an example of the configuration of an information processing system according to an embodiment. FIG. 2 is a block diagram showing a configuration example of the information processing system according to the embodiment.
 This information processing system includes a body pillow 4 having an interface device 1, a sensor section 2, and a speaker 40, together with a portable information terminal 3 that communicates with the interface device 1. The sensor section 2 includes a plurality of pressure sensors, detects the touch of a user 5, and outputs a signal to the interface device 1. The interface device 1 AD-converts the signal output from the sensor section 2 in its control unit 10 and transmits it to the portable information terminal 3 as pressure data; it also receives audio data returned by the portable information terminal 3 in response to the transmitted pressure data, and its audio output unit 11 drives the speaker 40 to output the audio. The portable information terminal 3 determines the type of contact of the user 5 with the sensor section 2 based on the pressure data received from the interface device 1, and determines the audio information to output based on the determination result. The sensor section 2 disposed in the body pillow is touched by the user 5, for example by stroking, kneading, or pinching, but the ways of touching are not limited to these.
 The interface device 1 includes, in its main body, electronic components such as a CPU (Central Processing Unit) having functions for processing information, an AD/DA converter, an amplifier, and flash memory.
 The sensor section 2 contains one or more sensors; as an example, each sensor is a capacitive pressure sensor, but a pressure sensor using a resistive-film method, an optical method, or the like may also be used, and piezoelectric elements, MEMS devices, etc. may be adopted instead.
 The portable information terminal 3 is an information processing device such as a smartphone, a tablet terminal, or a PC (Personal Computer), and includes in its main body electronic components such as a CPU and flash memory having functions for processing information.
 The connection between the portable information terminal 3 and the interface device 1 uses wired communication such as USB (Universal Serial Bus) or wireless communication such as Bluetooth.
 FIG. 3 is a block diagram showing an example of the configuration of the portable information terminal 3.
 The portable information terminal 3 includes a control unit 30, composed of a CPU and the like, which controls each unit and executes various programs, and a storage unit 31, composed of a storage medium such as flash memory, which stores information. The control unit 30 also includes a communication unit that communicates with the outside via a network.
 The control unit 30 functions as a pressure data acquisition means 300, an audio data selection means 301, an audio data transmission means 302, a billing operation means 303, and the like by executing an application 310 described later.
 The pressure data acquisition means 300 acquires pressure data from the interface device 1.
 The audio data selection means 301 acquires the corresponding audio data from among a plurality of candidates in the audio data 311, based on the pressure data acquired by the pressure data acquisition means 300 and the selection conditions 312.
 The audio data transmission means 302 transmits the audio data selected by the audio data selection means 301 to the interface device 1.
 The billing operation means 303 accesses an external audio data database 6 in response to operations by the user 5 on the operation section of the portable information terminal 3, acquires audio data for a fee or free of charge, and stores it in the storage unit 31.
 The storage unit 31 stores the application 310, which causes the control unit 30 to operate as each of the above-described means 300-303, the audio data 311, the selection conditions 312, and the like.
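The interplay of means 300 to 302 described above can be sketched as a minimal acquire-select-transmit loop. This is only an illustrative sketch: the function names, the dictionary-shaped pressure data, and the stub callables are all hypothetical stand-ins, not taken from the publication.

```python
# Minimal sketch of the loop run by control unit 30; the callables
# stand in for means 300-302 and the actual communication layer.
from typing import Callable

def handle_pressure_event(
    acquire: Callable[[], dict],          # pressure data acquisition means 300
    select: Callable[[dict], bytes],      # audio data selection means 301
    transmit: Callable[[bytes], None],    # audio data transmission means 302
) -> None:
    pressure = acquire()                  # S11: pressure data from interface device 1
    audio = select(pressure)              # S12-S14: match against selection conditions 312
    transmit(audio)                       # S15: send audio data back for playback

# Example with stub callables:
sent = []
handle_pressure_event(
    acquire=lambda: {"surface": 0.1, "deep": 0.0},
    select=lambda p: b"stroke_voice",
    transmit=lambda a: sent.append(a),
)
```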
 FIG. 4 is a schematic diagram showing a configuration example of the sensor section 2.
 The sensor section 2 includes a surface sensor 20s, which is a pressure sensor provided in the surface layer of the sensor section 2, and a deep sensor 20i, which is a pressure sensor provided in the deep layer. Here, the "surface layer" refers to the interior region close to the area of the surface of the sensor section 2 that the user 5 touches, and the "deep layer" refers to the interior region farther from it.
 FIG. 5 is a schematic perspective view showing a configuration example of the sensor.
 The sensor 20 includes conductive cloths 200a and 200b, each provided with a terminal, and a nonconductor 201 such as a sponge placed between them. Its capacitance changes according to the distance between the conductive cloths 200a and 200b, which in turn changes according to the pressure applied between them.
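The capacitance-versus-distance behavior described above follows the familiar parallel-plate relationship C = εA/d, so compressing the sponge (reducing d) raises C. The sketch below illustrates this numerically; the permittivity, plate area, and gap values are illustrative assumptions, not figures from the publication.

```python
# Parallel-plate capacitance model: C = eps_r * EPSILON_0 * A / d.
# When pressure compresses the nonconductor 201, the separation d
# between conductive cloths 200a and 200b shrinks and C rises.
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2: float, gap_m: float, eps_r: float = 1.5) -> float:
    """Capacitance in farads of two conductive-cloth plates."""
    return eps_r * EPSILON_0 * area_m2 / gap_m

c_rest = capacitance(area_m2=25e-4, gap_m=5e-3)     # uncompressed sponge
c_pressed = capacitance(area_m2=25e-4, gap_m=2e-3)  # sponge compressed by touch

assert c_pressed > c_rest  # pressing increases the capacitance
```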
 FIG. 6 is a schematic diagram showing a configuration example of the selection conditions 312.
 The selection conditions 312 hold, for each way the user 5 touches the sensor section 2, the amount of change of the respective pressure sensors of the surface sensor 20s and the deep sensor 20i (the amount of change in the capacitance of the sensor 20) and the pattern of change of the respective pressure values of the surface sensor 20s and the deep sensor 20i (the pattern of change in the capacitance of the sensor 20).
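One plausible encoding of such selection conditions is a small table keyed by touch type, holding an allowed range of change for each sensor layer. This is a sketch under assumptions: the touch names, range values, and helper function are hypothetical, and the publication notes that the conditions need not be table-shaped at all.

```python
# Hypothetical selection-condition table: each touch type maps to
# allowed ranges for the surface (20s) and deep (20i) sensor
# changes, expressed as fractions of the sensors' dynamic range.
SELECTION_CONDITIONS = {
    "stroke": {"surface": (0.02, 0.3), "deep": (0.0, 0.02)},
    "pinch":  {"surface": (0.3, 1.0),  "deep": (0.0, 0.02)},
    "knead":  {"surface": (0.3, 1.0),  "deep": (0.3, 1.0)},
}

def matches(touch: str, surface_delta: float, deep_delta: float) -> bool:
    """True if the observed changes fall in the ranges for this touch type."""
    cond = SELECTION_CONDITIONS[touch]
    s_lo, s_hi = cond["surface"]
    d_lo, d_hi = cond["deep"]
    return s_lo <= surface_delta <= s_hi and d_lo <= deep_delta <= d_hi
```

A lookup layer like this keeps the thresholds as data rather than code, which matches the idea of downloading new condition sets together with purchased audio data.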
(Operation of the Information Processing Device)
 Next, the operation of this embodiment will be explained. FIG. 8 is a flowchart for explaining the operation of the information processing system.
 First, to use the body pillow 4, the user 5 turns on the power of the body pillow 4 (the interface device 1 and the sensor section 2) and launches the application 310 on the portable information terminal 3. This operation starts the interface device 1 and the application 310 of the portable information terminal 3 (S10). The interface device 1 and the portable information terminal 3 communicate with each other wirelessly. Alternatively, the control unit 10 of the interface device 1 may automatically establish communication with the portable information terminal 3 when the interface device 1 starts up and launch the application 310 (directly or in the background).
 Next, the user 5 uses the body pillow 4 by touching the sensor section 2, as shown in FIGS. 7A to 7C described later. The sensor 20 of the sensor section 2 outputs a signal according to how the user 5 touches it. The interface device 1 performs A/D conversion and the like on the output signal of the sensor section 2 and transmits it to the portable information terminal 3 as pressure data.
 The pressure data acquisition means 300 of the portable information terminal 3 receives the pressure data from the interface device 1 and acquires pressure values (S11).
 Next, the audio data selection means 301 of the portable information terminal 3 refers to the selection conditions 312 (S12) and determines the corresponding audio data from among the audio data 311 based on the pressure data acquired by the pressure data acquisition means 300 and the selection conditions 312 shown in FIG. 6 (S13). Specifically, as shown in FIGS. 7A to 7C below, the amount of change in the output value of the pressure sensor 20 varies with how the user 5 touches it, and the selection conditions 312 are defined by exploiting this.
 FIGS. 7A to 7C are schematic diagrams showing how the user 5 touches the sensor section 2 and the resulting amounts of change in the pressure sensors 20.
 For example, as shown in FIG. 7A, when the user 5 strokes the sensor section 2, the shape of the surface layer of the sensor section 2 changes only slightly, so the amount of change in the pressure value of the surface sensor 20s is small and the amount of change in the pressure value of the deep sensor 20i is approximately zero. Accordingly, the threshold for the amount of change in the pressure value of the surface sensor 20s is set to a small value (a value small relative to the dynamic range of the pressure sensor 20), and the threshold for the amount of change in the pressure value of the deep sensor 20i is set to a value that can be regarded as zero (a value at which noise and slight errors can be ignored); when these thresholds are satisfied, the audio data selection means 301 determines that the touch is the "stroking" type among the selection conditions 312.
 Also, for example, as shown in FIG. 7B, when the user 5 pinches the sensor section 2, the shape of the surface layer of the sensor section 2 changes greatly, so the amount of change in the pressure value of the surface sensor 20s is large and the amount of change in the pressure value of the deep sensor 20i is approximately zero. Accordingly, the threshold for the amount of change in the pressure value of the surface sensor 20s is set to a large value (a value large relative to the dynamic range of the pressure sensor 20), and the threshold for the amount of change in the pressure value of the deep sensor 20i is set to a value that can be regarded as zero (a value at which noise and slight errors can be ignored); when these thresholds are satisfied, the audio data selection means 301 determines that the touch is the "pinching" type among the selection conditions 312.
 Also, for example, as shown in FIG. 7C, when the user 5 kneads the sensor section 2, the shape of the entire sensor section 2 changes greatly, so the amount of change in the pressure value of the surface sensor 20s is large and the amount of change in the pressure value of the deep sensor 20i is also large. Accordingly, the threshold for the amount of change in the pressure value of the surface sensor 20s is set to a large value (a value large relative to the dynamic range of the pressure sensor 20), and the threshold for the amount of change in the pressure value of the deep sensor 20i is also set to a large value; when these thresholds are satisfied, the audio data selection means 301 determines that the touch is the "kneading" type among the selection conditions 312.
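The three threshold rules of FIGS. 7A to 7C can be condensed into a single classifier over the surface and deep sensor changes. The sketch below is only illustrative: the threshold constants (expressed as fractions of the sensors' dynamic range) and the function name are hypothetical assumptions, not values from the publication.

```python
# Hypothetical thresholds, as fractions of the pressure sensors'
# dynamic range: NOISE is the "treat as zero" band, LARGE marks a
# strong deformation.
NOISE = 0.02
LARGE = 0.3

def classify_touch(surface_delta: float, deep_delta: float) -> str:
    """Map surface/deep pressure-value changes to a touch type."""
    if deep_delta >= LARGE and surface_delta >= LARGE:
        return "knead"    # whole sensor section deforms (FIG. 7C)
    if deep_delta <= NOISE and surface_delta >= LARGE:
        return "pinch"    # large surface-only change (FIG. 7B)
    if deep_delta <= NOISE and surface_delta > NOISE:
        return "stroke"   # slight surface-only change (FIG. 7A)
    return "unknown"      # no prepared condition matched

assert classify_touch(0.1, 0.0) == "stroke"
assert classify_touch(0.5, 0.0) == "pinch"
assert classify_touch(0.5, 0.5) == "knead"
```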
 As described above, the amounts of change of the surface sensor 20s and the deep sensor 20i differ according to how the user 5 touches the sensor section 2, so touch methods other than those described above can also be handled by preparing selection conditions 312 for them in advance. Selection conditions 312 for various touch methods may also be prepared through machine learning or the like. Furthermore, the selection conditions 312 need only capture a correlation between the touch method and the pressure values; they are not limited to table-like information and may exist as learning-result information if obtained through machine learning or the like.
 Next, the audio data selection means 301 of the portable information terminal 3 acquires the corresponding audio data determined by the above operation from among the audio data 311 (S14).
 Next, the audio data transmission means 302 of the portable information terminal 3 transmits the audio data selected by the audio data selection means 301 to the interface device 1. The interface device 1 drives the speaker 40 based on the received audio data to output sound (S15).
 The user 5 may also operate the portable information terminal 3 to purchase, from the audio data database 6, audio data 311 to be played back as a reaction to the sensor section 2 (together with the set of selection conditions 312 for selecting that audio data 311), in addition to the data prepared in advance on the portable information terminal 3.
 The billing operation means 303 accesses the audio data database 6 in response to operations by the user 5 on an operation section (not shown) of the portable information terminal 3, executes processing such as payment, downloads the audio data, and stores it in the storage unit 31. Through these operations, the variety of sounds output from the speaker 40 via the interface device 1 can be increased.
(Effects of the Embodiment)
 According to the embodiment described above, the sensor section 2 is provided with a plurality of sensors, selection conditions 312 are prepared that define the amount and pattern of change of each sensor's output value according to how the user 5 touches, and the audio data 311 is selected and played back based on those selection conditions 312; as a result, sound can be emitted according to the way of touching.
[Other Embodiments]
 The present invention is not limited to the embodiments described above, and various modifications are possible without departing from the spirit of the present invention.
 For example, although the interface device 1, the sensor section 2, and the portable information terminal 3 are shown as separate devices in the above embodiment, the devices may be integrated in any combination, some or all of the functions of each device may be included in another device, or the system may be divided into four or more devices.
 The sensor section 2 described in the above embodiment may also be configured as described below.
 FIG. 9 is a schematic diagram showing another configuration example of the sensor section 2.
 The sensor section 2A is another configuration example of the sensor section 2 and includes, in addition to a surface sensor 20s (a pressure sensor provided in the surface layer) and a deep sensor 20i (a pressure sensor provided in the deep layer), a balloon-like adjustment member 21. The adjustment member 21 has a structure whose internal air volume can be increased or decreased; by changing the air volume, the bulge, hardness, and so on of the sensor section 2A can be varied, as can the distance between the surface sensor 20s and the deep sensor 20i.
 The body pillow 4 may also have a plurality of sensor sections 2, and different audio data may be played back according to how each sensor section 2 is touched. Each sensor section 2 may also be configured to be replaceable with another sensor section 2 of a different feel or size according to the preference of the user 5, and different audio data 311 may be selected according to the replacement with another sensor section 2.
 Although the functions of the means 300 to 303 of the control unit 30 are realized by a program in the above embodiment, all or some of the means may be realized by hardware such as an ASIC. The programs used in the above embodiment may also be provided stored on a recording medium such as a CD-ROM. Furthermore, the steps described in the above embodiment may be reordered, deleted, added to, and so on without changing the gist of the present invention.
 An information processing device, an information processing program, and an information processing system are provided that emit sound according to the way they are touched.
1: Interface device
2, 2A: Sensor section
3: Portable information terminal
4: Body pillow
5: User
6: Audio data database
10: Control unit
11: Audio output unit
20: Pressure sensor
20i: Deep sensor
20s: Surface sensor
21: Adjustment member
30: Control unit
31: Storage unit
40: Speaker
200a, 200b: Conductive cloth
201: Nonconductor
300: Pressure data acquisition means
301: Audio data selection means
302: Audio data transmission means
303: Billing operation means
310: Application
311: Audio data
312: Selection conditions

Claims (5)

  1.  An information processing device comprising a control unit that acquires pressure data from a plurality of pressure sensors provided in a sensor section, selects a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors, and reproduces audio data corresponding to the selected way of touching.
  2.  The information processing device according to claim 1, wherein the sensor section has pressure sensors in a surface layer and a deep layer of the sensor section, and the control unit selects the way of touching from the plurality of candidates, including at least stroking, pinching, and kneading, based on a combination of changes in the pressure data of the surface-layer pressure sensor and changes in the pressure data of the deep-layer pressure sensor.
  3.  The information processing device according to claim 1, wherein the audio data is audio data stored in advance or audio data downloaded from outside.
  4.  An information processing program comprising: acquisition means for acquiring pressure data from a plurality of pressure sensors provided in a sensor section; selection means for selecting a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors; and transmission means for outputting audio data corresponding to the selected way of touching for external playback.
  5.  An information processing system comprising: a sensor section provided with a plurality of pressure sensors; and a terminal that acquires pressure data from the plurality of pressure sensors provided in the sensor section, selects a way of touching the sensor section from a plurality of predetermined candidates based on a combination of changes in the pressure data of the pressure sensors, and reproduces audio data corresponding to the selected way of touching.

PCT/JP2023/016285 2022-07-14 2023-04-25 Information processing device, information processing program, and information processing system WO2024014089A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022113488A JP2024011485A (en) 2022-07-14 2022-07-14 Information processing device, information processing program, and information processing system
JP2022-113488 2022-07-14

Publications (1)

Publication Number Publication Date
WO2024014089A1 true WO2024014089A1 (en) 2024-01-18

Family

ID=89536437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/016285 WO2024014089A1 (en) 2022-07-14 2023-04-25 Information processing device, information processing program, and information processing system

Country Status (2)

Country Link
JP (1) JP2024011485A (en)
WO (1) WO2024014089A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5471192A (en) * 1994-01-24 1995-11-28 Dash; Glen Sound producing device stimulated by petting
JP2012532360A (en) * 2009-06-30 2012-12-13 ラムド リミテッド Processor interface
JP2014092963A (en) * 2012-11-05 2014-05-19 Fujitsu Ltd Touch state detection device, method and program


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KIZUKI ARCHU, BUTCHER U: "Introducing "VOPPAI, " a boobs mouse pad with a voice! Advance breast tasting event with illustrations", FIGURE NEWS, 30 April 2016 (2016-04-30), XP093128510, Retrieved from the Internet <URL:http://figurenews.blog.jp/archives/47458086.html> [retrieved on 20240207] *
SHINODA, HIROYUKI: "Intelligence in Human Skins", SYSTEMS, CONTROL AND INFORMATION, vol. 46, no. 1, 1 January 2002 (2002-01-01), pages 28 - 34, XP009552662, DOI: 10.11509/isciesci.46.1_28 *

Also Published As

Publication number Publication date
JP2024011485A (en) 2024-01-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23839267

Country of ref document: EP

Kind code of ref document: A1