WO2022059784A1 - Information provision device, information provision method, and program - Google Patents

Information provision device, information provision method, and program

Info

Publication number
WO2022059784A1
WO2022059784A1 (PCT/JP2021/034398)
Authority
WO
WIPO (PCT)
Prior art keywords: information, output, unit, user, environment
Prior art date
Application number
PCT/JP2021/034398
Other languages
French (fr)
Japanese (ja)
Inventor
隆幸 菅原
早人 中尾
規 高田
秀生 鶴
哲也 諏訪
翔平 大段
Original Assignee
株式会社Jvcケンウッド
Priority date
Filing date
Publication date
Priority claimed from JP2020157525A (published as JP2022051185A)
Priority claimed from JP2020157526A (published as JP2022051186A)
Priority claimed from JP2020157524A (published as JP2022051184A)
Application filed by 株式会社JVCケンウッド (JVCKENWOOD Corporation)
Publication of WO2022059784A1
Priority to US18/179,409 (published as US20230200711A1)

Classifications

    • A61B 5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A61B 5/378 Electroencephalography [EEG] using evoked responses; visual stimuli
    • A61B 5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B 5/1112 Global tracking of patients, e.g. by using GPS
    • A61B 5/1116 Determining posture transitions
    • A61B 5/38 Electroencephalography [EEG] using evoked responses; acoustic or auditory stimuli
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns

Definitions

  • the present invention relates to an information providing device, an information providing method, and a program.
  • Patent Document 1 describes a device that gives a user the feeling that a virtual object actually exists by presenting a plurality of sensory information to the user.
  • Patent Document 2 describes that the provision form and the provision timing of the information providing device are determined so that the sum of the evaluation functions representing the appropriateness of the information provision timing is maximized.
  • In an information providing device that provides information to a user, it is required to provide the information to the user appropriately.
  • the present embodiment aims to provide an information providing device, an information providing method, and a program capable of appropriately providing information to a user.
  • The information providing device according to one aspect of the present embodiment is an information providing device that provides information to a user, and includes: an output unit including a display unit that outputs a visual stimulus, a voice output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a stimulus for a sense different from the visual and auditory senses; an environment sensor that detects environmental information around the information providing device; and an output selection unit that selects any of the display unit, the voice output unit, and the sensory stimulus output unit based on the environmental information.
  • The information providing method according to one aspect of the present embodiment is an information providing method for providing information to a user, and includes: a step of detecting surrounding environmental information; and a step of selecting, based on the environmental information, any of a display unit that outputs a visual stimulus, a voice output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a stimulus for a sense different from the visual and auditory senses.
  • The program according to one aspect of the present embodiment causes a computer to execute: a step of detecting surrounding environmental information; and a step of selecting, based on the environmental information, any of a display unit that outputs a visual stimulus, a voice output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a stimulus for a sense different from the visual and auditory senses.
  • According to the present embodiment, information can be appropriately provided to the user.
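  • As a rough, non-authoritative illustration of this selection idea (the publication does not define an API), the following Python sketch maps hypothetical environmental readings to one of the three output units; the sensor values, thresholds, and brightness/noise criteria are assumptions for illustration only.

```python
from enum import Enum, auto

class OutputUnit(Enum):
    DISPLAY = auto()   # visual stimulus (display unit 26A)
    VOICE = auto()     # auditory stimulus (voice output unit 26B)
    SENSORY = auto()   # stimulus for another sense, e.g. tactile (sensory stimulus output unit 26C)

def select_output_unit(ambient_light_lux: float, ambient_noise_db: float) -> OutputUnit:
    """Pick an output unit from hypothetical environmental readings.

    The thresholds are placeholders; the publication only states that the
    selection is made based on the environmental information.
    """
    too_bright_for_display = ambient_light_lux > 50_000   # e.g. direct sunlight washes out the display
    too_noisy_for_voice = ambient_noise_db > 80           # e.g. a construction site masks speech

    if not too_bright_for_display:
        return OutputUnit.DISPLAY
    if not too_noisy_for_voice:
        return OutputUnit.VOICE
    return OutputUnit.SENSORY  # fall back to a tactile stimulus

if __name__ == "__main__":
    print(select_output_unit(ambient_light_lux=60_000, ambient_noise_db=85))  # OutputUnit.SENSORY
```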
  • FIG. 1 is a schematic diagram of an information providing device according to the present embodiment.
  • FIG. 2 is a diagram showing an example of an image displayed by the information providing device.
  • FIG. 3 is a schematic block diagram of the information providing device according to the present embodiment.
  • FIG. 4 is a flowchart illustrating the processing contents of the information providing device according to the present embodiment.
  • FIG. 5 is a table illustrating an example of an environmental score.
  • FIG. 6 is a table showing an example of an environmental pattern.
  • FIG. 7 is a schematic diagram illustrating an example of the level of the output specification of the content image.
  • FIG. 8 is a table showing the relationship between the environmental pattern, the target device, and the reference output specifications.
  • FIG. 9 is a graph showing an example of a pulse wave.
  • FIG. 10 is a table showing an example of the relationship between the user state and the output specification correction degree.
  • FIG. 11 is a table showing an example of output restriction necessity information.
  • FIG. 1 is a schematic diagram of an information providing device according to the present embodiment.
  • the information providing device 10 according to the present embodiment is a device that provides information to the user U by outputting a visual stimulus, an auditory stimulus, and a sensory stimulus to the user U.
  • the sensory stimulus here is a stimulus for a sensation different from the visual and auditory senses.
  • the sensory stimulus is a tactile stimulus, but is not limited to the tactile stimulus, and may be a stimulus for any sensation different from the visual sense and the auditory sense.
  • For example, the sensory stimulus may be a stimulus for the sense of taste, a stimulus for the sense of smell, or a stimulus for two or more of the senses of touch, taste, and smell.
  • As shown in FIG. 1, the information providing device 10 is a so-called wearable device worn on the body of the user U.
  • the information providing device 10 includes a device 10A worn on the eyes of the user U, a device 10B worn on the ears of the user U, and a device 10C worn on the arm of the user U.
  • The device 10A worn on the eyes of the user U includes a display unit 26A, described later, that outputs a visual stimulus to the user U (displays an image).
  • The device 10B worn on the ears of the user U includes a voice output unit 26B, described later, that outputs an auditory stimulus (voice) to the user U.
  • The device 10C worn on the arm of the user U includes a sensory stimulus output unit 26C, described later, that outputs a sensory stimulus to the user U.
  • the configuration of FIG. 1 is an example, and the number of devices and the mounting position on the user U may be arbitrary.
  • the information providing device 10 is not limited to a wearable device, and may be a device carried by the user U, for example, a so-called smartphone or tablet terminal.
  • FIG. 2 is a diagram showing an example of an image displayed by the information providing device.
  • the information providing device 10 provides the environment image PM to the user U through the display unit 26A.
  • the user U wearing the information providing device 10 can visually recognize the environment image PM.
  • The environment image PM is an image of the scenery that the user U would see if the user U were not wearing the information providing device 10; in other words, it can be said to be an image of the real objects within the field of view of the user U.
  • the information providing device 10 provides the environment image PM to the user U, for example, by transmitting external light (peripheral visible light) from the display unit 26A.
  • However, the information providing device 10 is not limited to letting the user U directly see the actual scenery; it may provide the environment image PM by displaying an image of the scenery on the display unit 26A. In this case, the user U visually recognizes the image of the scenery displayed on the display unit 26A as the environment image PM, and the information providing device 10 causes the display unit 26A to display, as the environment image PM, an image of the visual field range of the user U captured by the camera 20A described later. In FIG. 2, roads and buildings are included in the environment image PM, but this is merely an example.
  • the information providing device 10 causes the display unit 26A to display the content image PS.
  • the content image PS is an image other than the actual scenery within the field of view of the user U.
  • The content image PS may be of any content, as long as it is an image including information to be notified to the user U.
  • For example, the content image PS may be a distributed image such as a movie or a TV program, a navigation image showing directions to the user U, or a notification image indicating that a communication addressed to the user U, such as a telephone call or an e-mail, has been received, or it may be an image including all of these.
  • The content image PS need not include an advertisement, that is, information announcing a product or service.
  • the content image PS is displayed on the display unit 26A so as to be superimposed on the environment image PM provided through the display unit 26A.
  • the user U can visually recognize the image in which the content image PS is superimposed on the environment image PM.
  • the method of displaying the content image PS is not limited to superimposing as shown in FIG.
  • the method of displaying the content image PS, that is, the output specifications described later are set by, for example, environmental information, and will be described in detail later.
  • FIG. 3 is a schematic block diagram of the information providing device according to the present embodiment.
  • the information providing device 10 includes an environment sensor 20, a biological sensor 22, an input unit 24, an output unit 26, a communication unit 28, a storage unit 30, and a control unit 32.
  • the environment sensor 20 is a sensor that detects environmental information around the information providing device 10. It can be said that the environmental information around the information providing device 10 is information indicating under what kind of environment the information providing device 10 is placed. Further, since the information providing device 10 is attached to the user U, it can be paraphrased that the environment sensor 20 detects the environmental information around the user U.
  • the environment sensor 20 includes a camera 20A, a microphone 20B, a GNSS receiver 20C, an acceleration sensor 20D, a gyro sensor 20E, an optical sensor 20F, a temperature sensor 20G, and a humidity sensor 20H.
  • However, the environment sensor 20 may include any sensor that detects environmental information; for example, it may include at least one of the camera 20A, the microphone 20B, the GNSS receiver 20C, the acceleration sensor 20D, the gyro sensor 20E, the optical sensor 20F, the temperature sensor 20G, and the humidity sensor 20H, or it may include another sensor.
  • the camera 20A is an image pickup device, and captures the surroundings of the information providing device 10 by detecting visible light around the information providing device 10 (user U) as environmental information.
  • the camera 20A may be a video camera that captures images at predetermined frame rates.
  • The position and orientation of the camera 20A in the information providing device 10 are arbitrary; for example, the camera 20A may be provided in the device 10A shown in FIG. 1 with its imaging direction set to the direction in which the face of the user U is facing.
  • the camera 20A can image an object in the line of sight of the user U, that is, an object within the field of view of the user U.
  • the number of cameras 20A is arbitrary, and may be singular or plural. If there are a plurality of cameras 20A, the information in the direction in which the cameras 20A are facing is also acquired.
  • the microphone 20B is a microphone that detects voice (sound wave information) around the information providing device 10 (user U) as environmental information.
  • the position, orientation, number, and the like of the microphone 20B provided in the information providing device 10 are arbitrary. If there are a plurality of microphones 20B, information in the direction in which the microphones 20B are facing is also acquired.
  • the GNSS receiver 20C is a device that detects the position information of the information providing device 10 (user U) as environmental information.
  • the position information here is the earth coordinates.
  • the GNSS receiver 20C is a so-called GNSS (Global Navigation Satellite System) module, which receives radio waves from satellites and detects the position information of the information providing device 10 (user U).
  • the acceleration sensor 20D is a sensor that detects the acceleration of the information providing device 10 (user U) as environmental information, and detects, for example, gravity, vibration, and impact.
  • the gyro sensor 20E is a sensor that detects the rotation and orientation of the information providing device 10 (user U) as environmental information, and detects it using the principle of Coriolis force, Euler force, centrifugal force, and the like.
  • the optical sensor 20F is a sensor that detects the intensity of light around the information providing device 10 (user U) as environmental information.
  • the optical sensor 20F can detect the intensity of visible light, infrared rays, and ultraviolet rays.
  • the temperature sensor 20G is a sensor that detects the temperature around the information providing device 10 (user U) as environmental information.
  • the humidity sensor 20H is a sensor that detects the humidity around the information providing device 10 (user U) as environmental information.
  • the biosensor 22 is a sensor that detects the biometric information of the user U.
  • the biosensor 22 may be provided at any position as long as it can detect the biometric information of the user U.
  • The biometric information here is preferably not invariant information such as a fingerprint, but information whose value changes according to the state of the user U.
  • the biometric information here is information about the autonomic nerve of the user U, that is, information whose value changes regardless of the intention of the user U.
  • the biological sensor 22 includes the pulse wave sensor 22A and the brain wave sensor 22B, and detects the pulse wave and the brain wave of the user U as biological information.
  • the pulse wave sensor 22A is a sensor that detects the pulse wave of the user U.
  • the pulse wave sensor 22A may be, for example, a transmissive photoelectric sensor including a light emitting unit and a light receiving unit.
  • In the pulse wave sensor 22A, for example, the light emitting unit and the light receiving unit face each other with the fingertip of the user U interposed between them, and the light receiving unit receives the light transmitted through the fingertip; the pulse waveform may be measured by utilizing the fact that the blood flow increases as the pressure of the pulse wave increases.
  • the pulse wave sensor 22A is not limited to this, and may be any method capable of detecting a pulse wave.
  • the brain wave sensor 22B is a sensor that detects the brain wave of the user U.
  • The brain wave sensor 22B may have any configuration as long as it can detect the brain waves of the user U; in principle, it suffices if it can grasp waves such as α waves and β waves or the basic rhythm (background brain waves) appearing over the entire brain, and can detect an increase or decrease in the activity of the brain as a whole.
  • Unlike electroencephalogram measurement for medical purposes, it suffices to be able to roughly measure changes in the state of the user U; therefore, for example, a very simple surface electroencephalogram may be detected by attaching only two electrodes, one to the forehead and one to the ear.
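  • As a loose illustration of grasping the basic rhythm and an increase or decrease in overall activity (this is not an implementation given in the publication), the Python sketch below estimates α-band and β-band power from a single-channel trace with NumPy; the sampling rate, band limits, and synthetic input signal are assumptions.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Integrate the power spectrum of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum())

# Synthetic 4-second, 256 Hz single-channel trace: 10 Hz (alpha) + 20 Hz (beta) + noise.
fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.sin(2 * np.pi * 20 * t)
eeg += 2e-6 * np.random.randn(t.size)

alpha = band_power(eeg, fs, 8.0, 13.0)   # conventional alpha band
beta = band_power(eeg, fs, 13.0, 30.0)   # conventional beta band
print(f"alpha/beta power ratio: {alpha / beta:.2f}")  # rough proxy for relaxed vs. active state
```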
  • However, the biological sensor 22 is not limited to detecting pulse waves and brain waves as biological information; for example, it may detect at least one of pulse waves and brain waves. Further, the biological sensor 22 may detect biological information other than pulse waves and brain waves, for example, the amount of sweating or the size of the pupils. Further, the biological sensor 22 is not an essential configuration and need not be provided in the information providing device 10.
  • the input unit 24 is a device that accepts user operations, and may be, for example, a touch panel.
  • the output unit 26 is a device that outputs a stimulus for at least one of the five senses to the user U.
  • the output unit 26 includes a display unit 26A, a voice output unit 26B, and a sensory stimulation output unit 26C.
  • the display unit 26A is a display that outputs the visual stimulus of the user U by displaying an image, and can be paraphrased as a visual stimulus output unit.
  • the display unit 26A is a so-called HMD (Head Mount Display).
  • the display unit 26A displays the content image PS as described above.
  • the voice output unit 26B is a device (speaker) that outputs the auditory stimulus of the user U by outputting the voice, and can be paraphrased as the auditory stimulus output unit.
  • The sensory stimulus output unit 26C is a device that outputs a sensory stimulus to the user U; in the present embodiment, it outputs a tactile stimulus.
  • the sensory stimulus output unit 26C is a vibration motor such as a vibrator, and outputs a tactile stimulus to the user by physically operating such as vibration.
  • However, the type of the tactile stimulus is not limited to vibration or the like, and may be any kind of tactile stimulus.
  • the output unit 26 stimulates the visual sense, the auditory sense, and the senses different from the visual sense and the auditory sense (tactile sense in the present embodiment) among the five human senses.
  • However, the output unit 26 is not limited to outputting all of a visual stimulus, an auditory stimulus, and a stimulus for a sense different from the visual and auditory senses.
  • For example, the output unit 26 may output at least one of a visual stimulus, an auditory stimulus, and a stimulus for a sense different from the visual and auditory senses; it may at least output a visual stimulus (display an image); it may output either an auditory stimulus or a tactile stimulus in addition to the visual stimulus; and it may output a stimulus for another of the five senses (that is, at least one of a taste stimulus and an olfactory stimulus) in addition to at least one of the visual stimulus, the auditory stimulus, and the tactile stimulus.
  • the communication unit 28 is a module that communicates with an external device or the like, and may include, for example, an antenna or the like.
  • the communication method by the communication unit 28 is wireless communication in this embodiment, but the communication method may be arbitrary.
  • the communication unit 28 includes a content image receiving unit 28A.
  • the content image receiving unit 28A is a receiver that receives the content image data, which is the image data of the content image.
  • the content displayed by the content image may include audio and sensory stimuli different from visual and auditory senses.
  • the content image receiving unit 28A may receive voice data and sensory stimulation data as well as the image data of the content image as the content image data.
  • However, the content image data is not limited to being received by the content image receiving unit 28A in this way; for example, the content image data may be stored in the storage unit 30 in advance and read out from the storage unit 30 when needed.
  • The storage unit 30 is a memory that stores various information such as the calculation contents of the control unit 32 and programs, and includes, for example, at least one of a main storage device such as a RAM (Random Access Memory) or a ROM (Read Only Memory) and an external storage device such as an HDD (Hard Disk Drive).
  • the storage unit 30 stores the learning model 30A, the map data 30B, and the specification setting database 30C.
  • the learning model 30A is an AI model used to specify the environment in which the user U is located based on the environment information.
  • the map data 30B is data including position information of actual buildings and natural objects, and can be said to be data in which the earth coordinates and actual buildings and natural objects are associated with each other.
  • the specification setting database 30C is a database that includes information for determining the display specifications of the content image PS as described later. The processing using the learning model 30A, the map data 30B, the specification setting database 30C, and the like will be described later.
  • The learning model 30A, the map data 30B, the specification setting database 30C, and the program for the control unit 32 stored in the storage unit 30 may instead be stored in a recording medium readable by the information providing device 10. Further, these are not limited to being stored in the storage unit 30 in advance; the information providing device 10 may acquire them from an external device by communication when they are used.
  • the control unit 32 is an arithmetic unit, that is, a CPU (Central Processing Unit).
  • The control unit 32 includes an environment information acquisition unit 40, a biometric information acquisition unit 42, an environment identification unit 44, a user state identification unit 46, an output selection unit 48, an output specification determination unit 50, a content image acquisition unit 52, and an output control unit 54.
  • The control unit 32 reads out and executes a program (software) from the storage unit 30 to realize the environment information acquisition unit 40, the biometric information acquisition unit 42, the environment identification unit 44, the user state identification unit 46, the output selection unit 48, the output specification determination unit 50, the content image acquisition unit 52, and the output control unit 54, and executes their processing.
  • The control unit 32 may execute these processes with a single CPU, or may include a plurality of CPUs and execute the processes with the plurality of CPUs. Further, at least a part of the environment information acquisition unit 40, the biometric information acquisition unit 42, the environment identification unit 44, the user state identification unit 46, the output selection unit 48, the output specification determination unit 50, the content image acquisition unit 52, and the output control unit 54 may be realized by hardware.
  • the environment information acquisition unit 40 controls the environment sensor 20 to cause the environment sensor 20 to detect the environment information.
  • the environmental information acquisition unit 40 acquires the environmental information detected by the environment sensor 20.
  • the processing of the environment information acquisition unit 40 will be described later.
  • When the environmental information acquisition unit 40 is implemented as hardware, it can also be called an environmental information detector.
  • the biometric information acquisition unit 42 controls the biometric sensor 22 to cause the biometric sensor 22 to detect biometric information.
  • The biological information acquisition unit 42 acquires the biometric information detected by the biological sensor 22. The processing of the biological information acquisition unit 42 will be described later.
  • When the biometric information acquisition unit 42 is implemented as hardware, it can also be called a biometric information detector.
  • the biological information acquisition unit 42 is not an essential configuration.
  • the environment specifying unit 44 identifies the environment in which the user U is placed, based on the environment information acquired by the environment information acquisition unit 40.
  • the environment specifying unit 44 calculates the environment score, which is a score for specifying the environment, and specifies the environment by specifying the environment state pattern indicating the state of the environment based on the environment score. The processing of the environment specifying unit 44 will be described later.
  • the user state specifying unit 46 specifies the state of the user U based on the biometric information acquired by the biometric information acquisition unit 42. The processing of the user state specifying unit 46 will be described later.
  • the user state specifying unit 46 is not an essential configuration.
  • the output selection unit 48 selects a target device to be operated in the output unit 26 based on at least one of the environmental information acquired by the environmental information acquisition unit 40 and the biometric information acquired by the biometric information acquisition unit 42.
  • the processing of the output selection unit 48 will be described later.
  • When the output selection unit 48 is implemented as hardware, it may be called a sensory selector.
  • When the output specification determination unit 50, which will be described later, determines the output specification based on the environmental information or the like, the output selection unit 48 may be omitted. In this case, for example, the information providing device 10 may operate all of the output unit 26, that is, all of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C, without selecting a target device.
  • The output specification determination unit 50 determines the output specifications of the stimuli output by the output unit 26 (here, the visual stimulus, the auditory stimulus, and the tactile stimulus) based on at least one of the environmental information acquired by the environmental information acquisition unit 40 and the biological information acquired by the biological information acquisition unit 42.
  • In other words, it can be said that the output specification determination unit 50 determines the display specifications (output specifications) of the content image PS displayed by the display unit 26A based on at least one of the environmental information acquired by the environmental information acquisition unit 40 and the biometric information acquired by the biometric information acquisition unit 42.
  • the output specification is an index showing how the stimulus output by the output unit 26 is output, and the details will be described later. The processing of the output specification determination unit 50 will be described later.
  • the output specification determination unit 50 may not be provided.
  • the information providing device 10 may output the stimulus to the selected target device with an arbitrary output specification without determining the output specification from the environmental information or the like.
  • the content image acquisition unit 52 acquires content image data via the content image receiving unit 28A.
  • the output control unit 54 controls the output unit 26 to output.
  • the output control unit 54 causes the target device selected by the output selection unit 48 to output with the output specifications determined by the output specification determination unit 50.
  • For example, the output control unit 54 controls the display unit 26A so that the content image PS acquired by the content image acquisition unit 52 is displayed superimposed on the environment image PM, with the display specifications determined by the output specification determination unit 50.
  • When the output control unit 54 is implemented as hardware, it may be called a multi-sensory provider.
  • the information providing device 10 has the configuration as described above.
  • FIG. 4 is a flowchart illustrating the processing contents of the information providing device according to the present embodiment.
  • the information providing device 10 acquires the environmental information detected by the environment sensor 20 by the environment information acquisition unit 40 (step S10).
  • Specifically, the environmental information acquisition unit 40 acquires, from the camera 20A, image data capturing the periphery of the information providing device 10 (user U); from the microphone 20B, voice data around the information providing device 10 (user U); from the GNSS receiver 20C, the position information of the information providing device 10 (user U); from the acceleration sensor 20D, the acceleration information of the information providing device 10 (user U); from the gyro sensor 20E, the orientation information of the information providing device 10 (user U); from the optical sensor 20F, the intensity of light around the information providing device 10 (user U); from the temperature sensor 20G, the temperature information around the information providing device 10 (user U); and from the humidity sensor 20H, the humidity information around the information providing device 10 (user U).
  • the environmental information acquisition unit 40 sequentially acquires these environmental information at predetermined intervals.
  • the environmental information acquisition unit 40 may acquire each environmental information at the same timing, or may acquire each environmental information at different timings. Further, the predetermined period until the next environmental information is acquired may be arbitrarily set, and the predetermined period may be the same or different for each environmental information.
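  • A minimal sketch (assuming hypothetical sensor read functions, since the publication does not define an API) of sequentially acquiring environmental information at a predetermined period:

```python
import time
from typing import Any, Callable, Dict

# Hypothetical reader callables, one per sensor; each returns the latest reading.
SENSOR_READERS: Dict[str, Callable[[], Any]] = {
    "camera_frame": lambda: "<image bytes>",
    "microphone_chunk": lambda: "<audio bytes>",
    "gnss_position": lambda: (35.6812, 139.7671),   # (lat, lon) placeholder
    "acceleration": lambda: (0.0, 0.0, 9.8),        # m/s^2 placeholder
    "orientation": lambda: (0.0, 2.0, 0.0),         # roll/pitch/yaw in degrees, placeholder
    "light_lux": lambda: 300.0,
    "temperature_c": lambda: 24.5,
    "humidity_pct": lambda: 40.0,
}

def acquire_environment_info(period_s: float = 1.0, cycles: int = 3) -> None:
    """Poll every sensor once per period_s; each sensor could instead use its own period."""
    for _ in range(cycles):
        snapshot = {name: read() for name, read in SENSOR_READERS.items()}
        print(snapshot)  # in the device this snapshot would feed the environment specifying unit
        time.sleep(period_s)

if __name__ == "__main__":
    acquire_environment_info(period_s=0.1, cycles=2)
```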
  • After acquiring the environmental information, the information providing device 10 determines, by the environment specifying unit 44, whether the environment around the user U is in a dangerous state based on the environmental information (step S12).
  • the environment specifying unit 44 determines whether or not it is in a dangerous state based on the image around the information providing device 10 captured by the camera 20A.
  • the image of the periphery of the information providing device 10 captured by the camera 20A will be appropriately referred to as a peripheral image.
  • The environment specifying unit 44 identifies, for example, an object shown in the peripheral image, and determines whether the environment is in a dangerous state based on the type of the identified object. More specifically, the environment specifying unit 44 may determine that the environment is in a dangerous state when the object shown in the peripheral image is a preset specific object, and that it is not in a dangerous state when it is not a specific object.
  • The specific object may be set arbitrarily; for example, it may be an object that may pose a danger to the user U, such as a flame indicating a fire, a vehicle, or a sign indicating that construction is underway. Further, the environment specifying unit 44 may determine whether the environment is in a dangerous state based on a plurality of peripheral images captured continuously in time series. For example, the environment specifying unit 44 identifies an object in each of the plurality of peripheral images captured continuously in time series, and determines whether the object is a specific object and whether it is the same object in each image.
  • When the same specific object appears in the images, the environment specifying unit 44 determines whether the specific object shown in the peripheral image captured later in the time series is relatively larger in the image, that is, whether the specific object is approaching the user U. The environment specifying unit 44 then determines that the environment is in a dangerous state when the specific object shown in the later-captured peripheral image is larger, that is, when the specific object is approaching the user U. On the other hand, the environment specifying unit 44 determines that the environment is not in a dangerous state when the specific object shown in the later-captured peripheral image is not larger, that is, when the specific object is not approaching the user U.
  • In this way, the environment specifying unit 44 may determine whether the environment is in a dangerous state based on a single peripheral image, or based on a plurality of peripheral images captured continuously in time series.
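  • A sketch of the check that the same specific object grows larger across time-series images; the detector and tracker outputs, the object set, and the growth ratio are assumptions, since the publication leaves them unspecified.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str       # object type, e.g. "vehicle"
    track_id: int    # identity across frames (hypothetical tracker output)
    area_px: float   # bounding-box area in pixels

SPECIFIC_OBJECTS = {"vehicle", "flame", "construction_sign"}  # illustrative set of specific objects

def is_dangerous(frames: List[List[Detection]], growth_ratio: float = 1.2) -> bool:
    """Dangerous if a specific object appears noticeably larger in the later frame than the earlier one."""
    if len(frames) < 2:
        return False
    earlier, later = frames[0], frames[-1]
    earlier_by_id = {d.track_id: d for d in earlier if d.label in SPECIFIC_OBJECTS}
    for det in later:
        prev = earlier_by_id.get(det.track_id)
        if prev and det.label == prev.label and det.area_px > prev.area_px * growth_ratio:
            return True  # the same specific object is getting bigger, i.e. approaching the user
    return False

frames = [
    [Detection("vehicle", track_id=1, area_px=2_000)],
    [Detection("vehicle", track_id=1, area_px=3_500)],
]
print(is_dangerous(frames))  # True: the tracked vehicle grew in the image
```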
  • For example, the environment specifying unit 44 may switch the determination method according to the type of the object shown in the peripheral image: for some types of object it may determine the dangerous state from a single peripheral image, while for other types it may make the determination based on a plurality of peripheral images captured continuously in time series.
  • the environment specifying unit 44 may specify the object shown in the peripheral image by any method, but for example, the learning model 30A may be used to specify the object.
  • The learning model 30A is an AI model constructed by treating image data and information indicating the type of an object shown in the image as one data set, and learning a plurality of such data sets as teacher data.
  • The environment specifying unit 44 inputs the image data of the peripheral image into the trained learning model 30A, acquires information specifying the type of the object shown in the peripheral image, and thereby identifies the object.
  • the environment specifying unit 44 may determine whether or not it is in a dangerous state based on the position information acquired by the GNSS receiver 20C in addition to the peripheral image. In this case, the environment specifying unit 44 acquires the location information indicating the location of the user U based on the location information of the information providing device 10 (user U) acquired by the GNSS receiver 20C and the map data 30B.
  • the whereabouts information is information indicating what kind of place the user U (information providing device 10) is in. That is, for example, the whereabouts information is information that the user U is in the shopping center, information that the user U is on the road, and the like.
  • For example, the environment specifying unit 44 reads out the map data 30B, identifies the type of structure or natural object within a predetermined distance range from the current position of the user U, and specifies the whereabouts information from that structure or natural object; for example, when the current position of the user U overlaps with the coordinates of a shopping center, the whereabouts information specifies that the user U is in the shopping center. Then, the environment specifying unit 44 determines that the environment is in a dangerous state when the whereabouts information and the type of the object specified from the peripheral image have a specific relationship, and that it is not in a dangerous state when they do not. The specific relationship may be set arbitrarily; for example, a combination of an object and a location that may pose a danger when that object exists in that place may be set as a specific relationship.
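  • A sketch of the specific-relationship check between an object type and the user's whereabouts; the pairs in the table below are illustrative placeholders, not combinations listed in the publication.

```python
# Hypothetical table of (object type, whereabouts) pairs treated as dangerous combinations.
DANGEROUS_COMBINATIONS = {
    ("vehicle", "on_road"),
    ("train", "on_railroad_track"),
    ("flame", "in_building"),
}

def is_dangerous_combination(object_type: str, whereabouts: str) -> bool:
    """True when the detected object and the user's location form a preset specific relationship."""
    return (object_type, whereabouts) in DANGEROUS_COMBINATIONS

print(is_dangerous_combination("vehicle", "on_road"))             # True
print(is_dangerous_combination("vehicle", "in_shopping_center"))  # False
```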
  • the environment specifying unit 44 determines whether or not it is in a dangerous state based on the voice information acquired by the microphone 20B.
  • the audio information around the information providing device 10 acquired by the microphone 20B will be appropriately referred to as peripheral audio.
  • The environment specifying unit 44 identifies, for example, the type of sound included in the peripheral sound, and determines whether the environment is in a dangerous state based on the identified type of sound. More specifically, the environment specifying unit 44 may determine that the environment is in a dangerous state when the type of sound included in the peripheral sound is a preset specific sound, and that it is not in a dangerous state when it is not a specific sound.
  • The specific sound may be set arbitrarily; for example, it may be a sound that may indicate a danger to the user U, such as a sound indicating a fire, the sound of a vehicle, or a sound indicating that construction is underway.
  • The environment specifying unit 44 may specify the type of sound included in the peripheral sound by any method; for example, it may specify it using the learning model 30A.
  • In this case, the learning model 30A is an AI model constructed by treating voice data (for example, data indicating the frequency and intensity of the sound) and information indicating the type of the sound as one data set, and learning a plurality of such data sets as teacher data.
  • The environment specifying unit 44 inputs the voice data of the peripheral sound into the trained learning model 30A, acquires information specifying the type of sound included in the peripheral sound, and thereby specifies the sound type.
  • Further, the environment specifying unit 44 may determine whether the environment is in a dangerous state based on the position information acquired by the GNSS receiver 20C in addition to the peripheral sound. In this case, the environment specifying unit 44 acquires the whereabouts information indicating the location of the user U based on the position information of the information providing device 10 (user U) acquired by the GNSS receiver 20C and the map data 30B. The environment specifying unit 44 then determines that the environment is in a dangerous state when the whereabouts information and the type of sound specified from the peripheral sound have a specific relationship, and that it is not in a dangerous state when they do not. The specific relationship may be set arbitrarily; for example, a combination of a sound and a location that may pose a danger when that sound occurs in that place may be set as a specific relationship.
  • the environment specifying unit 44 determines the dangerous state based on the peripheral image and the peripheral sound.
  • the method for determining the dangerous state is not limited to the above and is arbitrary.
  • the environment specifying unit 44 may determine the dangerous state based on either the peripheral image or the peripheral sound.
  • For example, the environment specifying unit 44 may determine whether the environment is in a dangerous state based on at least one of the image of the periphery of the information providing device 10 captured by the camera 20A, the sound around the information providing device 10 detected by the microphone 20B, and the position information acquired by the GNSS receiver 20C. Further, in the present embodiment, the determination of the dangerous state is not essential and need not be carried out.
  • When it is determined that the environment is in a dangerous state, the information providing device 10 sets, by the output control unit 54, the danger notification content, which is the notification content for notifying the user of the dangerous state (step S14).
  • the information providing device 10 sets the danger notification content based on the content of the danger state.
  • the content of the dangerous state is information indicating what kind of danger is imminent, and is specified from the type of the object shown in the peripheral image, the type of sound included in the peripheral sound, and the like. For example, when the object is a vehicle and is approaching, the content of the dangerous state is "the vehicle is approaching”.
  • the content of the danger notification is information indicating the content of the dangerous state. For example, when the content of the dangerous state is that the vehicle is approaching, the content of the danger notification is information indicating that the vehicle is approaching.
  • the content of the danger notification differs depending on the type of the target device selected in step S26 described later.
  • When the display unit 26A is the target device, the danger notification content is the display content of the content image PS; that is, the danger notification content is displayed as the content image PS. In this case, for example, the danger notification content is image data indicating a message such as "Be careful, a car is approaching!".
  • When the voice output unit 26B is the target device, the danger notification content is the content of the voice output from the voice output unit 26B. In this case, for example, the danger notification content is voice data for uttering "A car is approaching. Please be careful.".
  • When the sensory stimulus output unit 26C is the target device, the danger notification content is the content of the sensory stimulus output from the sensory stimulus output unit 26C. In this case, for example, the danger notification content is a tactile stimulus that attracts the attention of the user U.
  • The setting of the danger notification content in step S14 may be executed at any timing after the dangerous state is determined in step S12 and before the danger notification content is output in step S38 described later; for example, it may be executed after the target device is selected in step S32 described later.
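  • A sketch of producing device-specific danger notification content; the message wording and the vibration pattern are placeholders chosen for illustration, not values from the publication.

```python
from typing import List, Union

def danger_notification_content(target_device: str, danger: str = "vehicle") -> Union[str, List[float]]:
    """Return notification content matched to the selected target device."""
    if target_device == "display":   # display unit 26A: shown as the content image PS
        return f"Caution: a {danger} is approaching!"
    if target_device == "voice":     # voice output unit 26B: spoken message
        return f"A {danger} is approaching. Please be careful."
    if target_device == "sensory":   # sensory stimulus output unit 26C: vibration pattern
        return [0.2, 0.1, 0.2, 0.1, 0.5]  # on/off durations in seconds (placeholder pattern)
    raise ValueError(f"unknown target device: {target_device}")

for device in ("display", "voice", "sensory"):
    print(device, "->", danger_notification_content(device))
```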
  • When it is determined in step S12 that the environment is not in a dangerous state, the information providing device 10 calculates various environmental scores based on the environmental information by the environment specifying unit 44, as shown in steps S16 to S22.
  • the environment score is a score for specifying the environment in which the user U (information providing device 10) is placed.
  • In the present embodiment, the environment specifying unit 44 calculates, as environment scores, the posture score (step S16), the whereabouts score (step S18), the movement score (step S20), and the safety score (step S22).
  • the order from step S16 to step S22 is not limited to this, and is arbitrary. Even when the danger notification content is set in step S14, various environmental scores are calculated as shown in steps S16 to S22. Hereinafter, the environmental score will be described more specifically.
  • FIG. 5 is a table illustrating an example of an environmental score.
  • the environment specifying unit 44 calculates an environment score for each environment category.
  • The environment categories indicate the types of environment of the user U, and in the present embodiment include the posture of the user U, the whereabouts of the user U, the movement of the user U, and the safety of the environment around the user U. Further, the environment specifying unit 44 divides each environment category into more specific subcategories, and calculates an environment score for each subcategory.
  • the environment specifying unit 44 calculates the posture score as the environment score for the posture category of the user U. That is, the posture score is information indicating the posture of the user U, and can be said to be information indicating what kind of posture the user U is in as a numerical value.
  • the environment specifying unit 44 calculates the posture score based on the environment information related to the posture of the user U among the plurality of types of environment information.
  • Environmental information related to the posture of the user U includes a peripheral image acquired by the camera 20A and the orientation of the information providing device 10 detected by the gyro sensor 20E.
  • the posture category of the user U includes a subcategory of standing and a subcategory of the face facing horizontally.
  • the environment specifying unit 44 calculates the posture score for the sub-category of standing state based on the peripheral image acquired by the camera 20A.
  • the posture score for the subcategory of the standing state can be said to be a numerical value indicating the degree of matching of the posture of the user U with the standing state.
  • the method of calculating the posture score for the sub-category of standing may be arbitrary, but for example, it may be calculated using the learning model 30A.
  • In this case, the learning model 30A is an AI model constructed by treating image data of the scenery within a person's field of view and information indicating whether that person is standing as one data set, and learning a plurality of such data sets as teacher data.
  • the environment specifying unit 44 acquires a numerical value indicating the degree of coincidence with the standing state and uses it as a posture score.
  • Although the degree of coincidence with the standing state is used here, it is not limited thereto, and may be, for example, the degree of coincidence with a sitting state or a sleeping state.
  • the environment specifying unit 44 calculates the posture score for the sub-category that the face orientation is horizontal based on the orientation of the information providing device 10 detected by the gyro sensor 20E.
  • the posture score for the subcategory in which the face orientation is horizontal can be said to be a numerical value indicating the degree of coincidence of the posture (face orientation) of the user U with respect to the horizontal direction.
  • The method of calculating the posture score for the subcategory of the face facing horizontally may be arbitrary. Although the degree of coincidence of the face orientation with the horizontal direction is used here, the degree of coincidence with respect to another direction may be used instead.
  • the environment specifying unit 44 sets information (here, the posture score) indicating the posture of the user U based on the peripheral image and the orientation of the information providing device 10.
  • However, the environment specifying unit 44 is not limited to using the peripheral image and the orientation of the information providing device 10 to set the information indicating the posture of the user U; it may use any environmental information, for example, at least one of the peripheral image and the orientation of the information providing device 10.
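  • A sketch of the two posture subscores: the standing score is shown as a call into a hypothetical image model, and the face-facing-horizontally score is derived from the pitch angle reported by the gyro sensor; the mapping from angle to score and the dummy model output are assumptions.

```python
def standing_score_from_image(image) -> float:
    """Placeholder for the learning-model inference described in the publication.
    A real implementation would return the model's degree of match with 'standing' in [0, 1]."""
    return 0.9  # dummy value for illustration

def horizontal_face_score(pitch_deg: float, tolerance_deg: float = 45.0) -> float:
    """Map the head pitch angle to a 0..1 degree of match with 'facing horizontally'.
    0 degrees pitch gives 1.0; +/- tolerance_deg or more gives 0.0 (an assumed linear falloff)."""
    return max(0.0, 1.0 - abs(pitch_deg) / tolerance_deg)

posture_scores = {
    "standing": standing_score_from_image(image=None),
    "face_horizontal": horizontal_face_score(pitch_deg=10.0),
}
print(posture_scores)  # e.g. {'standing': 0.9, 'face_horizontal': 0.777...}
```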
  • the environment specifying unit 44 calculates the whereabouts score as the environment score for the category of the whereabouts of the user U. That is, the whereabouts score is information indicating the whereabouts of the user U, and can be said to be information indicating what kind of place the user U is located in as a numerical value.
  • the environment specifying unit 44 calculates the location score based on the environment information related to the location of the user U among the plurality of types of environment information.
  • Examples of environmental information related to the whereabouts of the user U include the peripheral image acquired by the camera 20A, the position information of the information providing device 10 acquired by the GNSS receiver 20C, and the peripheral sound acquired by the microphone 20B.
  • In the present embodiment, the whereabouts category of the user U includes a subcategory of being on a train, a subcategory of being on a railroad track, and a subcategory of the surrounding sound being the sound inside a train.
  • the environment specifying unit 44 calculates the whereabouts score for the subcategory of being in the train based on the peripheral image acquired by the camera 20A.
  • the whereabouts score for the subcategory of being on the train can be said to be a numerical value indicating the degree of matching of the whereabouts of the user U with respect to the place of being on the train.
  • the method of calculating the whereabouts score for the subcategory of being in the train may be arbitrary, but for example, it may be calculated using the learning model 30A.
  • In this case, the learning model 30A is an AI model constructed by treating image data of the scenery within a person's field of view and information indicating whether that person is on a train as one data set, and learning a plurality of such data sets as teacher data.
  • the environment specifying unit 44 acquires a numerical value indicating the degree of coincidence with the location in the train and uses it as the location score.
  • Although the degree of coincidence with being on a train is calculated here, it is not limited thereto, and the degree of coincidence with being in any type of vehicle may be calculated.
  • the environment specifying unit 44 calculates the whereabouts score for the subcategory of being on the railroad track based on the position information of the information providing device 10 acquired by the GNSS receiver 20C.
  • the whereabouts score for the subcategory of being on the railroad track can be said to be a numerical value indicating the degree of matching of the whereabouts of the user U with the whereabouts of being on the railroad track.
  • the method of calculating the whereabouts score for the sub-category of being on the railroad track may be arbitrary, but for example, map data 30B may be used.
  • In this case, the environment specifying unit 44 reads out the map data 30B and, when the current position of the user U overlaps with the coordinates of a railroad track, calculates the whereabouts score so that the degree of coincidence of the user U's whereabouts with being on a railroad track is high. Although the degree of coincidence with being on a railroad track is calculated here, it is not limited thereto, and the degree of coincidence with the position of any kind of structure or natural object may be calculated.
  • the environment specifying unit 44 calculates the whereabouts score for the subcategory that it is the sound in the train based on the peripheral voice acquired by the microphone 20B.
  • the whereabouts score for the subcategory of sounds in the train can be said to be a numerical value indicating the degree of matching of the surrounding sounds with the sounds in the train.
  • The method of calculating the whereabouts score for the subcategory of the sound inside a train may be arbitrary; for example, in the same manner as the above-described method of determining a dangerous state from the peripheral sound, it may be determined whether the peripheral sound is a specific type of sound. Although the degree of coincidence with the sound inside a train is calculated here, it is not limited thereto, and the degree of coincidence with the sound of any place may be calculated.
  • the environment specifying unit 44 sets information indicating the whereabouts of the user U (here, the whereabouts score) based on the peripheral image, the peripheral voice, and the position information of the information providing device 10.
  • However, the environment specifying unit 44 is not limited to using the peripheral image, the peripheral sound, and the position information of the information providing device 10 to set the information indicating the whereabouts of the user U; it may use any environmental information, for example, at least one of the peripheral image, the peripheral sound, and the position information of the information providing device 10.
  • the environment specifying unit 44 calculates the movement score as the environment score for the movement category of the user U. That is, the movement score is information indicating the movement of the user U, and can be said to be information indicating how the user U is moving as a numerical value.
  • the environment specifying unit 44 calculates the motion score based on the environmental information related to the motion of the user U among the plurality of types of environmental information. Examples of the environmental information related to the movement of the user U include the acceleration information acquired by the acceleration sensor 20D.
  • The movement category of the user U includes a subcategory of the user U being in motion.
  • The environment specifying unit 44 calculates the movement score for the subcategory of being in motion based on the acceleration information of the information providing device 10 acquired by the acceleration sensor 20D.
  • the movement score for the subcategory of moving can be said to be a numerical value indicating the degree of agreement between the current situation of the user U and the movement of the user U.
  • the method of calculating the movement score for the subcategory of moving may be arbitrary, but for example, the movement score may be calculated from the change in acceleration in a predetermined period.
  • For example, when the acceleration changes during the predetermined period, the movement score is calculated so that the degree of agreement with the user U being in motion is high.
  • the position information of the information providing device 10 may be acquired and the movement score may be calculated based on the degree of change in the position in a predetermined period.
  • the speed can be predicted from the amount of change in position during a predetermined period, and the means of transportation such as a vehicle or walking can be specified.
  • Although the degree of coincidence with being in motion is calculated here, the degree of coincidence with, for example, moving at a predetermined speed may also be calculated.
  • the environment specifying unit 44 sets the information indicating the movement of the user U (here, the movement score) based on the acceleration information of the information providing device 10 and the position information of the information providing device 10.
  • The environment specifying unit 44 is not limited to using the acceleration information and the position information in order to set the information indicating the movement of the user U, and may use any environmental information; for example, at least one of the acceleration information and the position information may be used. A minimal sketch of the movement score is shown below.
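  • The sketch below illustrates one possible movement score calculation from the change in acceleration over a predetermined period, optionally combined with the change in position. The threshold and scaling factors are assumed values used only for illustration.

```python
def movement_score(acceleration_samples, positions=None, accel_threshold=0.2):
    """Sketch: movement score for the subcategory of the user U being in motion.

    acceleration_samples: accelerations (m/s^2) from the acceleration sensor 20D
                          over a predetermined period.
    positions:            optional (x, y) positions of the information providing
                          device 10 over the same period.
    accel_threshold:      change in acceleration treated as clearly "moving"
                          (an assumed value).
    """
    # Degree of change in acceleration over the predetermined period.
    change = max(acceleration_samples) - min(acceleration_samples)
    score_from_accel = min(100, int(100 * change / accel_threshold))

    if positions:
        # Speed can also be estimated from the change in position over the period,
        # which allows the means of transportation (vehicle, walking) to be guessed.
        dx = positions[-1][0] - positions[0][0]
        dy = positions[-1][1] - positions[0][1]
        distance = (dx * dx + dy * dy) ** 0.5
        score_from_position = min(100, int(10 * distance))  # illustrative scaling
        return max(score_from_accel, score_from_position)
    return score_from_accel
```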
  • the environment specifying unit 44 calculates the safety score as the environment score for the safety category of the user U. That is, the safety score is information indicating the safety of the user U, and can be said to be information indicating whether the user U is in a safe environment as a numerical value.
  • the environment specifying unit 44 calculates the safety score based on the environmental information related to the safety of the user U among the plurality of types of environmental information.
  • Environmental information related to the safety of the user U includes the peripheral image acquired by the camera 20A, the peripheral sound acquired by the microphone 20B, the light intensity information detected by the optical sensor 20F, and the temperature sensor 20G. Examples include the detected ambient temperature information and the ambient humidity information detected by the humidity sensor 20H.
  • The safety category of the user U includes a subcategory of the surroundings being bright, a subcategory of the amount of infrared rays and ultraviolet rays being appropriate, a subcategory of the temperature being suitable, a subcategory of the humidity being suitable, and a subcategory of a dangerous object being present.
  • the environment specifying unit 44 calculates a safety score for the subcategory of brightness based on the intensity of visible light in the surroundings acquired by the optical sensor 20F.
  • the safety score for the bright subcategory can be said to be a numerical value indicating the degree of matching of the surrounding brightness with sufficient brightness.
  • the method of calculating the safety score for the subcategory of bright may be arbitrary, but for example, it may be calculated based on the intensity of visible light detected by the optical sensor 20F. Further, for example, a safety score for the subcategory of brightness may be calculated based on the brightness of the image captured by the camera 20A. Although the degree of coincidence with respect to sufficient brightness is calculated here, the degree of coincidence with respect to any degree of brightness may be calculated without limitation.
  • the environment specifying unit 44 calculates the safety score for the subcategory that the amount of infrared rays and ultraviolet rays is appropriate based on the intensity of infrared rays and ultraviolet rays in the vicinity acquired by the optical sensor 20F.
  • the safety score for the subcategory that the amount of infrared rays and ultraviolet rays is appropriate can be said to be a numerical value indicating the degree of matching of the intensities of surrounding infrared rays and ultraviolet rays with the appropriate intensities of infrared rays and ultraviolet rays.
  • the method of calculating the safety score for the subcategory that the amount of infrared rays or ultraviolet rays is appropriate may be arbitrary, but for example, it may be calculated based on the intensity of infrared rays or ultraviolet rays detected by the optical sensor 20F. Although the degree of coincidence with respect to the appropriate intensity of infrared rays and ultraviolet rays is calculated here, the degree of coincidence with respect to any intensity of infrared rays and ultraviolet rays may be calculated without limitation.
  • the environment specifying unit 44 calculates a safety score for the subcategory that the temperature is suitable based on the ambient temperature acquired by the temperature sensor 20G.
  • the safety score for the subcategory of suitable temperature can be said to be a numerical value indicating the degree of agreement between the ambient temperature and the suitable temperature.
  • the method of calculating the safety score for the subcategory of suitable temperature may be arbitrary, but may be calculated based on, for example, the ambient temperature detected by the temperature sensor 20G. Although the degree of coincidence with respect to a suitable temperature is calculated here, the degree of coincidence with respect to any temperature may be calculated without limitation.
  • the environment specifying unit 44 calculates a safety score for the subcategory that the humidity is suitable based on the surrounding humidity acquired by the humidity sensor 20H.
  • the safety score for the subcategory of suitable humidity can be said to be a numerical value indicating the degree of agreement between the surrounding humidity and the suitable humidity.
  • the method of calculating the safety score for the subcategory of suitable humidity may be arbitrary, but may be calculated based on, for example, the ambient humidity detected by the humidity sensor 20H. Although the degree of coincidence with respect to suitable humidity is calculated here, the degree of coincidence with respect to any humidity may be calculated without limitation.
  • the environment specifying unit 44 calculates the safety score for the subcategory that there is a dangerous substance based on the peripheral image acquired by the camera 20A.
  • the safety score for the subcategory of dangerous goods can be said to be a numerical value indicating the degree of agreement with the presence of dangerous goods.
  • The method of calculating the safety score for the subcategory of a dangerous object being present may be arbitrary; for example, the determination may be made in the same manner as the above-described method of determining whether a dangerous state exists based on the peripheral image, that is, by determining whether an object included in the peripheral image is a specific object.
  • the environment specifying unit 44 calculates a safety score for the subcategory that there is a dangerous substance based on the peripheral voice acquired by the microphone 20B.
  • The method of calculating the safety score for the subcategory of a dangerous object being present may likewise be arbitrary; for example, the determination may be made in the same manner as the above-described method of determining whether a dangerous state exists based on the surrounding sound, that is, by determining whether the surrounding sound is a specific type of sound. A minimal sketch of the sensor-based safety scores is shown below.
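  • The following sketch shows one possible way to turn readings from the optical sensor 20F, the temperature sensor 20G, and the humidity sensor 20H into safety scores. The suitable ranges, tolerances, and the linear fall-off rule are illustrative assumptions, not values from the embodiment.

```python
def closeness_score(value, low, high, tolerance):
    """Map a sensor reading to 0..100: 100 inside the suitable range [low, high],
    decreasing linearly outside it (an illustrative scoring rule)."""
    if low <= value <= high:
        return 100
    distance = (low - value) if value < low else (value - high)
    return max(0, int(100 * (1 - distance / tolerance)))


def safety_scores(lux, temperature_c, humidity_pct):
    """Sketch of safety-category environment scores from the optical sensor 20F,
    the temperature sensor 20G, and the humidity sensor 20H."""
    return {
        "bright": closeness_score(lux, 300, 100000, 300),
        "suitable temperature": closeness_score(temperature_c, 18, 26, 15),
        "suitable humidity": closeness_score(humidity_pct, 40, 60, 40),
    }
```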
  • FIG. 5 illustrates the environmental scores calculated for the environment D1 to the environment D4.
  • Environments D1 to D4 indicate cases where the user U is in a different environment, and an environment score for each category (sub-category) in each environment is calculated.
  • the types of environment categories and subcategories shown in FIG. 5 are examples, and the values of the environment scores in environments D1 to D4 are also examples.
  • By expressing the information indicating the environment of the user U as a numerical value such as an environment score, the information providing device 10 can take errors and the like into consideration and can estimate the environment of the user U more accurately. In other words, it can be said that the information providing device 10 can accurately estimate the environment of the user U by classifying the environmental information into one of three or more degrees (here, the environment score).
  • The information indicating the environment of the user U that the information providing device 10 sets based on the environmental information is not limited to a value such as an environment score, and may be data in any format; for example, information indicating either of two options such as Yes or No may be used.
  • The information providing device 10 calculates the various environment scores by the methods described above in steps S16 to S22 shown in FIG. 4. As shown in FIG. 4, after the information providing device 10 calculates the environment scores, the environment specifying unit 44 determines an environment pattern indicating the environment in which the user U is placed based on each environment score (step S24). That is, the environment specifying unit 44 determines, based on the environment scores, what kind of environment the user U is in. While the environmental information and the environment scores are information indicating individual elements of the environment of the user U detected by the environment sensor 20, the environment pattern is set based on that information and can be said to be an index that comprehensively indicates the environment.
  • FIG. 6 is a table showing an example of an environmental pattern.
  • the environment specifying unit 44 selects an environment pattern that matches the environment in which the user U is placed from among the environment patterns corresponding to various environments, based on the environment score.
  • correspondence information (table) in which the value of the environmental score and the environmental pattern are associated with each other is recorded in the specification setting database 30C.
  • the environment specifying unit 44 determines the environment pattern based on the environment information and the corresponding information. Specifically, the environment specifying unit 44 selects an environment pattern associated with the calculated environment score value from the corresponding information, and selects it as the environment pattern to be adopted.
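  • As a rough illustration of selecting an environment pattern from the correspondence information, the sketch below matches the calculated environment scores against reference scores per pattern and picks the closest pattern. The nearest-pattern rule and the data layout are assumptions; the embodiment only states that the pattern associated with the calculated scores is selected from the correspondence information.

```python
def select_environment_pattern(environment_scores, correspondence_info):
    """Sketch: select the environment pattern whose reference scores best match
    the calculated environment scores.

    environment_scores:  dict of subcategory name -> calculated score (0..100).
    correspondence_info: dict of pattern name (e.g. "PT1") -> dict of subcategory
                         name -> reference score, standing in for the
                         correspondence table in the specification setting
                         database 30C.
    """
    def distance(reference_scores):
        return sum(abs(environment_scores.get(name, 0) - score)
                   for name, score in reference_scores.items())

    return min(correspondence_info,
               key=lambda pattern: distance(correspondence_info[pattern]))
```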
  • the environment pattern PT1 indicates that the user U is sitting in the train
  • the environment pattern PT2 indicates that the user U is walking on the sidewalk
  • The environment pattern PT3 indicates that the user U is walking on a dark sidewalk.
  • the environmental pattern PT4 indicates that the user U is shopping.
  • the environment score of "standing” is 10
  • the environment score of "face orientation is horizontal” is 100. Therefore, the user U sits down. It can be predicted that the face is turned almost horizontally.
  • the environmental score of "inside the train” is 90
  • the environmental score of "on the railroad track” is 100
  • the environmental score of "sound in the train” is 90
  • the environmental score of "bright” is 50, which means that it is darker than the outside because it is inside the train.
  • the environmental scores of "infrared rays and ultraviolet rays are appropriate", “suitable temperature”, and “suitable humidity” are 100, which can be said to be safe.
  • the environmental score of "there is a dangerous substance” is 10 in terms of images and 20 in terms of sound, so this is also considered safe. That is, in the environment D1, it is possible to estimate that the user U is in a safe and comfortable situation while moving in the train from each environment score, and the environment pattern of the environment D1 is It is said to be the environmental pattern PT1 indicating that the person is sitting on the train.
  • In the environment D2, the environment score of "standing" is 10 and the environment score of "face orientation is horizontal" is 90, so it can be predicted that the face of the user U is turned almost horizontally.
  • the environmental score of "inside the train” is 0, the environmental score of "on the railroad track” is 0, and the environmental score of "sound in the train” is 10, it can be seen that the user U is not on the train.
  • the environment D2 it can be confirmed that the user U is on the road based on the environment score of the place of residence.
  • the environment score of "moving" is 100, it can be seen that the user U is moving with a constant velocity or acceleration.
  • the environmental score of "bright” is 100, which indicates that it is a bright outdoor environment.
  • the "appropriate amount of infrared rays and ultraviolet rays” is 80, and it can be seen that there is a slight influence of ultraviolet rays and the like.
  • the environmental scores of "suitable temperature” and “suitable humidity” are 100, which can be said to be safe.
  • the environmental score of "there is a dangerous substance” is 10 in terms of images and 20 in terms of sound, so this is also considered safe. That is, in the environment D2, it is possible to estimate from each environment score that the user U is moving on the sidewalk on foot, is bright outdoors, and no dangerous substance is recognized, and the environment pattern of the environment D2 is. , It is said to be the environmental pattern PT2 indicating that the person is walking on the sidewalk.
  • In the environment D3, the environment score of "standing" is 0 and the environment score of "face orientation is horizontal" is 90, so it can be predicted that the face of the user U is turned almost horizontally.
  • the environmental score of "inside the train” is 5, the environmental score of "on the railroad track” is 0, and the environmental score of "sound in the train” is 5, it can be seen that the user U is not on the train.
  • the environment score of "moving" is 100, it can be seen that the user U is moving with a constant velocity or acceleration.
  • the environment score of "bright” is 10, which indicates that the environment is dark.
  • the "appropriate amount of infrared rays and ultraviolet rays” is 100, which shows that it is safe.
  • the environmental score of "suitable temperature” is 75, which can be said to be hotter or colder than the standard.
  • the environmental score of "there is a dangerous substance” is 90 in the image and 80 in the sound, it can be seen that something is making a sound and approaching.
  • the object can be determined from the sound and the image, and here it can be determined that the car is approaching from the front and the sound is the engine sound of the car.
  • Therefore, the environment pattern of the environment D3 is determined to be the environment pattern PT3, which indicates walking on a dark sidewalk.
  • In the environment D4, the environment score of "standing" is 0 and the environment score of "face orientation is horizontal" is 90, so it can be predicted that the face of the user U is turned almost horizontally.
  • the environmental score of "inside the train” is 20, the environmental score of "on the railroad track” is 0, and the environmental score of "sound in the train” is 5, it can be seen that the user U is not on the train.
  • In the environment D4, it can also be confirmed from the whereabouts scores that the user U is in a shopping center.
  • the environment score of "moving" is 80, it can be seen that the user U is moving slowly.
  • the environmental score of "bright” is 70, and it can be expected that the environment score is relatively bright but as bright as indoor lighting. Further, the "appropriate amount of infrared rays and ultraviolet rays” is 100, which shows that it is safe. Further, the environmental score of "suitable temperature” is 100, which is comfortable, but the environmental score of "suitable humidity” is 90, so it cannot be said that it is comfortable. In addition, the environmental score of "there is a dangerous substance” is 10 in terms of images and 20 in terms of sound, so this is also considered safe.
  • In the environment D4, it can thus be estimated from each environment score that the user U is walking in the shopping center, that the surroundings are relatively bright, and that no dangerous object is present, and the environment pattern of the environment D4 is determined to be the environment pattern PT4, which indicates shopping.
  • After determining the environment pattern, the information providing device 10 selects the target device to be operated from among the output units 26 and sets the reference output specification based on the environment pattern by the output selection unit 48 and the output specification determination unit 50, as shown in FIG. 4 (step S26).
  • The target device is the device to be operated among the output units 26. In the present embodiment, the output selection unit 48 selects the target device from among the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C based on the environmental information, more preferably based on the environment pattern. Since the environment pattern is information indicating the current environment of the user U, selecting the target device based on the environment pattern makes it possible to select an appropriate sensory stimulus according to the current environment of the user U.
  • The output selection unit 48 may determine, based on the environmental information, whether it is highly necessary for the user U to visually recognize the surrounding environment, and may determine whether the display unit 26A is the target device based on the determination result.
  • In this case, the output specification determination unit 50 selects the display unit 26A as the target device when the necessity of visually recognizing the surrounding environment is lower than a predetermined standard, and does not make the display unit 26A the target device when the necessity exceeds the predetermined standard. Whether the user U needs to visually recognize the surrounding environment may be determined arbitrarily; for example, it may be determined that the necessity is equal to or higher than the predetermined standard when the user U is moving or a dangerous object is present.
  • Similarly, the output selection unit 48 may determine, based on the environmental information, whether the user U has a high need to hear the surrounding sounds, and may determine whether the voice output unit 26B is the target device based on the determination result.
  • In this case, the output specification determination unit 50 selects the voice output unit 26B as the target device when the necessity of hearing the surrounding sounds is lower than a predetermined standard, and does not make the voice output unit 26B the target device when the necessity exceeds the predetermined standard. Whether the user U has a high need to hear the surrounding sounds may be determined arbitrarily; for example, it may be determined that the necessity exceeds the predetermined standard when the user U is moving or a dangerous object is present.
  • The output selection unit 48 may also determine, based on the environmental information, whether the user U may receive a tactile stimulus, and may determine whether the sensory stimulation output unit 26C is the target device based on the determination result. In this case, for example, the output specification determination unit 50 selects the sensory stimulation output unit 26C as the target device when it is determined that a tactile stimulus may be received, and does not make the sensory stimulation output unit 26C the target device when it is determined that a tactile stimulus should not be received. Whether the user U may receive a tactile stimulus may be determined arbitrarily; for example, it may be determined that the user U should not receive a tactile stimulus when the user U is moving or a dangerous object is present.
  • It is preferable that the output selection unit 48 selects the target device based on a table showing the relationship between the environment pattern and the target device, for example, as shown in FIG. 8 described later.
  • the output specification determination unit 50 determines the reference output specification, which is the reference output specification, based on the environmental information, more preferably based on the environmental pattern.
  • the output specification is an index showing how the stimulus output by the output unit 26 is output.
  • the output specification of the display unit 26A indicates how to display the content image PS to be output, and can be rephrased as the display specification.
  • Examples of the output specifications of the display unit 26A include the size (area) of the content image PS, the transparency of the content image PS, and the display content (content) of the content image PS.
  • the size of the content image PS refers to the area occupied by the content image PS in the screen of the display unit 26A.
  • The transparency of the content image PS refers to the degree to which the content image PS is transparent. The higher the transparency of the content image PS, the more the light incident on the eyes of the user U as the background image PA is transmitted through the content image PS, and the more clearly the background image PA superimposed on the content image PS is visually recognized.
  • the output specification determination unit 50 determines the size, transparency, and display content of the content image PS as the output specifications of the display unit 26A based on the environment pattern.
  • the output specifications of the display unit 26A are not limited to all of the size, transparency, and display content of the content image PS.
  • the output specification of the display unit 26A may be at least one of the size, transparency, and display content of the content image PS, or may be another.
  • The output specification determination unit 50 may determine, based on the environmental information, whether it is highly necessary for the user U to visually recognize the surrounding environment, and may determine the output specification (reference output specification) of the display unit 26A based on the determination result. In this case, the output specification determination unit 50 determines the output specification (reference output specification) of the display unit 26A so that the higher the necessity of visually recognizing the surrounding environment, the higher the visibility of the environment image PM. The visibility here refers to the ease of viewing the environment image PM. For example, as the necessity of visually recognizing the surrounding environment increases, the output specification determination unit 50 may reduce the size of the content image PS, increase the transparency of the content image PS, increase the restrictions on the display content, or combine these.
  • the distribution image may be excluded from the display content and the display content may be at least one of the navigation image and the notification image. Further, it may be arbitrarily determined whether or not the user U needs to visually recognize the surrounding environment, and examples thereof include the case where the user U is moving or there is a dangerous object.
  • FIG. 7 is a schematic diagram illustrating an example of the level of the output specification of the content image.
  • the output specification determination unit 50 may classify the output specifications of the content image PS into levels and select the level of the output specifications based on the environmental information.
  • the output specifications of the content image PS are set so that the visibility of the environment image PM is different for each level.
  • each level of the output specification is set so that the higher the level, the stronger the output stimulus and the lower the visibility of the environmental image PM. Therefore, the output specification determination unit 50 sets the level of the output specification higher as the necessity of visually recognizing the surrounding environment is lower.
  • At level 0, the content image PS is not displayed and only the environment image PM is visually recognized, so that the visibility of the environment image PM is the highest.
  • At level 1, the content image PS is displayed, but the display content of the content image PS is limited.
  • For example, at level 1, the distribution image is excluded from the display content, and the display content is at least one of the navigation image and the notification image.
  • Further, at level 1, the size of the content image PS is set small.
  • At level 1, the content image PS is superimposed and displayed on the environment image PM only when it is necessary to display the navigation image or the notification image. Therefore, the visibility of the environment image PM at level 1 is lower than at level 0 because the content image PS is displayed, but is kept high because the display content of the content image PS is limited.
  • At level 2, the display content of the content image PS is not limited, but the size of the content image PS is limited and is set small.
  • The visibility of the environment image PM at level 2 is lower than that at level 1 because the display content is not limited.
  • At level 3, the display content and size of the content image PS are not limited, and, for example, the content image PS is displayed on the entire screen of the display unit 26A.
  • the transparency of the content image PS is limited, and the transparency is set high. Therefore, at level 3, the translucent content image PS and the environment image PM superimposed on the content image PS are visually recognized.
  • the visibility of the environment image PM at level 3 is lower than that of level 2 because the size of the content image PS is not limited.
  • At level 4, the display content, size, and transparency of the content image PS are not limited, and, for example, the content image PS is displayed with zero transparency over the entire screen of the display unit 26A.
  • Since the transparency of the content image PS is zero (opaque), the environment image PM is not visually recognized, and only the content image PS is visually recognized. Therefore, the visibility of the environment image PM at level 4 is the lowest.
  • In this case, an image of the range falling within the visual field of the user U may be displayed as an environment image PM in a part of the screen of the display unit 26A.
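  • One possible encoding of the per-level display output specifications described above is sketched below. Only the ordering follows the description; the concrete size ratios and transparency values at each level are illustrative assumptions.

```python
# Sketch: display output specifications per level (level 0 = no content image,
# level 4 = opaque full screen). Numeric values are illustrative assumptions.
DISPLAY_SPEC_BY_LEVEL = {
    0: {"show_content": False, "size_ratio": 0.0, "transparency": 1.0,
        "content": []},                                       # only environment image PM
    1: {"show_content": True, "size_ratio": 0.1, "transparency": 0.5,
        "content": ["navigation", "notification"]},           # limited content, small size
    2: {"show_content": True, "size_ratio": 0.3, "transparency": 0.5,
        "content": ["navigation", "notification", "distribution"]},  # small, content unrestricted
    3: {"show_content": True, "size_ratio": 1.0, "transparency": 0.7,
        "content": ["navigation", "notification", "distribution"]},  # full screen, translucent
    4: {"show_content": True, "size_ratio": 1.0, "transparency": 0.0,
        "content": ["navigation", "notification", "distribution"]},  # full screen, opaque
}
```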
  • the output specification determination unit 50 also determines the output specifications of the voice output unit 26B and the sensory stimulation output unit 26C.
  • Examples of the output specifications (audio specifications) of the voice output unit 26B include the volume and the presence or absence and degree of acoustic effects. Acoustic effects refer to special effects such as surround sound and three-dimensional sound fields. The louder the volume and the greater the degree of acoustic effects, the stronger the auditory stimulus to the user U can be made.
  • The output specification determination unit 50 may determine, based on the environmental information, whether it is highly necessary for the user U to hear the surrounding sounds, and may determine the output specification (reference output specification) of the voice output unit 26B based on the determination result.
  • In this case, the output specification determination unit 50 determines the output specification (reference output specification) of the voice output unit 26B so that the lower the necessity of hearing the surrounding sounds, the louder the volume and the greater the degree of acoustic effects. Whether the user U has a high need to hear the surrounding sounds may be determined arbitrarily; examples include the case where the user U is moving or a dangerous object is present.
  • the output specification determination unit 50 may set the level of the output specification of the audio output unit 26B in the same manner as the output specification of the display unit 26A.
  • the output specifications of the sensory stimulus output unit 26C include the strength of the tactile stimulus and the frequency of outputting the tactile stimulus. The higher the intensity and frequency of the tactile stimulus, the stronger the degree of the tactile stimulus to the user U can be.
  • The output specification determination unit 50 may determine, based on the environmental information, whether the user U is in a state suitable for receiving a tactile stimulus, and may determine the output specification (reference output specification) of the sensory stimulation output unit 26C based on the determination result. In this case, the output specification determination unit 50 determines the output specification (reference output specification) of the sensory stimulation output unit 26C so that the more suitable the user U is for receiving a tactile stimulus, the stronger the tactile stimulus and the higher the frequency of the tactile stimulus.
  • the output specification determination unit 50 may set the level of the output specification of the sensory stimulation output unit 26C in the same manner as the output specification of the display unit 26A.
  • the output selection unit 48 and the output specification determination unit 50 determine the target device and the reference output specification based on the relationship between the environment pattern and the target device and the reference output specification.
  • FIG. 8 is a table showing the relationship between the environmental pattern, the target device, and the reference output specifications.
  • the output selection unit 48 and the output specification determination unit 50 determine the target device and the reference output specification based on the relational information indicating the relationship between the environment pattern and the target device and the reference output specification.
  • the relational information is information (table) in which the environment pattern, the target device, and the reference output specification are stored in association with each other, and is stored in, for example, the specification setting database 30C.
  • reference output specifications are set for each type of the output unit 26, that is, here, for each of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C.
  • the output selection unit 48 and the output specification determination unit 50 determine the target device and the reference output specification based on this related information and the environment pattern set by the environment identification unit 44. Specifically, the output selection unit 48 and the output specification determination unit 50 read out the relational information, and from the relational information, select the target device and the reference output specification associated with the environment pattern set by the environment identification unit 44. Select to determine the target device and reference output specifications.
  • For example, in the environment pattern PT1, all of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C are target devices, and the level of their reference output specifications is assigned to 4. The higher the level, the stronger the output stimulus.
  • In the environment pattern PT2, which indicates walking on the sidewalk, the situation is almost safe and comfortable, but forward attention is considered necessary because the user U is walking, so all of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C are target devices and the level of their reference output specifications is assigned to 3.
  • In the environment pattern PT3, the sensory stimulation output unit 26C is a target device, and the levels of the reference output specifications of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C are assigned to 0, 2, and 2, respectively.
  • In the environment pattern PT4, all of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C are target devices, and the level of their reference output specifications is assigned to 2.
  • the allocation of the target device and the reference output specification for each environment pattern in FIG. 8 is an example and may be set as appropriate.
  • the information providing device 10 sets the target device and the reference output specification based on the relationship between the environment pattern and the target device and the reference output specification set in advance.
  • However, the setting method of the target device and the reference output specification is not limited to this, and the information providing device 10 may set the target device and the reference output specification by any method based on the environmental information detected by the environment sensor 20.
  • Further, the information providing device 10 is not limited to selecting both the target device and the reference output specification based on the environmental information, and may select at least one of the target device and the reference output specification based on it.
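  • The sketch below encodes the relational information in the spirit of FIG. 8, using the example level allocations described above (PT1 = 4, PT2 = 3, PT3 = 0/2/2, PT4 = 2). Treating a level of 0 as "not a target device" and the dictionary layout itself are illustrative assumptions.

```python
# Sketch: environment pattern -> reference output specification level per output unit.
REFERENCE_OUTPUT_SPECS = {
    "PT1": {"display_26A": 4, "voice_26B": 4, "stimulus_26C": 4},  # sitting in a train
    "PT2": {"display_26A": 3, "voice_26B": 3, "stimulus_26C": 3},  # walking on a sidewalk
    "PT3": {"display_26A": 0, "voice_26B": 2, "stimulus_26C": 2},  # walking on a dark sidewalk
    "PT4": {"display_26A": 2, "voice_26B": 2, "stimulus_26C": 2},  # shopping
}


def select_targets_and_reference_specs(environment_pattern):
    """Return (target devices, reference output specification levels) for a pattern."""
    specs = REFERENCE_OUTPUT_SPECS[environment_pattern]
    target_devices = [unit for unit, level in specs.items() if level > 0]
    return target_devices, specs
```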
  • the information providing device 10 acquires the biometric information of the user U detected by the biometric sensor 22 by the biometric information acquisition unit 42 (step S28).
  • the biological information acquisition unit 42 acquires the pulse wave information of the user U from the pulse wave sensor 22A, and acquires the brain wave information of the user U from the brain wave sensor 22B.
  • FIG. 9 is a graph showing an example of a pulse wave. As shown in FIG. 9, the pulse wave is a waveform in which a peak called R wave WR appears at predetermined time intervals. The heart is dominated by the autonomic nervous system, and the pulse rate moves by generating electrical signals at the cellular level that trigger the movement of the heart.
  • The electrical activity of the heart is a repetition of depolarization (action potential) and repolarization (resting potential), and an electrocardiogram can be obtained by detecting this electrical activity from the body surface.
  • The pulse wave travels at a very high speed and is transmitted throughout the body almost at the same time as the heartbeat, so the heartbeat can be said to be synchronized with the pulse wave. Since the pulse wave generated by the heartbeat and the R wave of the electrocardiogram are synchronized, the R-R interval of the pulse wave can be considered equivalent to the R-R interval of the electrocardiogram.
  • Since the fluctuation of the pulse wave R-R interval can be evaluated as a time differential value, by calculating the differential value and detecting the magnitude of the fluctuation, it is possible to predict to some extent the activity and calmness of the autonomic nervous system of the living body, which are almost independent of the wearer's intention, that is, frustration due to mental disturbance, unpleasant feelings in a crowded train, and stress that arises in a relatively short time.
  • The EEG includes waves such as the α wave and the β wave, and an increase or decrease in the activity of the whole brain can be detected by detecting the basic rhythm (background EEG) activity that appears throughout the brain and detecting its amplitude.
  • Next, the information providing device 10 specifies, by the user state specifying unit 46, the user state indicating the mental state of the user U based on the biometric information of the user U, and calculates the output specification correction degree based on the user state (step S30).
  • the output specification correction degree is a value for correcting the reference output specification set by the output specification determination unit 50, and the final output specification is determined based on the reference output specification and the output specification correction degree.
  • FIG. 10 is a table showing an example of the relationship between the user state and the output specification correction degree.
  • the user state specifying unit 46 specifies the brain activity of the user U as the user state based on the brain wave information of the user U.
  • The user state specifying unit 46 may specify the brain activity by any method based on the brain wave information of the user U; for example, the degree of brain activity may be specified from a specific frequency region of the waveforms of the α wave and the β wave.
  • For example, the user state specifying unit 46 performs a fast Fourier transform on the time waveform of the brain wave and calculates the power spectrum amount of the high frequency part (for example, 10 Hz to 11.75 Hz) of the α wave.
  • The user state specifying unit 46 sets the brain activity to VA3 when the power spectrum amount of the high frequency part of the α wave is within a predetermined numerical range, sets the brain activity to VA2 when the power spectrum amount of the high frequency part of the α wave is within a predetermined numerical range lower than the range for VA3, and sets the brain activity to VA1 when the power spectrum amount of the high frequency part of the α wave is within a predetermined numerical range lower than the range for VA2.
  • It is assumed that the brain activity is higher in the order of VA1, VA2, and VA3.
  • The larger the power spectrum amount of the high frequency component of the β wave (for example, 18 Hz to 29.75 Hz), the higher the possibility that the user U is psychologically wary or agitated. Therefore, the power spectrum amount of the high frequency component of the β wave may also be used to specify the brain activity.
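  • As a rough illustration of the FFT-based classification described above, the sketch below sums the power spectrum in the 10 Hz to 11.75 Hz band and maps it to VA1/VA2/VA3. The sampling rate and the thresholds separating the classes are assumed values, not the predetermined ranges of the embodiment.

```python
import numpy as np


def brain_activity_from_eeg(eeg_samples, sampling_rate_hz=256.0,
                            va2_threshold=1.0e3, va3_threshold=4.0e3):
    """Sketch: classify brain activity into VA1/VA2/VA3 from the EEG waveform."""
    samples = np.asarray(eeg_samples, dtype=float)
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / sampling_rate_hz)
    band = (freqs >= 10.0) & (freqs <= 11.75)
    power = float(np.sum(np.abs(spectrum[band]) ** 2))

    # Following the description above: the larger the power in the high-frequency
    # part of the alpha wave, the higher the brain activity (VA1 < VA2 < VA3).
    if power >= va3_threshold:
        return "VA3"
    if power >= va2_threshold:
        return "VA2"
    return "VA1"
```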
  • the user state specifying unit 46 determines the output specification correction degree based on the brain activity of the user U.
  • the output specification correction degree is determined based on the output specification correction degree relation information indicating the relationship between the user state (brain activity in this example) and the output specification correction degree.
  • the output specification correction degree-related information is information (table) in which the user state and the output specification correction degree are stored in association with each other, and is stored in, for example, the specification setting database 30C.
  • the output specification correction degree is set for each type of the output unit 26, that is, here, for each of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C.
  • the user state specifying unit 46 determines the output specification correction degree based on the output specification correction degree related information and the specified user state. Specifically, the user state specifying unit 46 reads out the output specification correction degree related information, and from the output specification correction degree related information, outputs the output specification correction degree associated with the set brain activity of the user U. Select to determine the output specification correction degree.
  • In the example of FIG. 10, the output specification correction degrees of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C are each set to -1 for the brain activity VA3, to 0 for the brain activity VA2, and to 1 for the brain activity VA1.
  • the output specification correction degree here is set to a value that increases the output specification as the value increases. That is, the user state specifying unit 46 sets the output specification correction degree so that the lower the brain activity, the higher the output specification. It should be noted that increasing the output specifications here means strengthening the sensory stimulus, and the same applies thereafter.
  • the value of the output specification correction degree in FIG. 10 is an example and may be set as appropriate.
  • the user state specifying unit 46 specifies the mental stability of the user U as the user state based on the pulse wave information of the user U.
  • For example, the user state specifying unit 46 calculates, from the pulse wave information of the user U, the fluctuation of the interval length between R waves WR that are continuous in time series, that is, the differential value of the R-R interval, and specifies the mental stability of the user U based on the differential value of the R-R interval.
  • The user state specifying unit 46 specifies that the smaller the differential value of the R-R interval, that is, the less the interval length between the R waves WR fluctuates, the higher the mental stability of the user U.
  • In the example of FIG. 10, the user state specifying unit 46 classifies the mental stability into one of VB3, VB2, and VB1 from the pulse wave information of the user U.
  • For example, the user state specifying unit 46 sets the mental stability to VB3 when the differential value of the R-R interval is within a predetermined numerical range, sets the mental stability to VB2 when the differential value of the R-R interval is within a predetermined numerical range higher than the range for VB3, and sets the mental stability to VB1 when the differential value of the R-R interval is within a predetermined numerical range higher than the range for VB2. It is assumed that the mental stability is higher in the order of VB1, VB2, and VB3.
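  • A minimal sketch of the R-R interval fluctuation evaluation and the VB classification is shown below. Evaluating the fluctuation as a mean absolute difference between successive intervals and the two thresholds are illustrative assumptions.

```python
def mental_stability_from_pulse(r_wave_times_s, vb3_threshold=0.02, vb2_threshold=0.05):
    """Sketch: classify mental stability into VB1/VB2/VB3 from the pulse wave.

    r_wave_times_s: times (seconds) at which R waves were detected in the pulse
                    wave from the pulse wave sensor 22A.
    """
    rr_intervals = [t2 - t1 for t1, t2 in zip(r_wave_times_s, r_wave_times_s[1:])]
    diffs = [abs(b - a) for a, b in zip(rr_intervals, rr_intervals[1:])]
    fluctuation = sum(diffs) / len(diffs)

    # Following the description above: the smaller the fluctuation,
    # the higher the mental stability (VB1 < VB2 < VB3).
    if fluctuation <= vb3_threshold:
        return "VB3"
    if fluctuation <= vb2_threshold:
        return "VB2"
    return "VB1"
```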
  • the user state specifying unit 46 determines the output specification correction degree based on the output specification correction degree related information and the specified mental stability. Specifically, the user state specifying unit 46 reads out the output specification correction degree related information, and from the output specification correction degree related information, the output specification correction degree associated with the set mental stability of the user U. Select to determine the output specification correction degree.
  • In the example of FIG. 10, the output specification correction degrees of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C are each set to 1 for the mental stability VB3, to 0 for the mental stability VB2, and to -1 for the mental stability VB1. That is, the user state specifying unit 46 sets the output specification correction degree so that the higher the mental stability, the higher the output specification (the stronger the sensory stimulus).
  • the value of the output specification correction degree in FIG. 10 is an example and may be set as appropriate.
  • the user state specifying unit 46 sets the output specification correction degree based on the preset relationship between the user state and the output specification correction degree.
  • the method of setting the output specification correction degree is not limited to this, and the information providing device 10 may set the output specification correction degree by any method based on the biological information detected by the biological sensor 22. Further, the information providing device 10 calculates the output specification correction degree using both the brain activity specified from the electroencephalogram and the mental stability specified from the pulse wave, but is not limited thereto. For example, the information providing device 10 may calculate the output specification correction degree by using either the brain activity specified from the electroencephalogram or the mental stability specified from the pulse wave.
  • The information providing device 10 handles the biometric information as numerical values, and by estimating the user state based on the biometric information, errors in the biometric information and the like can be taken into consideration and the psychological state of the user U can be estimated more accurately.
  • In other words, the information providing device 10 can accurately estimate the psychological state of the user U by classifying the biometric information and the user state based on the biometric information into one of three or more degrees.
  • However, the information providing device 10 is not limited to classifying the biometric information and the user state based on the biometric information into three or more degrees, and may treat them as, for example, information indicating either Yes or No.
  • the information providing device 10 generates output restriction necessity information based on the biometric information of the user U by the user state specifying unit 46 (step S32).
  • FIG. 11 is a table showing an example of output restriction necessity information.
  • the output restriction necessity information is information indicating whether or not the output restriction of the output unit 26 is necessary, and can be said to be information indicating whether or not the operation of the output unit 26 is permitted.
  • the output restriction necessity information is generated for each output unit 26, that is, for each of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C.
  • The user state specifying unit 46 generates, based on the biometric information, output restriction necessity information indicating whether or not to permit the operation of each of the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C. More specifically, the user state specifying unit 46 generates the output restriction necessity information based on both the biometric information and the environmental information, that is, based on the user state set from the biometric information and the environment score calculated from the environmental information. In the example of FIG. 11, the user state specifying unit 46 generates the output restriction necessity information based on the brain activity as the user state and the whereabouts score for the subcategory of being on the railroad track as the environment score.
  • In the example of FIG. 11, the user state specifying unit 46 generates output restriction necessity information that disallows the use of the display unit 26A when a first condition is satisfied, namely that the whereabouts score for the subcategory of being on the railroad track is 100 and the brain activity is VA3 or VA2.
  • However, the first condition is not limited to the case where the whereabouts score for the subcategory of being on the railroad track is 100 and the brain activity is VA3 or VA2; for example, it may be the case where the position of the information providing device 10 is within a predetermined area and the brain activity is equal to or less than a predetermined brain activity threshold.
  • The predetermined area here may be, for example, on a railroad track or on a roadway.
  • The user state specifying unit 46 also generates output restriction necessity information based on the brain activity as the user state and the movement score for the subcategory of being in motion as the environment score.
  • In the example of FIG. 11, the user state specifying unit 46 generates output restriction necessity information that disallows the use of the display unit 26A when a second condition is satisfied, namely that the movement score for the subcategory of being in motion is 0 and the brain activity is VA3 or VA2.
  • However, the second condition is not limited to the case where the movement score for the subcategory of being in motion is 0 and the brain activity is VA3 or VA2; for example, it may be the case where the amount of change per unit time of the position of the information providing device 10 is equal to or less than a predetermined change amount threshold and the brain activity is equal to or less than a predetermined brain activity threshold.
  • That is, the user state specifying unit 46 generates output restriction necessity information that disallows the use of the display unit 26A when the biometric information and the environmental information satisfy a specific relationship, here, when the user state and the environment score satisfy at least one of the first condition and the second condition.
  • When neither the first condition nor the second condition is satisfied, the user state specifying unit 46 does not generate output restriction necessity information that disallows the use of the display unit 26A.
  • the generation of output restriction necessity information is not an essential process.
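  • The sketch below illustrates the first and second conditions described above as a simple function returning per-unit permission flags. The dictionary layout (True meaning operation permitted) is an illustrative assumption.

```python
def output_restriction_info(track_whereabouts_score, movement_score, brain_activity):
    """Sketch: output restriction necessity information for the output units.

    The use of the display unit 26A is disallowed when (the whereabouts score
    for being on the railroad track is 100 and the brain activity is VA3 or VA2)
    or (the movement score for being in motion is 0 and the brain activity is
    VA3 or VA2), following the description above.
    """
    high_activity = brain_activity in ("VA3", "VA2")
    first_condition = track_whereabouts_score == 100 and high_activity
    second_condition = movement_score == 0 and high_activity

    return {
        "display_26A": not (first_condition or second_condition),
        "voice_26B": True,
        "stimulus_26C": True,
    }
```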
  • the information providing device 10 acquires the image data of the content image PS by the content image acquisition unit 52 (step S34).
  • the image data of the content image PS is image data for displaying the content (display content) of the content image.
  • the content image acquisition unit 52 acquires image data of the content image from an external device via the content image reception unit 28A.
  • the content image acquisition unit 52 may acquire image data of the content image of the content (display content) according to the position (earth coordinates) of the information providing device 10 (user U).
  • the position of the information providing device 10 is specified by the GNSS receiver 20C.
  • the content image acquisition unit 52 receives the content related to the position.
  • The display of the content image PS can be controlled at the will of the user U; however, if display is set to be always permitted, the content image PS may appear at any time, place, and timing, which can be convenient but can also be annoying.
  • Therefore, information set by the user U indicating whether or not the content image PS may be displayed, display specifications, and the like may be recorded in the specification setting database 30C.
  • The content image acquisition unit 52 reads this information from the specification setting database 30C and controls the acquisition of the content image PS based on this information. Further, the same information as in the specification setting database 30C may be described, together with position information, on a site on the Internet, and the content image acquisition unit 52 may control the acquisition of the content image PS while checking its contents.
  • the step S34 for acquiring the image data of the content image PS is not limited to being executed before the step S36 described later, and may be executed at any timing before the step S38 described later.
  • the content image acquisition unit 52 may acquire audio data and tactile stimulus data related to the content image PS as well as the image data of the content image PS.
  • In this case, the voice output unit 26B outputs the audio data related to the content image PS as audio content, and the sensory stimulation output unit 26C outputs the tactile stimulus data related to the content image PS as tactile stimulus content.
  • the information providing device 10 determines the output specifications by the output specification determining unit 50 based on the reference output specifications and the output specification correction degree (step S36).
  • The output specification determination unit 50 determines the final output specification for the output unit 26 by correcting the reference output specification, which was set based on the environmental information, with the output specification correction degree, which was set based on the biometric information.
  • the formula for correcting the reference output specification with the output specification correction degree may be arbitrary.
  • In this way, the information providing device 10 corrects the reference output specification set based on the environmental information with the output specification correction degree set based on the biometric information, and thereby determines the final output specification.
  • However, the information providing device 10 is not limited to determining the output specification by correcting the reference output specification with the output specification correction degree, and may determine the output specification by any method using at least one of the environmental information and the biometric information. That is, the information providing device 10 may determine the output specification by any method based on both the environmental information and the biometric information, or may determine the output specification by any method based on either the environmental information or the biometric information.
  • For example, of the environmental information and the biometric information, the information providing device 10 may determine the output specification using only the above-described method of determining the reference output specification based on the environmental information, or may determine the output specification using only the above-described method of determining the output specification correction degree based on the biometric information.
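  • Since the correction formula may be arbitrary, the sketch below uses one simple possibility: adding the correction degrees from the example of FIG. 10 to the reference level and clamping the result to the level range 0 to 4. The addition-and-clamp rule and the dictionary layout are illustrative assumptions.

```python
# Sketch: output specification correction degrees following the example of FIG. 10.
CORRECTION_BY_BRAIN_ACTIVITY = {"VA3": -1, "VA2": 0, "VA1": 1}
CORRECTION_BY_MENTAL_STABILITY = {"VB3": 1, "VB2": 0, "VB1": -1}


def final_output_level(reference_level, brain_activity, mental_stability):
    """Correct the reference output specification level with the output
    specification correction degree (simple addition, clamped to 0..4)."""
    correction = (CORRECTION_BY_BRAIN_ACTIVITY[brain_activity]
                  + CORRECTION_BY_MENTAL_STABILITY[mental_stability])
    return max(0, min(4, reference_level + correction))
```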
  • In addition, the output selection unit 48 selects the target device based not only on the environment score but also on the output restriction necessity information. That is, even an output unit 26 selected as the target device based on the environment score in step S26 is excluded from the target devices if its use is disallowed in the output restriction necessity information. In other words, the output selection unit 48 selects the target device based on the output restriction necessity information and the environmental information. Furthermore, since the output restriction necessity information is set based on the biometric information, it can be said that the target device is set based on the biometric information and the environmental information. However, the output selection unit 48 is not limited to setting the target device based on both the biometric information and the environmental information, and may select the target device based on at least one of the biometric information and the environmental information.
  • (Output control) After setting the target devices and the output specifications and acquiring the image data of the content image PS and the like, the information providing device 10 causes the output control unit 54 to perform output for the target devices based on the output specifications (step S38). The output control unit 54 does not operate any output unit 26 that is not a target device.
  • the output control unit 54 causes the display unit 26A to display the content image PS based on the content image data acquired by the content image acquisition unit 52, so as to comply with the output specifications set for the display unit 26A.
  • the output specifications are set based on the environmental information and the biological information; therefore, by displaying the content image PS according to the output specifications, the content image PS can be displayed in a manner appropriate to the environment in which the user U is placed and the psychological state of the user U.
  • the output control unit 54 causes the audio output unit 26B to output audio based on the audio data acquired by the content image acquisition unit 52, so as to comply with the output specifications set for the audio output unit 26B.
  • the higher the brain activity of the user U or the lower the mental stability of the user U, the weaker the auditory stimulus becomes; therefore, when the user U is concentrating on something else or has no mental leeway, the risk of the user U being bothered by the audio can be reduced.
  • the lower the brain activity of the user U and the higher the mental stability of the user U, the stronger the auditory stimulus becomes, so that information can be appropriately conveyed to the user U by audio.
  • the output control unit 54 causes the sensory stimulus output unit 26C to output a tactile stimulus based on the tactile stimulus data acquired by the content image acquisition unit 52, so as to comply with the output specifications set for the sensory stimulus output unit 26C.
  • the higher the brain activity of the user U or the lower the mental stability of the user U, the weaker the tactile stimulus becomes; therefore, when the user U is concentrating on something else or has no mental leeway, the risk of the user U being bothered by the tactile stimulus can be reduced.
  • the lower the brain activity of the user U and the higher the mental stability of the user U, the stronger the tactile stimulus becomes, so that information can be appropriately conveyed to the user U by the tactile stimulus.
  • the output control unit 54 causes the target devices to output the danger notification content so as to comply with the set output specifications.
  • in this way, by setting the output specifications based on the environmental information and the biological information, the information providing device 10 can output sensory stimuli at a degree appropriate to the environment in which the user U is placed and the psychological state of the user U. Further, by selecting the target devices to be operated based on the environmental information and the biological information, the information providing device 10 can select sensory stimuli appropriate to the environment in which the user U is placed and the psychological state of the user U.
  • the information providing device 10 is not limited to using both the environmental information and the biological information; for example, only one of them may be used. That is, the information providing device 10 may, for example, select the target devices and set the output specifications based on the environmental information, or select the target devices and set the output specifications based on the biological information.
  • the information providing device 10 is a device that provides information to the user U, and includes an output unit 26, an environment sensor 20, an output specification determination unit 50, and an output control unit 54.
  • the output unit 26 includes a display unit 26A that outputs a visual stimulus, a voice output unit 26B that outputs an auditory stimulus, and a sensory stimulus output unit 26C that outputs a sensory stimulus different from the visual and auditory stimuli.
  • the environment sensor 20 detects environmental information around the information providing device 10.
  • the output specification determination unit 50 determines the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus, that is, the output specifications of the display unit 26A, the audio output unit 26B, and the sensory stimulus output unit 26C, based on the environmental information.
  • the output control unit 54 causes the output unit 26 to output visual stimuli, auditory stimuli, and sensory stimuli based on the output specifications.
  • since the information providing device 10 sets the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the environmental information, the visual stimulus, the auditory stimulus, and the sensory stimulus can be output in a balanced manner according to the environment in which the user U is placed. Therefore, according to the information providing device 10, information can be appropriately provided to the user U.
  • the information providing device 10 includes a plurality of environment sensors that detect different types of environmental information from each other, and an environment specifying unit 44.
  • the environment specifying unit 44 identifies an environment pattern that comprehensively indicates the current environment of the user U based on different types of environment information.
  • the output specification determination unit 50 determines the output specifications based on the environment pattern.
  • since the information providing device 10 sets the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the environment pattern specified from the plurality of types of environmental information, information can be provided more appropriately according to the environment in which the user U is placed.
  • the output specification determination unit 50 determines, as the output specifications of the visual stimulus, at least one of the size of the image displayed by the display unit 26A, the transparency of the image displayed by the display unit 26A, and the content (display content) of the image displayed by the display unit 26A.
  • the information providing device 10 can more appropriately provide visual information by determining these as the output specifications of the visual stimulus.
  • the output specification determination unit 50 determines, as the output specifications of the auditory stimulus, at least one of the volume of the audio output by the audio output unit 26B and the sound to be output.
  • the information providing device 10 can more appropriately provide auditory information by determining these as the output specifications of the auditory stimulus.
  • the sensory stimulus output unit 26C outputs a tactile stimulus as the sensory stimulus, and the output specification determination unit 50 determines, as the output specifications of the tactile stimulus, at least one of the strength of the tactile stimulus output by the sensory stimulus output unit 26C and the frequency with which the tactile stimulus is output.
  • the information providing device 10 can more appropriately provide tactile information by determining these as output specifications of the tactile stimulus.
  • the information providing device 10 is a device that provides information to the user U, and includes an output unit 26, a biosensor 22, an output specification determination unit 50, and an output control unit 54.
  • the output unit 26 includes a display unit 26A that outputs a visual stimulus, a voice output unit 26B that outputs an auditory stimulus, and a sensory stimulus output unit 26C that outputs a sensory stimulus different from the visual and auditory stimuli.
  • the biosensor 22 detects the biometric information of the user U.
  • the output specification determination unit 50 determines the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus, that is, the output specifications of the display unit 26A, the audio output unit 26B, and the sensory stimulus output unit 26C, based on the biological information.
  • the output control unit 54 causes the output unit 26 to output visual stimuli, auditory stimuli, and sensory stimuli based on the output specifications.
  • since the information providing device 10 sets the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the biological information, the visual stimulus, the auditory stimulus, and the sensory stimulus can be output in a balanced and appropriate manner according to the psychological state of the user U. Therefore, according to the information providing device 10, information can be appropriately provided to the user U.
  • the biological information includes information on the autonomic nerve of the user U
  • the output specification determination unit 50 determines the output specification based on the information on the autonomic nerve of the user U.
  • by setting the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the information about the autonomic nerves of the user U, the information providing device 10 can provide information more appropriately according to the psychological state of the user U.
  • the information providing device 10 is a device that provides information to the user U, and includes an output unit 26, an environment sensor 20, an output selection unit 48, and an output control unit 54.
  • the output unit 26 includes a display unit 26A that outputs a visual stimulus, a voice output unit 26B that outputs an auditory stimulus, and a sensory stimulus output unit 26C that outputs a sensory stimulus different from the visual and auditory stimuli.
  • the environment sensor 20 detects environmental information around the information providing device 10.
  • the output selection unit 48 selects the target device to be used from the display unit 26A, the voice output unit 26B, and the sensory stimulation output unit 26C based on the environmental information.
  • the output control unit 54 controls the target device.
  • by selecting the target devices based on the environmental information, the information providing device 10 can appropriately select which of the visual stimulus, the auditory stimulus, and the sensory stimulus to output according to the environment in which the user U is placed. Therefore, according to the information providing device 10, information can be appropriately provided to the user U according to the environment in which the user U is placed.
  • the information providing device 10 further includes a biological sensor 22 that detects the biological information of the user, and the output selection unit 48 selects the target devices based on the environmental information and the biological information of the user U.
  • the information providing device 10 can select an appropriate sensory stimulus according to the environment in which the user U is placed and the psychological state of the user U by selecting the target device to be operated based on the environmental information and the biological information.
  • the environment sensor 20 detects the position information of the information providing device 10 as the environmental information
  • the biosensor 22 detects the brain activity of the user U as the biometric information
  • the output selection unit 48 selects the display unit 26A as a target device when at least one of a first condition and a second condition is satisfied, the first condition being that the position of the information providing device 10 is within a predetermined area and the brain activity is equal to or less than a brain activity threshold value, and the second condition being that the amount of change in the position of the information providing device 10 per unit time is equal to or less than a predetermined change amount threshold value and the brain activity is equal to or less than the brain activity threshold value (see the sketch after this list).
  • when neither the first condition nor the second condition is satisfied, the output selection unit 48 does not select the display unit 26A as a target device. Since the information providing device 10 determines whether to operate the display unit 26A in this way, the visual stimulus can be appropriately output to the user U, for example, when the user U is not moving and is relaxed, or when the user U is in a vehicle and is relaxed.
  • the present embodiment is not limited by the contents of the embodiments described above.
  • the above-described components include those that can be easily conceived by those skilled in the art and those that are substantially the same, that is, those within a so-called equivalent range.
  • the above-described components can be combined as appropriate, and the configurations of the respective embodiments can also be combined. Further, various omissions, substitutions, or changes of the components can be made without departing from the gist of the above-described embodiments.
  • the information providing device, the information providing method, and the program of the present embodiment can be used, for example, for displaying an image.
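The correction of the reference output specifications in step S36 and the condition-based selection of the display unit 26A described in the list above can be illustrated with a short sketch. This is not the embodiment's implementation: the description leaves the correction formula and all threshold values open, so the multiplicative correction, the clamping to the range 0 to 1, the function names, and the constants below are assumptions made only for illustration.

```python
# Illustrative sketch only; names, formula, and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class OutputSpec:
    level: float  # e.g. image size/transparency, audio volume, or vibration strength, normalized to 0..1


def correct_output_spec(reference: OutputSpec, correction_degree: float) -> OutputSpec:
    """Correct the reference output specification (set from the environmental
    information) with the output specification correction degree (set from the
    biological information) to obtain the final output specification."""
    corrected_level = max(0.0, min(1.0, reference.level * correction_degree))
    return OutputSpec(level=corrected_level)


def select_display_unit(position_in_predetermined_area: bool,
                        position_change_per_unit_time: float,
                        brain_activity: float,
                        change_amount_threshold: float = 1.0,
                        brain_activity_threshold: float = 0.5) -> bool:
    """Return True when the display unit 26A should be selected as a target
    device, i.e. when the first or the second condition is satisfied."""
    first_condition = (position_in_predetermined_area
                       and brain_activity <= brain_activity_threshold)
    second_condition = (position_change_per_unit_time <= change_amount_threshold
                        and brain_activity <= brain_activity_threshold)
    return first_condition or second_condition
```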

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Physiology (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Acoustics & Sound (AREA)
  • General Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Pulmonology (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides information to a user appropriately. An information provision device (10) provides information to a user and comprises: an output unit (26) including a display unit (26A) that outputs visual stimulation, a sound output unit (26B) that outputs auditory stimulation, and a sensory stimulation output unit (26C) that outputs sensory stimulation different from the visual or auditory stimulation; an environment sensor (20) that detects environment information about the environment surrounding the information provision device (10); and an output selection unit (48) that selects any of the display unit (26A), the sound output unit (26B), or the sensory stimulation output unit (26C) on the basis of the environment information.

Description

Information providing device, information providing method, and program
The present invention relates to an information providing device, an information providing method, and a program.
In recent years, information devices have evolved significantly along with faster CPUs, high-definition display technology, compact and lightweight batteries, and the spread and widening bandwidth of wireless network environments. Such information devices now include not only smartphones, a typical example, but also so-called wearable devices worn by users. For example, Patent Document 1 describes a device that gives a user the sensation that a virtual object actually exists by presenting multiple types of sensory information to the user. Patent Document 2 describes determining the provision form and provision timing of an information providing device so that the sum of evaluation functions representing the appropriateness of the information provision timing is maximized.
Japanese Unexamined Patent Publication No. 2011-96171
Japanese Unexamined Patent Publication No. 2011-242219
An information providing device that provides information to a user is required to provide the information appropriately.
In view of the above problem, an object of the present embodiment is to provide an information providing device, an information providing method, and a program capable of appropriately providing information to a user.
An information providing device according to one aspect of the present embodiment is an information providing device that provides information to a user, and includes: an output unit including a display unit that outputs a visual stimulus, a voice output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a sensory stimulus different from the visual and auditory stimuli; an environment sensor that detects environmental information around the information providing device; and an output selection unit that selects any of the display unit, the voice output unit, and the sensory stimulus output unit based on the environmental information.
An information providing method according to one aspect of the present embodiment is a method for providing information to a user, and includes: a step of detecting surrounding environmental information; and a step of selecting, based on the environmental information, any of a display unit that outputs a visual stimulus, a voice output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a sensory stimulus different from the visual and auditory stimuli.
A program according to one aspect of the present embodiment causes a computer to execute an information providing method for providing information to a user, the method including: a step of detecting surrounding environmental information; and a step of selecting, based on the environmental information, any of a display unit that outputs a visual stimulus, a voice output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a sensory stimulus different from the visual and auditory stimuli.
According to the present embodiment, information can be appropriately provided to the user.
FIG. 1 is a schematic diagram of an information providing device according to the present embodiment.
FIG. 2 is a diagram showing an example of an image displayed by the information providing device.
FIG. 3 is a schematic block diagram of the information providing device according to the present embodiment.
FIG. 4 is a flowchart illustrating the processing contents of the information providing device according to the present embodiment.
FIG. 5 is a table illustrating an example of the environment score.
FIG. 6 is a table showing an example of the environment pattern.
FIG. 7 is a schematic diagram illustrating an example of the levels of the output specifications of the content image.
FIG. 8 is a table showing the relationship between the environment pattern, the target devices, and the reference output specifications.
FIG. 9 is a graph showing an example of a pulse wave.
FIG. 10 is a table showing an example of the relationship between the user state and the output specification correction degree.
FIG. 11 is a table showing an example of the output restriction necessity information.
Hereinafter, the present embodiment will be described in detail with reference to the drawings. The present embodiment is not limited to the embodiments described below.
(Information providing device)
FIG. 1 is a schematic diagram of the information providing device according to the present embodiment. The information providing device 10 according to the present embodiment is a device that provides information to a user U by outputting a visual stimulus, an auditory stimulus, and a sensory stimulus to the user U. The sensory stimulus here is a stimulus for a sense different from sight and hearing. In the present embodiment, the sensory stimulus is a tactile stimulus, but it is not limited to a tactile stimulus and may be a stimulus for any sense other than sight and hearing. For example, the sensory stimulus may be a stimulus for the sense of taste, a stimulus for the sense of smell, or stimuli for two or more of touch, taste, and smell. As shown in FIG. 1, the information providing device 10 is a so-called wearable device worn on the body of the user U. In the example of the present embodiment, the information providing device 10 includes a device 10A worn on the eyes of the user U, a device 10B worn on the ears of the user U, and a device 10C worn on an arm of the user U. The device 10A worn on the eyes of the user U includes a display unit 26A, described later, that outputs a visual stimulus to the user U (displays an image); the device 10B worn on the ears of the user U includes a voice output unit 26B, described later, that outputs an auditory stimulus (voice) to the user U; and the device 10C worn on the arm of the user U includes a sensory stimulus output unit 26C, described later, that outputs a sensory stimulus to the user U. However, the configuration of FIG. 1 is an example, and the number of devices and their mounting positions on the user U may be arbitrary. For example, the information providing device 10 is not limited to a wearable device and may be a device carried by the user U, such as a so-called smartphone or tablet terminal.
(Environment image)
FIG. 2 is a diagram showing an example of an image displayed by the information providing device. As shown in FIG. 2, the information providing device 10 provides an environment image PM to the user U through the display unit 26A, so that the user U wearing the information providing device 10 can visually recognize the environment image PM. In the present embodiment, the environment image PM is an image of the scenery that the user U would see if the user U were not wearing the information providing device 10; in other words, it is an image of real objects within the visual field of the user U. In the present embodiment, the information providing device 10 provides the environment image PM to the user U, for example, by transmitting external light (surrounding visible light) through the display unit 26A. That is, in the present embodiment, the user U directly views the image of the actual scenery through the display unit 26A. However, the information providing device 10 is not limited to letting the user U directly view the actual scenery, and may provide the environment image PM to the user U by displaying an image of the environment image PM on the display unit 26A. In that case, the user U views the image of the scenery displayed on the display unit 26A as the environment image PM, and the information providing device 10 causes the display unit 26A to display, as the environment image PM, an image within the visual field of the user U captured by a camera 20A described later. In FIG. 2, a road and buildings are included in the environment image PM, but this is merely an example.
(Content image)
As shown in FIG. 2, the information providing device 10 causes the display unit 26A to display a content image PS. The content image PS is an image other than the actual scenery within the visual field of the user U. The content image PS may have any content as long as it is an image including information to be notified to the user U. For example, the content image PS may be a distributed image such as a movie or a TV program, a navigation image showing directions to the user U, a notification image indicating that a communication to the user U such as a telephone call or an e-mail has been received, or an image including all of these. Note that the content image PS may be an image that does not include advertisements, that is, information announcing products or services.
In the example of FIG. 2, the content image PS is displayed on the display unit 26A so as to be superimposed on the environment image PM provided through the display unit 26A. As a result, the user U visually recognizes an image in which the content image PS is superimposed on the environment image PM. However, the way the content image PS is displayed is not limited to such superimposition as shown in FIG. 2. The way the content image PS is displayed, that is, the output specifications described later, is set based on, for example, the environmental information, and will be described in detail later.
(Configuration of the information providing device)
FIG. 3 is a schematic block diagram of the information providing device according to the present embodiment. As shown in FIG. 3, the information providing device 10 includes an environment sensor 20, a biological sensor 22, an input unit 24, an output unit 26, a communication unit 28, a storage unit 30, and a control unit 32.
(Environment sensor)
The environment sensor 20 is a sensor that detects environmental information around the information providing device 10. The environmental information around the information providing device 10 can be said to be information indicating the environment in which the information providing device 10 is placed. Since the information providing device 10 is worn by the user U, it can also be said that the environment sensor 20 detects environmental information around the user U.
The environment sensor 20 includes a camera 20A, a microphone 20B, a GNSS receiver 20C, an acceleration sensor 20D, a gyro sensor 20E, an optical sensor 20F, a temperature sensor 20G, and a humidity sensor 20H. However, the environment sensor 20 may include any sensor that detects environmental information; for example, it may include at least one of the camera 20A, the microphone 20B, the GNSS receiver 20C, the acceleration sensor 20D, the gyro sensor 20E, the optical sensor 20F, the temperature sensor 20G, and the humidity sensor 20H, or may include other sensors.
The camera 20A is an imaging device, and captures images of the surroundings of the information providing device 10 by detecting, as environmental information, visible light around the information providing device 10 (user U). The camera 20A may be a video camera that captures images at a predetermined frame rate. The position and orientation of the camera 20A in the information providing device 10 are arbitrary; for example, the camera 20A may be provided in the device 10A shown in FIG. 1 with its imaging direction matching the direction in which the face of the user U is facing. This allows the camera 20A to capture an object ahead of the line of sight of the user U, that is, an object within the visual field of the user U. The number of cameras 20A is also arbitrary and may be one or more. When there are a plurality of cameras 20A, information on the directions in which the cameras 20A are facing is also acquired.
The microphone 20B detects, as environmental information, sound (sound wave information) around the information providing device 10 (user U). The position, orientation, and number of microphones 20B provided in the information providing device 10 are arbitrary. When there are a plurality of microphones 20B, information on the directions in which the microphones 20B are facing is also acquired.
The GNSS receiver 20C is a device that detects, as environmental information, the position information of the information providing device 10 (user U). The position information here is Earth coordinates. In the present embodiment, the GNSS receiver 20C is a so-called GNSS (Global Navigation Satellite System) module, which receives radio waves from satellites and detects the position information of the information providing device 10 (user U).
The acceleration sensor 20D is a sensor that detects, as environmental information, the acceleration of the information providing device 10 (user U), and detects, for example, gravity, vibration, and impact.
The gyro sensor 20E is a sensor that detects, as environmental information, the rotation and orientation of the information providing device 10 (user U), using the principles of the Coriolis force, the Euler force, the centrifugal force, and the like.
The optical sensor 20F is a sensor that detects, as environmental information, the intensity of light around the information providing device 10 (user U). The optical sensor 20F can detect the intensity of visible light, infrared rays, and ultraviolet rays.
The temperature sensor 20G is a sensor that detects, as environmental information, the temperature around the information providing device 10 (user U).
The humidity sensor 20H is a sensor that detects, as environmental information, the humidity around the information providing device 10 (user U).
(Biological sensor)
The biological sensor 22 is a sensor that detects biological information of the user U. The biological sensor 22 may be provided at any position as long as it can detect the biological information of the user U. The biological information here is preferably not immutable information such as a fingerprint, but information whose value changes according to the state of the user U. More specifically, the biological information here is preferably information on the autonomic nerves of the user U, that is, information whose value changes regardless of the intention of the user U. Specifically, the biological sensor 22 includes a pulse wave sensor 22A and a brain wave sensor 22B, and detects the pulse wave and the brain waves of the user U as the biological information.
The pulse wave sensor 22A is a sensor that detects the pulse wave of the user U. The pulse wave sensor 22A may be, for example, a transmissive photoelectric sensor including a light emitting unit and a light receiving unit. In this case, the pulse wave sensor 22A is configured, for example, such that the light emitting unit and the light receiving unit face each other across a fingertip of the user U; the light receiving unit receives the light transmitted through the fingertip, and the pulse waveform is measured by utilizing the fact that the blood flow increases as the pulse wave pressure increases. However, the pulse wave sensor 22A is not limited to this, and may use any method capable of detecting a pulse wave.
The brain wave sensor 22B is a sensor that detects the brain waves of the user U. The brain wave sensor 22B may have any configuration as long as it can detect the brain waves of the user U. In principle, it is sufficient to grasp waves such as α waves and β waves and the basic rhythm (background brain waves) appearing over the entire brain, and to detect increases and decreases in the activity of the brain as a whole, so only a few electrodes need to be provided. In the present embodiment, unlike electroencephalography for medical purposes, it is sufficient to measure rough changes in the state of the user U; for example, it is also possible to detect a very simple surface electroencephalogram by attaching only two electrodes, to the forehead and the ear.
The biological sensor 22 is not limited to detecting pulse waves and brain waves as the biological information; for example, it may detect at least one of pulse waves and brain waves. The biological sensor 22 may also detect biological information other than pulse waves and brain waves, for example, the amount of sweating or the size of the pupils. The biological sensor 22 is not an essential component and may be omitted from the information providing device 10.
(Input unit)
The input unit 24 is a device that accepts user operations, and may be, for example, a touch panel.
(Output unit)
The output unit 26 is a device that outputs, to the user U, stimuli for at least one of the five senses. Specifically, the output unit 26 includes a display unit 26A, a voice output unit 26B, and a sensory stimulus output unit 26C. The display unit 26A is a display that outputs a visual stimulus to the user U by displaying an image, and can also be called a visual stimulus output unit. In the present embodiment, the display unit 26A is a so-called HMD (Head Mount Display). The display unit 26A displays the content image PS as described above. The voice output unit 26B is a device (speaker) that outputs an auditory stimulus to the user U by outputting voice, and can also be called an auditory stimulus output unit. The sensory stimulus output unit 26C is a device that outputs a sensory stimulus, in the present embodiment a tactile stimulus, to the user U. For example, the sensory stimulus output unit 26C is a vibration motor such as a vibrator, and outputs a tactile stimulus to the user by physical action such as vibration; however, the type of the tactile stimulus is not limited to vibration and may be arbitrary.
In this way, the output unit 26 stimulates, among the five human senses, sight, hearing, and a sense different from sight and hearing (touch in the present embodiment). However, the output unit 26 is not limited to outputting a visual stimulus, an auditory stimulus, and a stimulus for a sense different from sight and hearing. For example, the output unit 26 may output at least one of these; it may output at least a visual stimulus (display an image); it may output, in addition to a visual stimulus, either an auditory stimulus or a tactile stimulus; or it may output, in addition to at least one of a visual stimulus, an auditory stimulus, and a tactile stimulus, another sensory stimulus among the five senses (that is, at least one of a taste stimulus and an olfactory stimulus).
(Communication unit)
The communication unit 28 is a module that communicates with external devices and the like, and may include, for example, an antenna. The communication method of the communication unit 28 is wireless communication in the present embodiment, but any communication method may be used. The communication unit 28 includes a content image receiving unit 28A. The content image receiving unit 28A is a receiver that receives content image data, which is the image data of the content image. The content represented by the content image may also include audio and sensory stimuli different from sight and hearing. In this case, the content image receiving unit 28A may receive, as the content image data, audio data and sensory stimulus data together with the image data of the content image. Although the content image data is received by the content image receiving unit 28A in this way, it may, for example, be stored in the storage unit 30 in advance, and the content image receiving unit 28A may read the content image data from the storage unit 30.
(Storage unit)
The storage unit 30 is a memory that stores various kinds of information such as the computation contents and programs of the control unit 32, and includes at least one of, for example, a RAM (Random Access Memory), a main storage device such as a ROM (Read Only Memory), and an external storage device such as an HDD (Hard Disk Drive).
The storage unit 30 stores a learning model 30A, map data 30B, and a specification setting database 30C. The learning model 30A is an AI model used to identify the environment in which the user U is placed based on the environmental information. The map data 30B is data including position information of actual buildings, natural objects, and the like, and can be said to be data in which Earth coordinates are associated with actual buildings, natural objects, and the like. The specification setting database 30C is a database containing information for determining the display specifications of the content image PS, as described later. Processing using the learning model 30A, the map data 30B, the specification setting database 30C, and the like will be described later. The learning model 30A, the map data 30B, the specification setting database 30C, and the program for the control unit 32 stored in the storage unit 30 may be stored in a recording medium readable by the information providing device 10. Further, they are not limited to being stored in the storage unit 30 in advance; when they are used, the information providing device 10 may acquire them from an external device by communication.
(Control unit)
The control unit 32 is an arithmetic device, that is, a CPU (Central Processing Unit). The control unit 32 includes an environmental information acquisition unit 40, a biological information acquisition unit 42, an environment specifying unit 44, a user state specifying unit 46, an output selection unit 48, an output specification determination unit 50, a content image acquisition unit 52, and an output control unit 54. The control unit 32 realizes these units and executes their processing by reading a program (software) from the storage unit 30 and executing it. The control unit 32 may execute these processes with a single CPU, or may include a plurality of CPUs and execute the processes with the plurality of CPUs. At least a part of the environmental information acquisition unit 40, the biological information acquisition unit 42, the environment specifying unit 44, the user state specifying unit 46, the output selection unit 48, the output specification determination unit 50, the content image acquisition unit 52, and the output control unit 54 may be realized by hardware.
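As a structural sketch, the functional units listed above could be organized in software as methods of a single controller object invoked in the order suggested by the flowchart of FIG. 4. The class name, the method names, and the processing order shown here are illustrative assumptions, not the actual program stored in the storage unit 30.

```python
# Illustrative structural sketch only; all names are assumptions.
class ControlUnit:
    def acquire_environment_info(self):          # environmental information acquisition unit 40
        ...

    def acquire_biological_info(self):           # biological information acquisition unit 42
        ...

    def specify_environment(self, env_info):     # environment specifying unit 44
        ...

    def specify_user_state(self, bio_info):      # user state specifying unit 46
        ...

    def select_target_devices(self, env_info, bio_info):   # output selection unit 48
        ...

    def determine_output_specs(self, env_info, bio_info):  # output specification determination unit 50
        ...

    def acquire_content_image(self):              # content image acquisition unit 52
        ...

    def control_output(self, targets, specs, content):     # output control unit 54
        ...

    def provide_information(self):
        """One processing cycle, in the order suggested by the flowchart of FIG. 4."""
        env_info = self.acquire_environment_info()
        bio_info = self.acquire_biological_info()
        self.specify_environment(env_info)
        self.specify_user_state(bio_info)
        targets = self.select_target_devices(env_info, bio_info)
        specs = self.determine_output_specs(env_info, bio_info)
        content = self.acquire_content_image()
        self.control_output(targets, specs, content)
```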
The environmental information acquisition unit 40 controls the environment sensor 20 to detect environmental information, and acquires the environmental information detected by the environment sensor 20. The processing of the environmental information acquisition unit 40 will be described later. When the environmental information acquisition unit 40 is implemented as hardware, it can also be called an environmental information detector.
The biological information acquisition unit 42 controls the biological sensor 22 to detect biological information, and acquires the biological information detected by the biological sensor 22. The processing of the biological information acquisition unit 42 will be described later. When the biological information acquisition unit 42 is implemented as hardware, it can also be called a biological information detector. The biological information acquisition unit 42 is not an essential component.
The environment specifying unit 44 identifies the environment in which the user U is placed based on the environmental information acquired by the environmental information acquisition unit 40. The environment specifying unit 44 calculates an environment score, which is a score for specifying the environment, and specifies the environment by specifying, based on the environment score, an environment state pattern indicating the state of the environment. The processing of the environment specifying unit 44 will be described later.
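A minimal sketch of the environment score and environment state pattern idea is shown below. The actual scores and patterns are defined in FIG. 5 and FIG. 6, which are not reproduced here, so the sensor readings used, the scoring rules, and the pattern names are assumptions made only for illustration.

```python
# Illustrative sketch only; the real score items and patterns come from the
# tables of FIG. 5 and FIG. 6, which are not reproduced in this text.
def compute_environment_scores(noise_level_db: float, brightness_lux: float,
                               speed_m_per_s: float) -> dict:
    """Turn raw sensor readings into coarse per-item environment scores."""
    return {
        "noisy": noise_level_db > 70.0,
        "bright": brightness_lux > 1000.0,
        "moving": speed_m_per_s > 0.5,
    }


def specify_environment_pattern(scores: dict) -> str:
    """Map the combination of environment scores to an environment state pattern."""
    if scores["moving"] and scores["noisy"]:
        return "moving through a noisy place"
    if scores["moving"]:
        return "moving through a quiet place"
    if scores["noisy"]:
        return "staying in a noisy place"
    return "staying in a quiet place"
```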
The user state specifying unit 46 specifies the state of the user U based on the biological information acquired by the biological information acquisition unit 42. The processing of the user state specifying unit 46 will be described later. The user state specifying unit 46 is not an essential component.
The output selection unit 48 selects, from the output unit 26, the target devices to be operated based on at least one of the environmental information acquired by the environmental information acquisition unit 40 and the biological information acquired by the biological information acquisition unit 42. The processing of the output selection unit 48 will be described later. When the output selection unit 48 is implemented as hardware, it may be called a sensory selector. When the output specification determination unit 50 described later determines the output specifications based on the environmental information and the like, the output selection unit 48 may be omitted. In that case, for example, the information providing device 10 may operate all of the output unit 26, that is, all of the display unit 26A, the voice output unit 26B, and the sensory stimulus output unit 26C, without selecting target devices.
The output specification determination unit 50 determines the output specifications of the stimuli output by the output unit 26 (here, the visual stimulus, the auditory stimulus, and the tactile stimulus) based on at least one of the environmental information acquired by the environmental information acquisition unit 40 and the biological information acquired by the biological information acquisition unit 42. For example, it can be said that the output specification determination unit 50 determines the display specifications (output specifications) of the content image PS displayed by the display unit 26A based on at least one of the environmental information and the biological information. The output specifications are an index indicating how the stimuli output by the output unit 26 are to be output, and will be described in detail later. The processing of the output specification determination unit 50 will be described later. When the output selection unit 48 selects the target devices based on the environmental information and the like, the output specification determination unit 50 may be omitted. In that case, for example, the information providing device 10 may cause the selected target devices to output stimuli with arbitrary output specifications without determining the output specifications from the environmental information and the like.
The content image acquisition unit 52 acquires the content image data via the content image receiving unit 28A.
The output control unit 54 controls the output unit 26 to perform output. The output control unit 54 causes the target devices selected by the output selection unit 48 to perform output with the output specifications determined by the output specification determination unit 50. For example, the output control unit 54 controls the display unit 26A to display the content image PS acquired by the content image acquisition unit 52 so that it is superimposed on the environment image PM and conforms to the display specifications determined by the output specification determination unit 50. When the output control unit 54 is implemented as hardware, it may be called a multi-sensory sensation provider.
The information providing device 10 is configured as described above.
(Processing content)
Next, the processing performed by the information providing device 10, more specifically, the processing of causing the output unit 26 to perform output based on the environmental information and the biological information, will be described. FIG. 4 is a flowchart illustrating the processing contents of the information providing device according to the present embodiment.
(Acquisition of environmental information)
As shown in FIG. 4, the information providing device 10 acquires, by the environmental information acquisition unit 40, the environmental information detected by the environment sensor 20 (step S10). In the present embodiment, the environmental information acquisition unit 40 acquires, from the camera 20A, image data capturing the surroundings of the information providing device 10 (user U); from the microphone 20B, sound data of the surroundings of the information providing device 10 (user U); from the GNSS receiver 20C, the position information of the information providing device 10 (user U); from the acceleration sensor 20D, the acceleration information of the information providing device 10 (user U); from the gyro sensor 20E, the orientation information, that is, the attitude information, of the information providing device 10 (user U); from the optical sensor 20F, the intensity information of infrared rays and ultraviolet rays around the information providing device 10 (user U); from the temperature sensor 20G, the temperature information around the information providing device 10 (user U); and from the humidity sensor 20H, the humidity information around the information providing device 10 (user U). The environmental information acquisition unit 40 sequentially acquires these pieces of environmental information at predetermined intervals. The environmental information acquisition unit 40 may acquire the respective pieces of environmental information at the same timing or at different timings. The predetermined period until the next piece of environmental information is acquired may be set arbitrarily, and may be the same or different for each piece of environmental information.
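A minimal sketch of the periodic acquisition in step S10 is shown below, assuming a simple polling loop in which every sensor is read once per cycle; the embodiment itself allows different intervals and timings per sensor. The sensor objects and their read() method are assumptions made only for illustration.

```python
# Illustrative sketch only; sensor objects and read() are assumptions.
import time


def acquire_environment_info(sensors: dict, interval_s: float = 1.0):
    """Periodically read every environment sensor and yield one snapshot per
    cycle (step S10). `sensors` maps a name such as "camera" or "gnss" to an
    object exposing a read() method."""
    while True:
        snapshot = {name: sensor.read() for name, sensor in sensors.items()}
        yield snapshot          # e.g. {"camera": image, "microphone": waveform, "gnss": position, ...}
        time.sleep(interval_s)  # predetermined period until the next acquisition
```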
 (Determination of a dangerous state)
 After acquiring the environmental information, the information providing device 10 determines, with the environment specifying unit 44 and based on the environmental information, whether a dangerous state exists, that is, whether the environment around the user U is dangerous (step S12).
 The environment specifying unit 44 determines whether a dangerous state exists based on the image of the surroundings of the information providing device 10 captured by the camera 20A. Hereinafter, this image is referred to as a peripheral image where appropriate. The environment specifying unit 44, for example, identifies an object shown in the peripheral image and determines whether a dangerous state exists based on the type of the identified object. More specifically, the environment specifying unit 44 may determine that a dangerous state exists when the object shown in the peripheral image is a preset specific object, and that no dangerous state exists when it is not a specific object. The specific object may be set arbitrarily; for example, it may be any object that can endanger the user U, such as a flame indicating a fire, a vehicle, or a sign indicating construction work. The environment specifying unit 44 may also determine whether a dangerous state exists based on a plurality of peripheral images captured continuously in time series. For example, the environment specifying unit 44 identifies an object in each of the plurality of peripheral images and determines whether those objects are specific objects and are the same object. When the same specific object appears in the images, the environment specifying unit 44 determines whether the specific object appears relatively larger in peripheral images captured later in the time series, that is, whether the specific object is approaching the user U. The environment specifying unit 44 then determines that a dangerous state exists when the specific object appears larger in later peripheral images, that is, when the specific object is approaching the user U, and that no dangerous state exists when it does not appear larger, that is, when the specific object is not approaching the user U.
 In this way, the environment specifying unit 44 may determine whether a dangerous state exists based on a single peripheral image, or based on a plurality of peripheral images captured continuously in time series. For example, the environment specifying unit 44 may switch the determination method according to the type of the object shown in the peripheral image. When an image shows a specific object whose danger can be judged from a single peripheral image, such as a flame indicating a fire, the environment specifying unit 44 may determine from that single peripheral image that a dangerous state exists. When an image shows a specific object whose danger cannot be judged from a single peripheral image, such as a vehicle, the environment specifying unit 44 may determine the dangerous state based on a plurality of peripheral images captured continuously in time series.
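 A minimal sketch of the time-series check described above follows: if the apparent size of the same specific object grows across consecutive peripheral images, it is treated as approaching the user. The detection format (label, bounding-box area), the label sets, and the growth threshold are illustrative assumptions.

 # Sketch: dangerous-state decision from one image or a time series of detections.
 from typing import List, Tuple

 Detection = Tuple[str, float]  # (object label, bounding-box area in pixels)

 SINGLE_IMAGE_DANGER = {"fire"}    # dangerous from a single image
 TIME_SERIES_DANGER = {"vehicle"}  # dangerous only if approaching

 def is_dangerous(history: List[Detection], growth_ratio: float = 1.2) -> bool:
     """history holds detections of the same object, oldest first."""
     if not history:
         return False
     label, _ = history[-1]
     if label in SINGLE_IMAGE_DANGER:
         return True
     if label in TIME_SERIES_DANGER and len(history) >= 2:
         areas = [area for _, area in history]
         # Approaching if the apparent area keeps growing by a margin.
         return all(later >= earlier * growth_ratio
                    for earlier, later in zip(areas, areas[1:]))
     return False

 # A vehicle whose apparent area grows over three frames is judged dangerous.
 print(is_dangerous([("vehicle", 1000.0), ("vehicle", 1300.0), ("vehicle", 1700.0)]))  # True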
 The environment specifying unit 44 may identify the object shown in the peripheral image by any method; for example, it may identify the object using the learning model 30A. In this case, the learning model 30A is, for example, an AI model constructed by treating image data and information indicating the type of object shown in that image as one data set and training on a plurality of such data sets as teacher data. The environment specifying unit 44 inputs the image data of the peripheral image into the trained learning model 30A, obtains information specifying the type of object shown in the peripheral image, and thereby identifies the object.
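 The sketch below shows one way such a trained model could be wrapped for object identification. The label vocabulary and the dummy model are assumptions; any classifier trained on (image, object type) pairs could be substituted for learning model 30A.

 # Sketch: wrapping a trained image classifier for object identification.
 from typing import Callable, Sequence

 def identify_object(image_bytes: bytes,
                     model: Callable[[bytes], Sequence[float]],
                     labels: Sequence[str]) -> str:
     """Return the object type whose score from the model is highest."""
     scores = model(image_bytes)
     best = max(range(len(labels)), key=lambda i: scores[i])
     return labels[best]

 # Usage with a dummy model that always favours "vehicle".
 labels = ["fire", "vehicle", "construction sign", "none"]
 dummy_model = lambda img: [0.05, 0.80, 0.10, 0.05]
 print(identify_object(b"", dummy_model, labels))  # "vehicle"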
 In addition to the peripheral image, the environment specifying unit 44 may determine whether a dangerous state exists based also on the position information acquired by the GNSS receiver 20C. In this case, the environment specifying unit 44 acquires whereabouts information indicating the location of the user U based on the position information of the information providing device 10 (user U) acquired by the GNSS receiver 20C and the map data 30B. The whereabouts information is information indicating what kind of place the user U (information providing device 10) is in, for example information that the user U is in a shopping center or on a road. The environment specifying unit 44 reads the map data 30B, identifies the types of structures and natural features within a predetermined distance of the current position of the user U, and specifies the whereabouts information from those structures and natural features. For example, when the current position of the user U overlaps the coordinates of a shopping center, the whereabouts information indicates that the user U is in the shopping center. The environment specifying unit 44 then determines that a dangerous state exists when the whereabouts information and the type of object identified from the peripheral image are in a specific relationship, and that no dangerous state exists when they are not. The specific relationship may be set arbitrarily; for example, a combination of an object and a location that could pose a danger if that object were present at that location may be set as a specific relationship.
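 The combination check described above can be pictured as a lookup in a table of (object, location) pairs treated as dangerous. The pairs below are invented for illustration; in practice they would be configured in advance.

 # Sketch: combining whereabouts information with the detected object type.
 DANGEROUS_COMBINATIONS = {
     ("vehicle", "sidewalk"),
     ("vehicle", "shopping_center"),
     ("fire", "indoor"),
 }

 def is_dangerous_combination(object_type: str, whereabouts: str) -> bool:
     return (object_type, whereabouts) in DANGEROUS_COMBINATIONS

 print(is_dangerous_combination("vehicle", "sidewalk"))  # True
 print(is_dangerous_combination("vehicle", "road"))      # False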
 The environment specifying unit 44 also determines whether a dangerous state exists based on the audio information acquired by the microphone 20B. Hereinafter, the audio information of the surroundings of the information providing device 10 acquired by the microphone 20B is referred to as peripheral audio where appropriate. The environment specifying unit 44, for example, identifies the type of sound included in the peripheral audio and determines whether a dangerous state exists based on the identified type. More specifically, the environment specifying unit 44 may determine that a dangerous state exists when the type of sound included in the peripheral audio is a preset specific sound, and that no dangerous state exists when it is not. The specific sound may be set arbitrarily; for example, it may be any sound that can endanger the user U, such as a sound indicating a fire, the sound of a vehicle, or a sound indicating construction work.
 The environment specifying unit 44 may identify the type of sound included in the peripheral audio by any method; for example, it may identify it using the learning model 30A. In this case, the learning model 30A is, for example, an AI model constructed by treating audio data (for example, data indicating sound frequency and intensity) and information indicating the type of that sound as one data set and training on a plurality of such data sets as teacher data. The environment specifying unit 44 inputs the audio data of the peripheral audio into the trained learning model 30A, obtains information specifying the type of sound included in the peripheral audio, and thereby identifies the type of sound.
 In addition to the peripheral audio, the environment specifying unit 44 may determine whether a dangerous state exists based also on the position information acquired by the GNSS receiver 20C. In this case, the environment specifying unit 44 acquires whereabouts information indicating the location of the user U based on the position information of the information providing device 10 (user U) acquired by the GNSS receiver 20C and the map data 30B. The environment specifying unit 44 then determines that a dangerous state exists when the whereabouts information and the type of sound identified from the peripheral audio are in a specific relationship, and that no dangerous state exists when they are not. The specific relationship may be set arbitrarily; for example, a combination of a sound and a location that could pose a danger if that sound occurred at that location may be set as a specific relationship.
 As described above, in the present embodiment the environment specifying unit 44 determines the dangerous state based on the peripheral image and the peripheral audio. However, the method of determining the dangerous state is not limited to this and is arbitrary; for example, the environment specifying unit 44 may determine the dangerous state based on only one of the peripheral image and the peripheral audio. The environment specifying unit 44 may also determine whether a dangerous state exists based on at least one of the image of the surroundings of the information providing device 10 captured by the camera 20A, the audio of the surroundings of the information providing device 10 detected by the microphone 20B, and the position information acquired by the GNSS receiver 20C. Furthermore, in the present embodiment the determination of the dangerous state is not essential and need not be performed.
 (Setting of the danger notification content)
 When it is determined that a dangerous state exists (step S12; Yes), the information providing device 10 sets, with the output control unit 54, the danger notification content, which is the content of the notification informing the user that a dangerous state exists (step S14). The information providing device 10 sets the danger notification content based on the substance of the dangerous state. The substance of the dangerous state is information indicating what kind of danger is imminent, and is specified from the type of object shown in the peripheral image, the type of sound included in the peripheral audio, and the like. For example, when the object is a vehicle and is approaching, the substance of the dangerous state is that a vehicle is approaching. The danger notification content is then information indicating that substance; for example, when the substance of the dangerous state is that a vehicle is approaching, the danger notification content is information indicating that a vehicle is approaching.
 The danger notification content differs depending on the type of the target device selected in step S26 described later. For example, when the display unit 26A is the target device, the danger notification content is the display content of the content image PS; that is, the danger notification content is displayed as the content image PS. In this case, for example, the danger notification content is image data indicating a message such as "Watch out, a car is approaching!". On the other hand, when the audio output unit 26B is the target device, the danger notification content is the content of the audio output from the audio output unit 26B. In this case, for example, the danger notification content is audio data for producing speech such as "A car is approaching. Please be careful.". When the sensory stimulus output unit 26C is the target device, the danger notification content is the content of the sensory stimulus output from the sensory stimulus output unit 26C; in this case, for example, the danger notification content is a tactile stimulus that attracts the attention of the user U.
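 As a sketch only, the dependence of the notification content on the selected target device could be expressed as follows. The message strings, device keys, and vibration pattern are illustrative assumptions, not values defined by the embodiment.

 # Sketch: keying the danger notification content to the selected target device.
 def build_danger_notification(danger: str, target_device: str) -> dict:
     if target_device == "display_26A":
         return {"type": "image_text", "text": f"Watch out: {danger}!"}
     if target_device == "audio_26B":
         return {"type": "speech", "text": f"{danger}. Please be careful."}
     if target_device == "stimulus_26C":
         return {"type": "vibration", "pattern_ms": [200, 100, 200]}
     raise ValueError(f"unknown target device: {target_device}")

 print(build_danger_notification("A car is approaching", "audio_26B"))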
 The setting of the danger notification content in step S14 may be executed at any timing after it is determined in step S12 that a dangerous state exists and before the danger notification content is output in step S38 described later; for example, it may be executed after the target device is selected in step S32 described later.
 (Calculation of the environmental scores)
 When it is determined that no dangerous state exists (step S12; No), the information providing device 10 calculates, with the environment specifying unit 44, various environmental scores based on the environmental information, as shown in steps S16 to S22. An environmental score is a score for specifying the environment in which the user U (information providing device 10) is placed. Specifically, the environment specifying unit 44 calculates, as environmental scores, a posture score (step S16), a whereabouts score (step S18), a movement score (step S20), and a safety score (step S22). The order of steps S16 to S22 is not limited to this and is arbitrary. The various environmental scores are also calculated, as shown in steps S16 to S22, when the danger notification content has been set in step S14. The environmental scores are described more specifically below.
 FIG. 5 is a table illustrating an example of environmental scores. As shown in FIG. 5, the environment specifying unit 44 calculates an environmental score for each environment category. An environment category indicates a type of environment of the user U; the example of FIG. 5 includes the posture of the user U, the whereabouts of the user U, the movement of the user U, and the safety of the environment around the user U. The environment specifying unit 44 further divides each environment category into more specific subcategories and calculates an environmental score for each subcategory.
 (Posture score)
 The environment specifying unit 44 calculates a posture score as the environmental score for the posture category of the user U. That is, the posture score is information indicating the posture of the user U, expressing as a numerical value what posture the user U is in. The environment specifying unit 44 calculates the posture score based on the environmental information, among the plurality of types of environmental information, that relates to the posture of the user U. Environmental information related to the posture of the user U includes the peripheral image acquired by the camera 20A and the orientation of the information providing device 10 detected by the gyro sensor 20E.
 More specifically, in the example of FIG. 5, the posture category of the user U includes a subcategory of being in a standing state and a subcategory of the face being oriented horizontally. The environment specifying unit 44 calculates the posture score for the standing-state subcategory based on the peripheral image acquired by the camera 20A. The posture score for this subcategory can be said to be a numerical value indicating the degree to which the posture of the user U matches a standing state. The method of calculating this posture score may be arbitrary; for example, it may be calculated using the learning model 30A. In this case, the learning model 30A is, for example, an AI model constructed by treating image data of the scenery in a person's field of view and information indicating whether that person is standing as one data set and training on a plurality of such data sets as teacher data. The environment specifying unit 44 inputs the image data of the peripheral image into the trained learning model 30A, obtains a numerical value indicating the degree of match with a standing state, and uses it as the posture score. Although the degree of match with a standing state is used here, the state is not limited to standing; for example, the degree of match with a sitting state or a lying state may be used.
 The environment specifying unit 44 also calculates the posture score for the subcategory of the face being oriented horizontally based on the orientation of the information providing device 10 detected by the gyro sensor 20E. The posture score for this subcategory can be said to be a numerical value indicating the degree to which the posture (face orientation) of the user U matches the horizontal direction. The method of calculating this posture score may be arbitrary. Although the degree of match with the horizontal direction is used here, the direction is not limited to horizontal; the degree of match with any direction may be used.
 In this way, the environment specifying unit 44 can be said to set information indicating the posture of the user U (here, the posture score) based on the peripheral image and the orientation of the information providing device 10. However, the environment specifying unit 44 is not limited to using the peripheral image and the orientation of the information providing device 10 to set this information; it may use any environmental information, for example at least one of the peripheral image and the orientation of the information providing device 10.
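 A minimal sketch of one such posture score follows: the degree (0 to 100) to which the face orientation from the gyro sensor matches the horizontal direction. The linear mapping from pitch angle to score and the tolerance value are assumptions for illustration.

 # Sketch: posture score for the "face oriented horizontally" subcategory.
 def horizontal_face_score(pitch_deg: float, tolerance_deg: float = 45.0) -> float:
     """100 when the face is level (pitch 0 deg), falling linearly to 0 at the tolerance."""
     deviation = min(abs(pitch_deg), tolerance_deg)
     return round(100.0 * (1.0 - deviation / tolerance_deg), 1)

 print(horizontal_face_score(0.0))   # 100.0
 print(horizontal_face_score(10.0))  # 77.8
 print(horizontal_face_score(60.0))  # 0.0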
 (Whereabouts score)
 The environment specifying unit 44 calculates a whereabouts score as the environmental score for the whereabouts category of the user U. That is, the whereabouts score is information indicating the whereabouts of the user U, expressing as a numerical value what kind of place the user U is located in. The environment specifying unit 44 calculates the whereabouts score based on the environmental information, among the plurality of types of environmental information, that relates to the whereabouts of the user U. Environmental information related to the whereabouts of the user U includes the peripheral image acquired by the camera 20A, the position information of the information providing device 10 acquired by the GNSS receiver 20C, and the peripheral audio acquired by the microphone 20B.
 More specifically, in the example of FIG. 5, the whereabouts category of the user U includes a subcategory of being inside a train, a subcategory of being on a railroad track, and a subcategory of hearing in-train sounds. The environment specifying unit 44 calculates the whereabouts score for the inside-a-train subcategory based on the peripheral image acquired by the camera 20A. The whereabouts score for this subcategory can be said to be a numerical value indicating the degree to which the whereabouts of the user U match being inside a train. The method of calculating this score may be arbitrary; for example, it may be calculated using the learning model 30A. In this case, the learning model 30A is, for example, an AI model constructed by treating image data of the scenery in a person's field of view and information indicating whether that person is inside a train as one data set and training on a plurality of such data sets as teacher data. The environment specifying unit 44 inputs the image data of the peripheral image into the trained learning model 30A, obtains a numerical value indicating the degree of match with being inside a train, and uses it as the whereabouts score. Although the degree of match with being inside a train is calculated here, the calculation is not limited to this; the degree of match with being inside any type of vehicle may be calculated.
 The environment specifying unit 44 calculates the whereabouts score for the on-a-railroad-track subcategory based on the position information of the information providing device 10 acquired by the GNSS receiver 20C. The whereabouts score for this subcategory can be said to be a numerical value indicating the degree to which the whereabouts of the user U match being on a railroad track. The method of calculating this score may be arbitrary; for example, the map data 30B may be used. For example, the environment specifying unit 44 reads the map data 30B and, when the current position of the user U overlaps the coordinates of a railroad track, calculates the whereabouts score so that the degree of match with being on a railroad track is high. Although the degree of match with being on a railroad track is calculated here, the calculation is not limited to this; the degree of match with the position of any type of structure or natural feature may be calculated.
 The environment specifying unit 44 calculates the whereabouts score for the in-train-sound subcategory based on the peripheral audio acquired by the microphone 20B. The whereabouts score for this subcategory can be said to be a numerical value indicating the degree to which the peripheral audio matches the sounds heard inside a train. The method of calculating this score may be arbitrary; for example, it may be calculated in the same manner as the method of determining a dangerous state from the peripheral audio described above, that is, by determining whether the peripheral audio is a specific type of sound. Although the degree of match with in-train sounds is calculated here, the calculation is not limited to this; the degree of match with the sounds of any place may be calculated.
 In this way, the environment specifying unit 44 can be said to set information indicating the whereabouts of the user U (here, the whereabouts score) based on the peripheral image, the peripheral audio, and the position information of the information providing device 10. However, the environment specifying unit 44 is not limited to using the peripheral image, the peripheral audio, and the position information of the information providing device 10 to set this information; it may use any environmental information, for example at least one of the peripheral image, the peripheral audio, and the position information of the information providing device 10.
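 The sketch below illustrates one possible whereabouts score for the on-a-railroad-track subcategory, derived from the position and map data. Here the map data is reduced to a list of track points and the score falls off with distance; the 30 m cut-off and the local metre-based coordinates are illustrative assumptions.

 # Sketch: whereabouts score for "on a railroad track" from position and map data.
 import math
 from typing import List, Tuple

 def track_score(position: Tuple[float, float],
                 track_points: List[Tuple[float, float]],
                 cutoff_m: float = 30.0) -> float:
     """Return 100 on the track, decreasing linearly to 0 at cutoff_m metres away."""
     dist = min(math.dist(position, p) for p in track_points)
     return round(max(0.0, 100.0 * (1.0 - dist / cutoff_m)), 1)

 track = [(0.0, 0.0), (0.0, 10.0), (0.0, 20.0)]
 print(track_score((0.0, 10.0), track))   # 100.0 (on the track)
 print(track_score((15.0, 10.0), track))  # 50.0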
 (Movement score)
 The environment specifying unit 44 calculates a movement score as the environmental score for the movement category of the user U. That is, the movement score is information indicating the movement of the user U, expressing as a numerical value how the user U is moving. The environment specifying unit 44 calculates the movement score based on the environmental information, among the plurality of types of environmental information, that relates to the movement of the user U. Environmental information related to the movement of the user U includes the acceleration information acquired by the acceleration sensor 20D.
 More specifically, in the example of FIG. 5, the movement category of the user U includes a subcategory of being in motion. The environment specifying unit 44 calculates the movement score for the in-motion subcategory based on the acceleration information of the information providing device 10 acquired by the acceleration sensor 20D. The movement score for this subcategory can be said to be a numerical value indicating the degree to which the current situation of the user U matches the user U being in motion. The method of calculating this movement score may be arbitrary; for example, it may be calculated from the change in acceleration over a predetermined period. For example, when the acceleration changes during the predetermined period, the movement score is calculated so that the degree of match with the user U being in motion is high. Alternatively, the position information of the information providing device 10 may be acquired and the movement score may be calculated based on the degree of positional change over the predetermined period. In this case, the speed can also be estimated from the amount of positional change over the predetermined period, and the means of transportation, such as a vehicle or walking, can be identified. Although the degree of match with being in motion is calculated here, the calculation is not limited to this; for example, the degree of match with moving at a predetermined speed may be calculated.
 In this way, the environment specifying unit 44 can be said to set information indicating the movement of the user U (here, the movement score) based on the acceleration information of the information providing device 10 and the position information of the information providing device 10. However, the environment specifying unit 44 is not limited to using the acceleration information and the position information to set this information; it may use any environmental information, for example at least one of the acceleration information and the position information.
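 A minimal sketch of an in-motion score derived from acceleration samples follows. Movement is inferred from how much the acceleration magnitude varies over the sampling window; the scale factor mapping that variation to a 0 to 100 score is an illustrative assumption.

 # Sketch: movement score for the "in motion" subcategory from acceleration samples.
 from statistics import pstdev
 from typing import Sequence

 def movement_score(accel_magnitudes: Sequence[float], scale: float = 1.0) -> float:
     """Return 0 for a perfectly still device, approaching 100 as variation grows."""
     variation = pstdev(accel_magnitudes)  # m/s^2, over the sampling window
     return round(min(100.0, 100.0 * variation / scale), 1)

 print(movement_score([9.81, 9.81, 9.81]))           # 0.0 (stationary)
 print(movement_score([9.2, 10.5, 8.9, 11.0, 9.6]))  # 79.1 (moving)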
 (Safety score)
 The environment specifying unit 44 calculates a safety score as the environmental score for the safety category of the user U. That is, the safety score is information indicating the safety of the user U, expressing as a numerical value whether the user U is in a safe environment. The environment specifying unit 44 calculates the safety score based on the environmental information, among the plurality of types of environmental information, that relates to the safety of the user U. Environmental information related to the safety of the user U includes the peripheral image acquired by the camera 20A, the peripheral audio acquired by the microphone 20B, the light intensity information detected by the optical sensor 20F, the ambient temperature information detected by the temperature sensor 20G, and the ambient humidity information detected by the humidity sensor 20H.
 More specifically, in the example of FIG. 5, the safety category of the user U includes a subcategory of the surroundings being bright, a subcategory of infrared and ultraviolet light being at appropriate levels, a subcategory of the temperature being suitable, a subcategory of the humidity being suitable, and a subcategory of a dangerous object being present. The environment specifying unit 44 calculates the safety score for the brightness subcategory based on the intensity of visible light in the surroundings acquired by the optical sensor 20F. The safety score for this subcategory can be said to be a numerical value indicating the degree to which the surrounding brightness matches a sufficient brightness. The method of calculating this safety score may be arbitrary; for example, it may be calculated based on the intensity of visible light detected by the optical sensor 20F, or based on the luminance of the image captured by the camera 20A. Although the degree of match with a sufficient brightness is calculated here, the calculation is not limited to this; the degree of match with any degree of brightness may be calculated.
 The environment specifying unit 44 calculates the safety score for the subcategory of infrared and ultraviolet light being at appropriate levels based on the intensities of infrared and ultraviolet light in the surroundings acquired by the optical sensor 20F. The safety score for this subcategory can be said to be a numerical value indicating the degree to which the surrounding infrared and ultraviolet intensities match appropriate intensities. The method of calculating this safety score may be arbitrary; for example, it may be calculated based on the infrared and ultraviolet intensities detected by the optical sensor 20F. Although the degree of match with appropriate infrared and ultraviolet intensities is calculated here, the calculation is not limited to this; the degree of match with any infrared or ultraviolet intensity may be calculated.
 The environment specifying unit 44 calculates the safety score for the suitable-temperature subcategory based on the ambient temperature acquired by the temperature sensor 20G. The safety score for this subcategory can be said to be a numerical value indicating the degree to which the ambient temperature matches a suitable temperature. The method of calculating this safety score may be arbitrary; for example, it may be calculated based on the ambient temperature detected by the temperature sensor 20G. Although the degree of match with a suitable temperature is calculated here, the calculation is not limited to this; the degree of match with any temperature may be calculated.
 The environment specifying unit 44 calculates the safety score for the suitable-humidity subcategory based on the ambient humidity acquired by the humidity sensor 20H. The safety score for this subcategory can be said to be a numerical value indicating the degree to which the ambient humidity matches a suitable humidity. The method of calculating this safety score may be arbitrary; for example, it may be calculated based on the ambient humidity detected by the humidity sensor 20H. Although the degree of match with a suitable humidity is calculated here, the calculation is not limited to this; the degree of match with any humidity may be calculated.
 The environment specifying unit 44 calculates the safety score for the dangerous-object subcategory based on the peripheral image acquired by the camera 20A. The safety score for this subcategory can be said to be a numerical value indicating the degree of match with a dangerous object being present. The method of calculating this safety score may be arbitrary; for example, it may be calculated in the same manner as the method of determining a dangerous state from the peripheral image described above, that is, by determining whether an object included in the peripheral image is a specific object. Furthermore, the environment specifying unit 44 also calculates the safety score for the dangerous-object subcategory based on the peripheral audio acquired by the microphone 20B. This calculation may also be performed by any method, for example in the same manner as the method of determining a dangerous state from the peripheral audio described above, that is, by determining whether the peripheral audio is a specific type of sound.
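 As an illustration of one of these safety scores, the sketch below computes a suitable-temperature score that is 100 inside a comfort band and falls off linearly outside it. The comfort band (18 to 26 degrees Celsius) and the fall-off width are assumptions, not values given in the embodiment.

 # Sketch: safety score for the "suitable temperature" subcategory.
 def temperature_score(temp_c: float, low: float = 18.0, high: float = 26.0,
                       falloff: float = 10.0) -> float:
     if low <= temp_c <= high:
         return 100.0
     deviation = (low - temp_c) if temp_c < low else (temp_c - high)
     return round(max(0.0, 100.0 * (1.0 - deviation / falloff)), 1)

 print(temperature_score(22.0))  # 100.0
 print(temperature_score(30.0))  # 60.0
 print(temperature_score(40.0))  # 0.0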
 (Example of environmental scores)
 FIG. 5 illustrates environmental scores calculated for environments D1 to D4. Environments D1 to D4 each represent a case in which the user U is in a different environment, and an environmental score is calculated for each category (subcategory) in each environment.
 The types of environment categories and subcategories shown in FIG. 5 are examples, and the values of the environmental scores in environments D1 to D4 are also examples. By expressing the information indicating the environment of the user U as numerical values such as environmental scores, the information providing device 10 can take errors and the like into account and can estimate the environment of the user U more accurately. In other words, the information providing device 10 can accurately estimate the environment of the user U by classifying the environmental information into one of three or more degrees (here, the environmental scores). However, the information indicating the environment of the user U that the information providing device 10 sets based on the environmental information is not limited to values such as environmental scores, and may be data in any format, for example information indicating one of two alternatives such as Yes or No.
 (Determination of the environment pattern)
 The information providing device 10 calculates the various environmental scores in steps S16 to S22 shown in FIG. 4 by the methods described above. As shown in FIG. 4, after calculating the environmental scores, the information providing device 10 determines, with the environment specifying unit 44 and based on the respective environmental scores, an environment pattern indicating the environment in which the user U is placed (step S24). That is, the environment specifying unit 44 determines what kind of environment the user U is in based on the environmental scores. While the environmental information and the environmental scores are information indicating individual elements of the user U's environment detected by the environment sensor 20, the environment pattern can be said to be an index that comprehensively represents the environment, set based on the information indicating those individual elements.
 FIG. 6 is a table showing an example of environment patterns. In the present embodiment, the environment specifying unit 44 selects, based on the environmental scores, the environment pattern that matches the environment in which the user U is placed from among environment patterns corresponding to various environments. In the present embodiment, for example, correspondence information (a table) associating environmental score values with environment patterns is recorded in the specification setting database 30C. The environment specifying unit 44 determines the environment pattern based on the environmental information and this correspondence information. Specifically, the environment specifying unit 44 selects, from the correspondence information, the environment pattern associated with the calculated environmental score values and adopts it as the environment pattern to be used. In the example of FIG. 6, environment pattern PT1 indicates that the user U is sitting in a train, environment pattern PT2 indicates that the user U is walking on a sidewalk, environment pattern PT3 indicates that the user U is walking on a dark sidewalk, and environment pattern PT4 indicates that the user U is shopping.
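 One possible realization of such a correspondence table, offered only as a sketch, is a nearest-match lookup: each pattern holds reference score values and the pattern closest to the calculated scores is adopted. The reference vectors below are invented (loosely following the figures described in the text) and stand in for the contents of the specification setting database 30C.

 # Sketch: selecting the environment pattern whose reference scores are closest
 # to the calculated environmental scores.
 from typing import Dict

 REFERENCE_PATTERNS: Dict[str, Dict[str, float]] = {
     "PT1_sitting_in_train":    {"in_train": 90, "moving": 100, "bright": 50,  "danger": 10},
     "PT2_walking_on_sidewalk": {"in_train": 0,  "moving": 100, "bright": 100, "danger": 10},
     "PT3_dark_sidewalk":       {"in_train": 5,  "moving": 100, "bright": 10,  "danger": 90},
 }

 def select_pattern(scores: Dict[str, float]) -> str:
     def distance(ref: Dict[str, float]) -> float:
         return sum((scores[k] - ref[k]) ** 2 for k in ref)
     return min(REFERENCE_PATTERNS, key=lambda name: distance(REFERENCE_PATTERNS[name]))

 print(select_pattern({"in_train": 5, "moving": 100, "bright": 15, "danger": 85}))
 # -> "PT3_dark_sidewalk"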
 In the examples of FIGS. 5 and 6, in environment D1 the environmental score for "standing" is 10 and the environmental score for "face oriented horizontally" is 100, so it can be predicted that the user U is sitting with the face directed almost horizontally. Since the environmental score for "inside a train" is 90, the score for "on a railroad track" is 100, and the score for "in-train sounds" is 90, it can be seen that the user U is in a train. Since the environmental score for "in motion" is 100, it can be seen that the user U is moving at a constant velocity or with acceleration. The environmental score for "bright" is 50, indicating that the inside of the train is darker than outside. The environmental scores for "appropriate infrared and ultraviolet levels", "suitable temperature", and "suitable humidity" are 100, which can be regarded as safe. The environmental score for "dangerous object present" is 10 for the image and 20 for the sound, which is also considered safe. That is, in environment D1 it can be estimated from the respective environmental scores that the user U is sitting in a seat while moving in a train and is in a safe and comfortable situation, and the environment pattern of environment D1 is set to environment pattern PT1, which indicates sitting in a train.
 In environment D2 in the examples of FIGS. 5 and 6, the environmental score for "standing" is 10 and the environmental score for "face oriented horizontally" is 90, so it can be predicted that the user U is sitting with the face directed almost horizontally. Since the environmental score for "inside a train" is 0, the score for "on a railroad track" is 0, and the score for "in-train sounds" is 10, it can be seen that the user U is not in a train. Although not illustrated here, in environment D2 it can also be confirmed from the whereabouts environmental score that the user U is on a road. Since the environmental score for "in motion" is 100, it can be seen that the user U is moving at a constant velocity or with acceleration. The environmental score for "bright" is 100, indicating a bright outdoor environment. The score for "appropriate infrared and ultraviolet levels" is 80, indicating some influence of ultraviolet light and the like. The environmental scores for "suitable temperature" and "suitable humidity" are 100, which can be regarded as safe. The environmental score for "dangerous object present" is 10 for the image and 20 for the sound, which is also considered safe. That is, in environment D2 it can be estimated from the respective environmental scores that the user U is moving along a sidewalk on foot in a bright outdoor environment with no dangerous object recognized, and the environment pattern of environment D2 is set to environment pattern PT2, which indicates walking on a sidewalk.
 In environment D3 in the examples of FIGS. 5 and 6, the environmental score for "standing" is 0 and the environmental score for "face oriented horizontally" is 90, so it can be predicted that the user U is sitting with the face directed almost horizontally. Since the environmental score for "inside a train" is 5, the score for "on a railroad track" is 0, and the score for "in-train sounds" is 5, it can be seen that the user U is not in a train. Although not illustrated here, in environment D3 it can also be confirmed from the whereabouts environmental score that the user U is on a road. Since the environmental score for "in motion" is 100, it can be seen that the user U is moving at a constant velocity or with acceleration. The environmental score for "bright" is 10, indicating a dark environment. The score for "appropriate infrared and ultraviolet levels" is 100, indicating safety. The environmental score for "suitable temperature" is 75, suggesting it is hotter or colder than the standard. The environmental score for "dangerous object present" is 90 for the image and 80 for the sound, indicating that something is approaching while making a sound. Although not illustrated, the object can be determined from the sound and the image; here it can be determined that a car is approaching from the front and that the sound is the car's engine. That is, in environment D3 it can be estimated from the respective environmental scores that the user U is moving along a sidewalk on foot in a dark outdoor environment and that a vehicle is approaching as a dangerous object, and the environment pattern of environment D3 is set to environment pattern PT3, which indicates walking on a dark sidewalk.
 In environment D4 in the examples of FIGS. 5 and 6, the environmental score for "standing" is 0 and the environmental score for "face oriented horizontally" is 90, so it can be predicted that the user U is sitting with the face directed almost horizontally. Since the environmental score for "inside a train" is 20, the score for "on a railroad track" is 0, and the score for "in-train sounds" is 5, it can be seen that the user U is not in a train. Although not illustrated here, in environment D4 it can also be confirmed from the whereabouts environmental score that the user U is in a shopping center. Since the environmental score for "in motion" is 80, it can be seen that the user U is moving slowly. The environmental score for "bright" is 70, so the surroundings can be expected to be relatively bright, at about the level of indoor lighting. The score for "appropriate infrared and ultraviolet levels" is 100, indicating safety. The environmental score for "suitable temperature" is 100, which is comfortable, but the environmental score for "suitable humidity" is 90, so the environment cannot quite be called fully comfortable. The environmental score for "dangerous object present" is 10 for the image and 20 for the sound, which is also considered safe. That is, in environment D4 it can be estimated from the respective environmental scores that the user U is moving through a shopping center on foot, that the surroundings are relatively bright, and that there is no dangerous object, and the environment pattern of environment D4 is set to environment pattern PT4, which indicates shopping.
 (Setting of the target device and the reference output specifications)
 After selecting the environment pattern, the information providing device 10 selects, as shown in FIG. 4, the target device to be operated from the output unit 26 with the output selection unit 48 and the output specification determination unit 50 based on the environment pattern, and sets the reference output specifications (step S26).
 (Setting of the target device)
 As described above, the target device is the device to be operated in the output unit 26. In the present embodiment, the output selection unit 48 selects the target device from among the display unit 26A, the audio output unit 26B, and the sensory stimulus output unit 26C based on the environmental information, and more preferably based on the environment pattern. Since the environment pattern is information indicating the current environment of the user U, selecting the target device based on the environment pattern makes it possible to select an appropriate sensory stimulus according to the current environment of the user U.
 For example, the output selection unit 48 may determine, based on the environment information, whether there is a high need for the user U to visually check the surrounding environment, and may determine, based on the result of that determination, whether to set the display unit 26A as a target device. In this case, for example, the output specification determination unit 50 selects the display unit 26A as a target device when the need to visually check the surrounding environment is lower than a predetermined criterion, and does not set the display unit 26A as a target device when the need is equal to or higher than the predetermined criterion. Whether the need for the user U to visually check the surrounding environment is high may be determined in any manner; for example, the need may be judged to be equal to or higher than the predetermined criterion when the user U is moving or when a dangerous object is present.
 Similarly, for example, the output selection unit 48 may determine, based on the environment information, whether there is a high need for the user U to hear surrounding sounds, and may determine, based on the result of that determination, whether to set the audio output unit 26B as a target device. In this case, for example, the output specification determination unit 50 selects the audio output unit 26B as a target device when the need to hear the surrounding sounds is lower than a predetermined criterion, and does not set the audio output unit 26B as a target device when the need is equal to or higher than the predetermined criterion. Whether the need for the user U to hear the surrounding sounds is high may be determined in any manner; for example, the need may be judged to be equal to or higher than the predetermined criterion when the user U is moving or when a dangerous object is present.
 Also, for example, the output selection unit 48 may determine, based on the environment information, whether it is acceptable for the user U to receive a tactile stimulus, and may determine, based on the result of that determination, whether to set the sensory stimulation output unit 26C as a target device. In this case, for example, the output specification determination unit 50 selects the sensory stimulation output unit 26C as a target device when it determines that the tactile stimulus may be received, and does not set the sensory stimulation output unit 26C as a target device when it determines that the tactile stimulus should not be received. Whether the user U may receive a tactile stimulus may be determined in any manner; for example, it may be determined that the tactile stimulus should not be received when the user U is moving or when a dangerous object is present.
 An example of the method of selecting the target devices by the output selection unit 48 has been described above. More specifically, the output selection unit 48 preferably selects the target devices based on a table showing the relationship between environment patterns and target devices, for example as shown in FIG. 8 described later.
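 As one way to picture the selection logic of the preceding paragraphs, the following is a minimal Python sketch; the score names, the threshold, and the reuse of the same example criteria ("moving" or "dangerous object present") for all three modalities are assumptions made for illustration, not part of the publication.

    # Sketch (assumption) of rule-based target-device selection: each output
    # unit 26 is operated only when the environment suggests the corresponding
    # sense does not need to stay free for the surroundings.
    def select_target_devices(env_scores, threshold=50):
        """env_scores: dict of environment scores, e.g. {"moving": 80, "dangerous_object": 20}."""
        must_attend_surroundings = (env_scores.get("moving", 0) >= threshold
                                    or env_scores.get("dangerous_object", 0) >= threshold)
        targets = set()
        if not must_attend_surroundings:
            targets.add("display_26A")   # display unit 26A (visual stimulus)
            targets.add("audio_26B")     # audio output unit 26B (auditory stimulus)
            targets.add("tactile_26C")   # sensory stimulation output unit 26C (tactile stimulus)
        return targets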
 (Setting of the reference output specifications)
 The output specification determination unit 50 also determines the reference output specifications, which are the output specifications serving as a reference, based on the environment information, more preferably based on the environment pattern. An output specification is an index indicating how the stimulus output by the output unit 26 is to be output. For example, the output specification of the display unit 26A indicates how the content image PS to be output is displayed, and can also be called a display specification. Examples of the output specification of the display unit 26A include the size (area) of the content image PS, the transparency of the content image PS, and the display content (content) of the content image PS. The size of the content image PS refers to the area occupied by the content image PS within the screen of the display unit 26A. The transparency of the content image PS refers to the degree of transparency of the content image PS: the higher the transparency of the content image PS, the more the light entering the eyes of the user U as the background image PA passes through the content image PS, and the more clearly the background image PA superimposed on the content image PS is visually recognized. In this way, it can be said that the output specification determination unit 50 determines, as the output specifications of the display unit 26A, the size, transparency, and display content of the content image PS based on the environment pattern. However, the output specifications of the display unit 26A are not limited to all of the size, transparency, and display content of the content image PS. For example, the output specifications of the display unit 26A may be at least one of the size, transparency, and display content of the content image PS, or may be something else.
 For example, the output specification determination unit 50 may determine, based on the environment information, whether there is a high need for the user U to visually check the surrounding environment, and may determine the output specification (reference output specification) of the display unit 26A based on the result of that determination. In this case, the output specification determination unit 50 determines the output specification (reference output specification) of the display unit 26A such that the higher the need to visually check the surrounding environment, the higher the visibility of the environment image PM. Visibility here refers to how easily the environment image PM can be visually recognized. For example, the higher the need to visually check the surrounding environment, the smaller the output specification determination unit 50 may make the content image PS, the higher it may make the transparency of the content image PS, the more it may restrict the display content of the content image PS, or it may combine these. When the restriction on the display content of the content image PS is increased, for example, the distribution image may be excluded from the display content so that the display content becomes at least one of the navigation image and the notification image. Whether the need for the user U to visually check the surrounding environment is high may be determined in any manner; examples of a high need include the case where the user U is moving or where a dangerous object is present.
 FIG. 7 is a schematic diagram illustrating an example of the levels of the output specification of the content image. In the present embodiment, the output specification determination unit 50 may classify the output specifications of the content image PS into levels and select a level of the output specification based on the environment information. In this case, the output specifications of the content image PS are set so that the visibility of the environment image PM differs for each level. In the present embodiment, each level of the output specification is set so that the higher the level, the stronger the output stimulus and the lower the visibility of the environment image PM. Therefore, the output specification determination unit 50 sets the level of the output specification higher as the need to visually check the surrounding environment becomes lower. In the example of FIG. 7, at level 0, the content image PS is not displayed and only the environment image PM is visually recognized, so the visibility of the environment image PM is the highest.
 As shown in FIG. 7, at level 1, the content image PS is displayed, but its display content is restricted. Here, the distribution image is excluded from the display content, and the display content is at least one of the navigation image and the notification image. In addition, at level 1, the size of the content image PS is set small. At level 1, the content image PS is displayed superimposed on the environment image PM only when it becomes necessary to display the navigation image or the notification image. Therefore, the visibility of the environment image PM at level 1 is lower than at level 0 because the content image PS is displayed, but it is kept relatively high because the display content of the content image PS is restricted.
 As shown in FIG. 7, at level 2, the display content of the content image PS is not restricted, but the size of the content image PS is restricted and set small. The visibility of the environment image PM at level 2 is lower than at level 1 because the display content is not restricted.
 As shown in FIG. 7, at level 3, the display content and size of the content image PS are not restricted, and the content image PS is displayed, for example, over the entire screen of the display unit 26A. However, at level 3, the transparency of the content image PS is restricted and set high. Therefore, at level 3, the semi-transparent content image PS and the environment image PM superimposed on the content image PS are both visually recognized. The visibility of the environment image PM at level 3 is lower than at level 2 because the size of the content image PS is not restricted.
 As shown in FIG. 7, at level 4, the display content, size, and transparency of the content image PS are not restricted, and the content image PS is displayed, for example, with zero transparency over the entire screen of the display unit 26A. At level 4, since the transparency of the content image PS is zero (opaque), the environment image PM is not visually recognized and only the content image PS is visually recognized. Therefore, the visibility of the environment image PM at level 4 is the lowest. Note that at level 4, for example, an image covering the visual field range of the user U may be displayed as the environment image PM in a partial region of the screen of the display unit 26A.
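 The mapping from level to concrete display settings can be held as a small table. Below is a minimal Python sketch of the levels of FIG. 7; the numeric transparency values and the "small"/"full" size labels are placeholders, since the publication describes the levels only qualitatively.

    # Sketch of FIG. 7: output-specification level -> settings of the content
    # image PS. Level 0 shows no content image at all; level 4 shows an opaque,
    # full-screen content image.
    DISPLAY_LEVELS = {
        # level: (displayed, allowed display content, size, transparency)
        0: (False, set(), None, None),
        1: (True, {"navigation", "notification"}, "small", 0.0),
        2: (True, {"navigation", "notification", "distribution"}, "small", 0.0),
        3: (True, {"navigation", "notification", "distribution"}, "full", 0.7),
        4: (True, {"navigation", "notification", "distribution"}, "full", 0.0),
    }

    def display_settings(level: int):
        """Return the content-image settings for a level clamped to 0..4."""
        return DISPLAY_LEVELS[max(0, min(4, level))]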
 The output specifications of the display unit 26A have been described above, but the output specification determination unit 50 also determines the output specifications of the audio output unit 26B and the sensory stimulation output unit 26C. Examples of the output specification (audio specification) of the audio output unit 26B include the volume and the presence or degree of acoustic effects. Acoustic effects here refer to special effects such as surround sound or a three-dimensional sound field. The larger the volume or the greater the degree of the acoustic effects, the stronger the auditory stimulus given to the user U can be made. For example, the output specification determination unit 50 may determine, based on the environment information, whether there is a high need for the user U to hear surrounding sounds, and may determine the output specification (reference output specification) of the audio output unit 26B based on the result of that determination. In this case, the output specification determination unit 50 determines the output specification (reference output specification) of the audio output unit 26B such that the lower the need to hear the surrounding sounds, the larger the volume or the greater the degree of the acoustic effects. Whether the need for the user U to hear the surrounding sounds is high may be determined in any manner; examples of a high need include the case where the user U is moving or where a dangerous object is present. The output specification determination unit 50 may also set levels for the output specification of the audio output unit 26B, in the same manner as for the output specification of the display unit 26A.
 Examples of the output specification of the sensory stimulation output unit 26C include the strength of the tactile stimulus and the frequency with which the tactile stimulus is output. The higher the strength or frequency of the tactile stimulus, the stronger the tactile stimulus given to the user U can be made. For example, the output specification determination unit 50 may determine, based on the environment information, whether the user U is in a state suitable for receiving a tactile stimulus, and may determine the output specification (reference output specification) of the sensory stimulation output unit 26C based on the result of that determination. In this case, the output specification determination unit 50 determines the output specification (reference output specification) of the sensory stimulation output unit 26C such that the more suitable the state is for receiving a tactile stimulus, the higher the strength of the tactile stimulus or the higher its frequency. Whether the user U is in a state suitable for receiving a tactile stimulus may be determined in any manner; for example, the state may be determined to be unsuitable when the user U is moving or when a dangerous object is present. The output specification determination unit 50 may also set levels for the output specification of the sensory stimulation output unit 26C, in the same manner as for the output specification of the display unit 26A.
 (Specific example of setting the target devices and reference output specifications)
 It is more preferable that the output selection unit 48 and the output specification determination unit 50 determine the target devices and the reference output specifications based on the relationship between environment patterns and the target devices and reference output specifications. FIG. 8 is a table showing the relationship between environment patterns and the target devices and reference output specifications. The output selection unit 48 and the output specification determination unit 50 determine the target devices and the reference output specifications based on relation information indicating the relationship between environment patterns and the target devices and reference output specifications. The relation information is information (a table) in which environment patterns are stored in association with target devices and reference output specifications, and is stored, for example, in the specification setting database 30C. In the relation information, a reference output specification is set for each type of output unit 26, that is, here for each of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C. The output selection unit 48 and the output specification determination unit 50 determine the target devices and the reference output specifications based on this relation information and the environment pattern set by the environment specifying unit 44. Specifically, the output selection unit 48 and the output specification determination unit 50 read the relation information and select, from the relation information, the target devices and reference output specifications associated with the environment pattern set by the environment specifying unit 44, thereby determining the target devices and the reference output specifications.
 In the example of FIG. 8, for the environment pattern PT1, in which the user is sitting in a train, the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C are all set as target devices, and their reference output specifications are assigned level 4. A higher level indicates a stronger output stimulus. For the environment pattern PT2, in which the user is walking on a sidewalk, the situation is considered almost safe and comfortable, but attention to the road ahead is required because the user is walking; the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C are all set as target devices, and their reference output specifications are assigned level 3. For the environment pattern PT3, in which the user is walking on a dark sidewalk, the situation cannot be said to be safe, and the user must watch ahead and be able to hear outside sounds well; therefore, the audio output unit 26B and the sensory stimulation output unit 26C are set as target devices, and the reference output specifications of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C are assigned levels 0, 2, and 2, respectively. For the environment pattern PT4, in which the user is shopping, the situation is almost safe, but it is assumed that distracting information provision is unnecessary in a shopping center; the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C are all set as target devices, and their reference output specifications are assigned level 2. The assignment of target devices and reference output specifications for each environment pattern in FIG. 8 is merely an example and may be set as appropriate.
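 The relation information of FIG. 8 lends itself to a simple lookup table. The following Python sketch uses the level assignments quoted above; the dictionary keys and the rule of treating level 0 as "not a target device" are assumptions made for illustration.

    # Sketch (assumption) of the relation information of FIG. 8:
    # environment pattern -> reference output-specification level per output unit 26.
    RELATION_INFO = {
        "PT1": {"display_26A": 4, "audio_26B": 4, "tactile_26C": 4},  # sitting in a train
        "PT2": {"display_26A": 3, "audio_26B": 3, "tactile_26C": 3},  # walking on a sidewalk
        "PT3": {"display_26A": 0, "audio_26B": 2, "tactile_26C": 2},  # walking on a dark sidewalk
        "PT4": {"display_26A": 2, "audio_26B": 2, "tactile_26C": 2},  # shopping
    }

    def reference_output_specs(pattern: str):
        """Return (target devices, reference levels) for an environment pattern.

        A unit assigned level 0 is treated here as not being a target device,
        which matches the PT3 row described above (display unit 26A off).
        """
        levels = RELATION_INFO[pattern]
        targets = {unit for unit, level in levels.items() if level > 0}
        return targets, levels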
 As described above, in the present embodiment, the information providing device 10 sets the target devices and the reference output specifications based on a preset relationship between environment patterns and the target devices and reference output specifications. However, the method of setting the target devices and the reference output specifications is not limited to this, and the information providing device 10 may set the target devices and the reference output specifications by any method based on the environment information detected by the environment sensor 20. Furthermore, the information providing device 10 is not limited to selecting both the target devices and the reference output specifications based on the environment information, and may select at least one of the target devices and the reference output specifications.
 (Acquisition of biometric information)
 As shown in FIG. 4, the information providing device 10 acquires, by means of the biometric information acquisition unit 42, the biometric information of the user U detected by the biometric sensor 22 (step S28). The biometric information acquisition unit 42 acquires the pulse wave information of the user U from the pulse wave sensor 22A and the brain wave information of the user U from the brain wave sensor 22B. FIG. 9 is a graph showing an example of a pulse wave. As shown in FIG. 9, the pulse wave is a waveform in which a peak called the R wave WR appears at predetermined time intervals. The heart is governed by the autonomic nervous system, and the pulse is driven by electric signals, generated at the cellular level, that trigger the heart to beat. Normally, the pulse rate increases when adrenaline is secreted due to sympathetic excitation and decreases when acetylcholine is secreted due to parasympathetic excitation. According to Nobuyuki Ueda, "Evaluation of diabetic autonomic neuropathy using power spectrum analysis of the electrocardiographic R-R interval" (Diabetes 35(1): 17-23, 1992), autonomic nervous function can be assessed by examining the fluctuation of the R-R interval in a pulse-wave time waveform such as the example shown in FIG. 9. The R-R interval is the interval between R waves WR that are consecutive in time series. At the cellular level, cardiac electrical activity is a repetition of depolarization (action potential) and repolarization (resting potential), and an electrocardiogram can be obtained by detecting this electrical activity from the body surface. Since the pulse wave propagates very quickly and reaches the whole body almost simultaneously with the heartbeat, the heartbeat can be regarded as synchronized with the pulse wave. Because the pulse wave produced by the heart and the R wave of the electrocardiogram are synchronized, the R-R interval of the pulse wave can be considered equivalent to the R-R interval of the electrocardiogram. The fluctuation of the pulse-wave R-R interval can be regarded as a time differential value, so by calculating the differential value and detecting the magnitude of the fluctuation, it is possible to predict, to some extent and almost independently of the wearer's intention, the degree of activation or calming of the wearer's autonomic nervous system, that is, irritation due to mental disturbance, discomfort on a crowded train, stress arising over relatively short periods, and the like.
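 The R-R interval fluctuation described above can be computed directly from the detected R-wave times. The sketch below is an illustration under the assumption that the peak times of the R waves WR have already been extracted from the pulse wave.

    # Sketch (assumption) of the R-R interval variability computation: take the
    # differences between consecutive R-wave times (the R-R intervals) and then
    # their beat-to-beat change as a measure of autonomic fluctuation.
    import numpy as np

    def rr_interval_fluctuation(r_wave_times_s: np.ndarray) -> np.ndarray:
        """Return the beat-to-beat change of the R-R interval in seconds."""
        rr_intervals = np.diff(r_wave_times_s)   # R-R intervals
        return np.diff(rr_intervals)             # their fluctuation (differential)

    # Example: a steady heart at about 60 bpm shows only small fluctuations.
    times = np.array([0.00, 1.00, 2.01, 2.99, 4.00])
    print(rr_interval_fluctuation(times))        # values close to zero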
 For brain waves, on the other hand, by detecting waves such as the alpha wave and the beta wave and the fundamental rhythm (background electroencephalogram) activity appearing over the whole brain, and by detecting their amplitude, it is possible to predict to some extent whether the activity of the brain as a whole is increasing or decreasing. For example, from the degree of activity of the prefrontal area of the brain, the degree of attention, such as how much interest the user has in a visually stimulating object, can be estimated.
 (Specifying the user state and calculating the output specification correction degree)
 As shown in FIG. 4, after acquiring the biometric information, the information providing device 10 uses the user state specifying unit 46 to specify, based on the biometric information of the user U, a user state indicating the mental state of the user U, and calculates an output specification correction degree based on the user state (step S30). The output specification correction degree is a value for correcting the reference output specification set by the output specification determination unit 50, and the final output specification is determined based on the reference output specification and the output specification correction degree.
 FIG. 10 is a table showing an example of the relationship between user states and output specification correction degrees. In the present embodiment, the user state specifying unit 46 specifies, as a user state, the brain activity of the user U based on the brain wave information of the user U. The user state specifying unit 46 may specify the brain activity by any method based on the brain wave information of the user U; for example, it may specify the brain activity from specific frequency regions of the alpha-wave and beta-wave waveforms. In this case, for example, the user state specifying unit 46 applies a fast Fourier transform to the time waveform of the brain wave and calculates the power spectrum amount of the high-frequency part of the alpha wave (for example, 10 Hz to 11.75 Hz). When the power spectrum amount of the high-frequency part of the alpha wave is large, the user can be expected to be relaxed and highly concentrated, so the user state specifying unit 46 judges that the larger the power spectrum amount of the high-frequency part of the alpha wave, the higher the brain activity. The user state specifying unit 46 sets the brain activity to VA3 when the power spectrum amount of the high-frequency part of the alpha wave is within a predetermined numerical range, to VA2 when it is within a predetermined numerical range lower than the range for the brain activity VA3, and to VA1 when it is within a predetermined numerical range lower than the range for the brain activity VA2. Here, the brain activity is assumed to increase in the order VA1, VA2, VA3. Since a larger power spectrum amount of the high-frequency component of the beta wave (for example, 18 Hz to 29.75 Hz) indicates a higher possibility of psychological "vigilance" or "agitation", the power spectrum amount of the high-frequency component of the beta wave may also be used to specify the brain activity.
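 A minimal sketch of this classification is shown below, assuming an already-sampled EEG segment; the concrete power thresholds separating VA1, VA2, and VA3 are placeholders, since the publication only speaks of predetermined numerical ranges.

    # Sketch (assumption): FFT power of the EEG in the alpha high-frequency band
    # (10 Hz to 11.75 Hz), mapped to the brain-activity classes VA1/VA2/VA3.
    import numpy as np

    def alpha_band_power(eeg: np.ndarray, fs: float, band=(10.0, 11.75)) -> float:
        """Power spectrum amount of the EEG signal in the given frequency band."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(spectrum[mask].sum())

    def brain_activity(eeg: np.ndarray, fs: float,
                       thresholds=(1e3, 1e4)) -> str:   # thresholds are placeholders
        power = alpha_band_power(eeg, fs)
        if power >= thresholds[1]:
            return "VA3"    # highest brain activity
        if power >= thresholds[0]:
            return "VA2"
        return "VA1"        # lowest brain activity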
 The user state specifying unit 46 determines the output specification correction degree based on the brain activity of the user U. In the present embodiment, the output specification correction degree is determined based on output specification correction degree relation information, which indicates the relationship between the user state (in this example, the brain activity) and the output specification correction degree. The output specification correction degree relation information is information (a table) in which user states are stored in association with output specification correction degrees, and is stored, for example, in the specification setting database 30C. In the output specification correction degree relation information, an output specification correction degree is set for each type of output unit 26, that is, here for each of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C. The user state specifying unit 46 determines the output specification correction degree based on this output specification correction degree relation information and the specified user state. Specifically, the user state specifying unit 46 reads the output specification correction degree relation information and selects, from it, the output specification correction degree associated with the specified brain activity of the user U, thereby determining the output specification correction degree. In the example of FIG. 10, the output specification correction degrees of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C are each set to -1 for the brain activity VA3, to 0 for the brain activity VA2, and to 1 for the brain activity VA1. Here, the larger the value of the output specification correction degree, the higher the resulting output specification. That is, the user state specifying unit 46 sets the output specification correction degree such that the lower the brain activity, the higher the output specification. Raising the output specification here means strengthening the sensory stimulus, and the same applies hereinafter. The values of the output specification correction degrees in FIG. 10 are merely an example and may be set as appropriate.
 The user state specifying unit 46 also specifies, as a user state, the mental stability of the user U based on the pulse wave information of the user U. In the present embodiment, the user state specifying unit 46 calculates, from the pulse wave information of the user U, the fluctuation value of the interval length between R waves WR that are consecutive in time series, that is, the differential value of the R-R interval, and specifies the mental stability of the user U based on the differential value of the R-R interval. The user state specifying unit 46 specifies the mental stability of the user U as higher as the differential value of the R-R interval is smaller, that is, as the interval length between the R waves WR fluctuates less. In the example of FIG. 10, the user state specifying unit 46 classifies the mental stability into one of three levels, VB3, VB2, and VB1, from the pulse wave information of the user U. The user state specifying unit 46 sets the mental stability to VB3 when the differential value of the R-R interval is within a predetermined numerical range, to VB2 when the differential value of the R-R interval is within a predetermined numerical range higher than the range for the mental stability VB3, and to VB1 when the differential value of the R-R interval is within a predetermined numerical range higher than the range for the mental stability VB2. The mental stability is assumed to increase in the order VB1, VB2, VB3.
 The user state specifying unit 46 determines the output specification correction degree based on the output specification correction degree relation information and the specified mental stability. Specifically, the user state specifying unit 46 reads the output specification correction degree relation information and selects, from it, the output specification correction degree associated with the specified mental stability of the user U, thereby determining the output specification correction degree. In the example of FIG. 10, the output specification correction degrees of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C are each set to 1 for the mental stability VB3, to 0 for the mental stability VB2, and to -1 for the mental stability VB1. That is, the user state specifying unit 46 sets the output specification correction degree such that the higher the mental stability, the higher the output specification (the stronger the sensory stimulus). The values of the output specification correction degrees in FIG. 10 are merely an example and may be set as appropriate.
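 The two lookups of FIG. 10 can be expressed as small tables. In the sketch below the brain-activity correction and the mental-stability correction are simply summed; this combining rule is an assumption made for illustration, since the publication leaves the correction formula open.

    # Sketch (assumption) of the output specification correction degree relation
    # information of FIG. 10, one correction per output unit 26.
    CORRECTION_BY_BRAIN_ACTIVITY = {"VA3": -1, "VA2": 0, "VA1": 1}
    CORRECTION_BY_MENTAL_STABILITY = {"VB3": 1, "VB2": 0, "VB1": -1}

    UNITS = ("display_26A", "audio_26B", "tactile_26C")

    def output_spec_correction(brain_activity: str, mental_stability: str) -> dict:
        """Correction degree for each output unit 26 given the user state."""
        correction = (CORRECTION_BY_BRAIN_ACTIVITY[brain_activity]
                      + CORRECTION_BY_MENTAL_STABILITY[mental_stability])
        return {unit: correction for unit in UNITS}

    print(output_spec_correction("VA1", "VB3"))   # {'display_26A': 2, ...}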
 In this way, the user state specifying unit 46 sets the output specification correction degree based on the preset relationship between user states and output specification correction degrees. However, the method of setting the output specification correction degree is not limited to this, and the information providing device 10 may set the output specification correction degree by any method based on the biometric information detected by the biometric sensor 22. In addition, although the information providing device 10 calculates the output specification correction degree using both the brain activity specified from the brain waves and the mental stability specified from the pulse wave, it is not limited to this; for example, the information providing device 10 may calculate the output specification correction degree using only one of the brain activity specified from the brain waves and the mental stability specified from the pulse wave. Furthermore, the information providing device 10 handles the biometric information as numerical values, and by estimating the user state based on the biometric information, errors and the like in the biometric information can be taken into account, so that the psychological state of the user U can be estimated more accurately. In other words, it can be said that the information providing device 10 can accurately estimate the psychological state of the user U by classifying the biometric information, or the user state based on the biometric information, into one of three or more degrees. However, the information providing device 10 is not limited to classifying the biometric information or the user state into three or more degrees, and may handle them, for example, as information indicating one of two alternatives such as Yes or No.
 (Generation of output restriction necessity information)
 As shown in FIG. 4, the information providing device 10 generates output restriction necessity information by means of the user state specifying unit 46 based on the biometric information of the user U (step S32). FIG. 11 is a table showing an example of the output restriction necessity information. The output restriction necessity information is information indicating whether the output of an output unit 26 needs to be restricted, and can be said to be information indicating whether the operation of the output unit 26 is permitted. The output restriction necessity information is generated for each output unit 26, that is, for each of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C. In other words, the user state specifying unit 46 generates, based on the biometric information, output restriction necessity information indicating whether the operation of each of the display unit 26A, the audio output unit 26B, and the sensory stimulation output unit 26C is permitted. More specifically, the user state specifying unit 46 generates the output restriction necessity information based on both the biometric information and the environment information, that is, based on the user state set based on the biometric information and the environment scores calculated based on the environment information. In the example of FIG. 11, the user state specifying unit 46 generates the output restriction necessity information based on the brain activity as the user state and on the whereabouts score for the subcategory "on a railroad track" as the environment score. In the example of FIG. 11, the user state specifying unit 46 generates output restriction necessity information that does not permit the use of the display unit 26A when a first condition is satisfied, namely when the whereabouts score for the subcategory "on a railroad track" is 100 and the brain activity is VA3 or VA2. The first condition is not limited to the case where the whereabouts score for the subcategory "on a railroad track" is 100 and the brain activity is VA3 or VA2; for example, it may be the case where the position of the information providing device 10 is within a predetermined area and the brain activity is equal to or lower than a predetermined brain activity threshold. The predetermined area here may be, for example, on a railroad track or on a roadway.
 Also, in the example of FIG. 11, the user state specifying unit 46 generates the output restriction necessity information based on the brain activity as the user state and on the motion score for the subcategory "moving" as the environment score. In the example of FIG. 11, the user state specifying unit 46 generates output restriction necessity information that does not permit the use of the display unit 26A when a second condition is satisfied, namely when the motion score for the subcategory "moving" is 0 and the brain activity is VA3 or VA2. The second condition is not limited to the case where the motion score for the subcategory "moving" is 0 and the brain activity is VA3 or VA2; for example, it may be the case where the amount of change per unit time of the position of the information providing device 10 is equal to or smaller than a predetermined change amount threshold and the brain activity is equal to or lower than a predetermined brain activity threshold.
 In this way, the user state specifying unit 46 generates output restriction necessity information that does not permit the use of the display unit 26A when the biometric information and the environment information satisfy a specific relationship, here when the user state and the environment scores satisfy at least one of the first condition and the second condition. On the other hand, when the user state and the environment scores satisfy neither the first condition nor the second condition, the user state specifying unit 46 does not generate output restriction necessity information prohibiting the use of the display unit 26A, but generates output restriction necessity information permitting the use of the display unit 26A. Note that the generation of the output restriction necessity information is not an essential process.
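 A minimal sketch of the two example conditions of FIG. 11 follows; the function and argument names are illustrative only.

    # Sketch (assumption) of the output restriction necessity decision: the
    # display unit 26A is disallowed when either example condition holds.
    def display_restriction(location_score_on_track: int,
                            motion_score_moving: int,
                            brain_activity: str) -> dict:
        """Return output restriction necessity information for the display unit 26A."""
        high_activity = brain_activity in ("VA3", "VA2")
        first_condition = (location_score_on_track == 100) and high_activity
        second_condition = (motion_score_moving == 0) and high_activity
        return {"display_26A_allowed": not (first_condition or second_condition)}

    print(display_restriction(100, 80, "VA3"))   # {'display_26A_allowed': False}
    print(display_restriction(0, 80, "VA1"))     # {'display_26A_allowed': True}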
 (Acquisition of the content image)
 As shown in FIG. 4, the information providing device 10 acquires the image data of the content image PS by means of the content image acquisition unit 52 (step S34). The image data of the content image PS is image data for displaying the content (display content) of the content image. The content image acquisition unit 52 acquires the image data of the content image from an external device via the content image receiving unit 28A.
 The content image acquisition unit 52 may acquire image data of a content image whose content (display content) corresponds to the position (earth coordinates) of the information providing device 10 (the user U). The position of the information providing device 10 is specified by the GNSS receiver 20C. For example, when the user U is located within a predetermined range of a certain position, the content image acquisition unit 52 receives content related to that position. In principle, the display of the content image PS can be controlled at the will of the user U; however, when display is enabled, the user does not know when, where, and at what timing content will be displayed, which is convenient but can also be a nuisance. Therefore, information set by the user U indicating, for example, whether display of the content image PS is permitted and its display specifications may be recorded in the specification setting database 30C. The content image acquisition unit 52 reads this information from the specification setting database 30C and controls the acquisition of the content image PS based on it. Alternatively, the same information as the position information and the specification setting database 30C may be posted on a site on the Internet, and the content image acquisition unit 52 may control the acquisition of the content image PS while checking its contents. Note that step S34 of acquiring the image data of the content image PS is not limited to being executed before step S36 described later, and may be executed at any timing before step S38 described later.
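 One way to realize the "within a predetermined range of a certain position" check is a great-circle distance test against the GNSS position. The sketch below assumes a 100 m radius and a simple list of location-tagged contents, neither of which is specified in the publication.

    # Sketch (assumption): fetch only the contents registered near the current
    # GNSS position of the information providing device 10.
    import math

    def within_range(lat1, lon1, lat2, lon2, radius_m=100.0) -> bool:
        """Haversine check that two coordinates are within radius_m metres."""
        r_earth = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r_earth * math.asin(math.sqrt(a)) <= radius_m

    def contents_to_fetch(device_position, located_contents):
        """Return the content entries whose registered position is near the device."""
        lat, lon = device_position
        return [c for c in located_contents
                if within_range(lat, lon, c["lat"], c["lon"])]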
 The content image acquisition unit 52 may acquire, together with the image data of the content image PS, audio data and tactile stimulus data related to the content image PS. The audio output unit 26B outputs the audio data related to the content image PS as audio content (the content of the audio), and the sensory stimulation output unit 26C outputs the tactile stimulus data related to the content image PS as tactile stimulus content (the content of the tactile stimulus).
 (Setting of the output specifications)
 Next, as shown in FIG. 4, the information providing device 10 determines the output specifications by means of the output specification determination unit 50 based on the reference output specifications and the output specification correction degrees (step S36). The output specification determination unit 50 corrects the reference output specification, which was set based on the environment information, with the output specification correction degree, which was set based on the biometric information, thereby determining the final output specification for the output unit 26. The formula or the like used to correct the reference output specification with the output specification correction degree may be arbitrary.
 As described above, the information providing device 10 corrects the reference output specifications set based on the environment information with the output specification correction degrees set based on the biometric information to determine the final output specifications. However, the information providing device 10 is not limited to determining the output specifications by correcting the reference output specifications with the output specification correction degrees, and may determine the output specifications by any method using at least one of the environment information and the biometric information. That is, the information providing device 10 may determine the output specifications by any method based on both the environment information and the biometric information, or based on either one of them. For example, the information providing device 10 may determine the output specifications based on the environment information alone, using the method described above for determining the reference output specifications. Alternatively, for example, the information providing device 10 may determine the output specifications based on the biometric information alone, using the method described above for determining the output specification correction degrees.
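 As one concrete example of such a correction, the reference level can simply be shifted by the correction degree and clamped to the valid level range; this is an assumption for illustration, since the correction formula is left arbitrary.

    # Sketch of one possible correction rule (assumption): add the correction
    # degree to the reference level and clamp the result to the range 0..4.
    def final_output_level(reference_level: int, correction_degree: int) -> int:
        """Combine a reference output-specification level with its correction degree."""
        return max(0, min(4, reference_level + correction_degree))

    # Example: reference level 3 (walking on a sidewalk) with correction -1
    # (brain activity VA3) yields level 2.
    print(final_output_level(3, -1))   # 2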
 When output restriction necessity information that does not permit the use of an output unit 26 has been generated in step S32, the output selection unit 48 selects the target devices based not only on the environment scores but also on the output restriction necessity information. That is, even an output unit 26 that was selected as a target device based on the environment scores in step S26 is excluded from the target devices if its use is not permitted in the output restriction necessity information. In other words, the output selection unit 48 selects the target devices based on the output restriction necessity information and the environment information. Furthermore, since the output restriction necessity information is set based on the biometric information, the target devices can be said to be set based on the biometric information and the environment information. However, the output selection unit 48 is not limited to setting the target devices based on both the biometric information and the environment information, and may select the target devices based on at least one of the biometric information and the environment information.
 (Output control)
 After setting the target devices and the output specifications and acquiring the image data of the content image PS and the like, the information providing device 10 causes, as shown in FIG. 4, the output control unit 54 to make the target devices produce output in accordance with the output specifications (step S38). The output control unit 54 does not operate any output unit 26 that has not been set as a target device.
 For example, when the display unit 26A is set as a target device, the output control unit 54 causes the display unit 26A to display the content image PS based on the content image data acquired by the content image acquisition unit 52, in accordance with the output specification of the display unit 26A. Since the output specifications are set based on the environment information and the biometric information as described above, displaying the content image PS in accordance with the output specifications makes it possible to display the content image PS in a manner appropriate to the environment in which the user U is placed and to the psychological state of the user U.
 When the audio output unit 26B is set as a target device, the output control unit 54 causes the audio output unit 26B to output audio based on the audio data acquired by the content image acquisition unit 52, in accordance with the output specification of the audio output unit 26B. In this case as well, weakening the auditory stimulus as, for example, the brain activity of the user U becomes higher or the mental stability of the user U becomes lower reduces the risk of the user U being bothered by the audio while concentrating on something else or lacking composure. Conversely, strengthening the auditory stimulus as the brain activity of the user U becomes lower or the mental stability of the user U becomes higher allows information to be appropriately conveyed by the audio.
 When the sensory stimulus output unit 26C is a target device, the output control unit 54 causes the sensory stimulus output unit 26C to output a tactile stimulus based on the tactile stimulus data acquired by the content image acquisition unit 52, in accordance with the output specifications of the sensory stimulus output unit 26C. In this case as well, the higher the brain activity of the user U or the lower the mental stability of the user U, the weaker the tactile stimulus is made, which reduces the risk that the user U is bothered by the tactile stimulus when the user U is concentrating on something else or has little mental leeway. Conversely, the lower the brain activity of the user U or the higher the mental stability of the user U, the stronger the tactile stimulus is made, so that the user U can appropriately obtain information from the tactile stimulus. An illustrative sketch of this relationship is given below.
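 The embodiment does not give a concrete formula for scaling the auditory and tactile stimuli, so the following is an illustrative sketch only; brain_activity, mental_stability, and base_intensity are assumed, normalized quantities introduced for this example.

    def stimulus_intensity(base_intensity, brain_activity, mental_stability):
        """Scale a stimulus (volume, vibration strength, etc.) so that it is
        weaker when brain activity is high or mental stability is low, and
        stronger when brain activity is low and mental stability is high.

        brain_activity and mental_stability are assumed to lie in [0, 1].
        """
        # Higher brain activity -> weaker stimulus; higher stability -> stronger stimulus.
        factor = (1.0 - brain_activity + mental_stability) / 2.0
        return base_intensity * max(0.0, min(1.0, factor))

    # A user concentrating on something else (high activity, low stability)
    # receives a much weaker stimulus than a relaxed user.
    print(stimulus_intensity(100, brain_activity=0.9, mental_stability=0.2))  # 15.0
    print(stimulus_intensity(100, brain_activity=0.2, mental_stability=0.9))  # 85.0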
 When it is determined in step S12 that the user is in a dangerous state and danger notification content has been set, the output control unit 54 causes the target devices to give notice of the danger notification content in accordance with the set output specifications.
 As described above, the information providing device 10 according to the present embodiment sets the output specifications based on the environmental information and the biological information, and can thereby output sensory stimuli at a degree appropriate to the environment in which the user U is placed and to the psychological state of the user U. Further, by selecting the target devices to be operated based on the environmental information and the biological information, the information providing device 10 can select sensory stimuli appropriate to the environment in which the user U is placed and to the psychological state of the user U. However, the information providing device 10 is not limited to using both the environmental information and the biological information, and may use, for example, only one of them. Accordingly, the information providing device 10 may, for example, select the target devices and set the output specifications based on the environmental information, or may select the target devices and set the output specifications based on the biological information.
 (Effect)
 As described above, the information providing device 10 according to one aspect of the present embodiment is a device that provides information to a user U, and includes the output unit 26, the environment sensor 20, the output specification determination unit 50, and the output control unit 54. The output unit 26 includes the display unit 26A that outputs a visual stimulus, the audio output unit 26B that outputs an auditory stimulus, and the sensory stimulus output unit 26C that outputs a sensory stimulus different from the visual and auditory stimuli. The environment sensor 20 detects environmental information around the information providing device 10. The output specification determination unit 50 determines, based on the environmental information, the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus, that is, the output specifications of the display unit 26A, the audio output unit 26B, and the sensory stimulus output unit 26C. The output control unit 54 causes the output unit 26 to output the visual stimulus, the auditory stimulus, and the sensory stimulus based on the output specifications. By setting the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the environmental information, the information providing device 10 can output these stimuli with an appropriate balance according to the environment in which the user U is placed. Therefore, the information providing device 10 can appropriately provide information to the user U.
 Further, the information providing device 10 according to one aspect of the present embodiment includes a plurality of environment sensors that detect mutually different types of environmental information, and the environment specifying unit 44. The environment specifying unit 44 specifies, based on the different types of environmental information, an environment pattern that comprehensively indicates the current environment of the user U. The output specification determination unit 50 determines the output specifications based on the environment pattern. By setting the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on an environment pattern specified from a plurality of types of environmental information, the information providing device 10 can provide information more appropriately according to the environment in which the user U is placed. A simplified sketch of such a pattern determination is shown below.
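 The embodiment does not fix a concrete rule for deriving the environment pattern, so the following is only a sketch under assumed inputs; location, noise_db, and speed_kmh, as well as the pattern labels and the example specification table, are hypothetical names introduced for illustration.

    def specify_environment_pattern(location, noise_db, speed_kmh):
        """Combine different kinds of environmental information into a single
        environment pattern label (a simplified stand-in for the environment
        specifying unit 44)."""
        if location == "roadway" or speed_kmh > 10:
            return "moving_outdoors"       # e.g. walking or cycling along a road
        if noise_db > 70:
            return "noisy_stationary"      # e.g. standing on a station platform
        return "quiet_stationary"          # e.g. sitting indoors

    # An output specification could then be looked up per pattern.
    OUTPUT_SPECS = {
        "moving_outdoors":  {"image_size": "small", "volume": 0.3, "vibration": 0.8},
        "noisy_stationary": {"image_size": "large", "volume": 0.8, "vibration": 0.5},
        "quiet_stationary": {"image_size": "large", "volume": 0.4, "vibration": 0.2},
    }

    pattern = specify_environment_pattern("roadway", noise_db=65, speed_kmh=4)
    print(pattern, OUTPUT_SPECS[pattern])  # moving_outdoors {...}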
 Further, in the information providing device 10 according to one aspect of the present embodiment, the output specification determination unit 50 determines, as the output specifications of the visual stimulus, at least one of the size of the image displayed by the display unit 26A, the transparency of the image displayed by the display unit 26A, and the content (display content) of the image displayed by the display unit 26A. By determining these as the output specifications of the visual stimulus, the information providing device 10 can provide visual information more appropriately.
 Further, in the information providing device 10 according to one aspect of the present embodiment, the output specification determination unit 50 determines, as the output specifications of the auditory stimulus, at least one of the volume and the acoustics of the audio output by the audio output unit 26B. By determining these as the output specifications of the auditory stimulus, the information providing device 10 can provide auditory information more appropriately.
 Further, in the information providing device 10 according to one aspect of the present embodiment, the sensory stimulus output unit 26C outputs a tactile stimulus as the sensory stimulus, and the output specification determination unit 50 determines, as the output specifications of the tactile stimulus, at least one of the strength of the tactile stimulus output by the sensory stimulus output unit 26C and the frequency with which the tactile stimulus is output. By determining these as the output specifications of the tactile stimulus, the information providing device 10 can provide tactile information more appropriately.
 Further, the information providing device 10 according to one aspect of the present embodiment is a device that provides information to a user U, and includes the output unit 26, the biological sensor 22, the output specification determination unit 50, and the output control unit 54. The output unit 26 includes the display unit 26A that outputs a visual stimulus, the audio output unit 26B that outputs an auditory stimulus, and the sensory stimulus output unit 26C that outputs a sensory stimulus different from the visual and auditory stimuli. The biological sensor 22 detects biological information of the user U. The output specification determination unit 50 determines, based on the biological information, the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus, that is, the output specifications of the display unit 26A, the audio output unit 26B, and the sensory stimulus output unit 26C. The output control unit 54 causes the output unit 26 to output the visual stimulus, the auditory stimulus, and the sensory stimulus based on the output specifications. By setting the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the biological information, the information providing device 10 can output these stimuli with an appropriate balance according to the psychological state of the user U. Therefore, the information providing device 10 can appropriately provide information to the user U.
 Further, in one aspect of the present embodiment, the biological information includes information on the autonomic nerves of the user U, and the output specification determination unit 50 determines the output specifications based on the information on the autonomic nerves of the user U. By setting the output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the information on the autonomic nerves of the user U, the information providing device 10 can provide information more appropriately according to the psychological state of the user U.
 Further, the information providing device 10 according to one aspect of the present embodiment is a device that provides information to a user U, and includes the output unit 26, the environment sensor 20, the output selection unit 48, and the output control unit 54. The output unit 26 includes the display unit 26A that outputs a visual stimulus, the audio output unit 26B that outputs an auditory stimulus, and the sensory stimulus output unit 26C that outputs a sensory stimulus different from the visual and auditory stimuli. The environment sensor 20 detects environmental information around the information providing device 10. The output selection unit 48 selects, based on the environmental information, the target devices to be used from among the display unit 26A, the audio output unit 26B, and the sensory stimulus output unit 26C. The output control unit 54 controls the target devices. By selecting the target devices based on the environmental information, the information providing device 10 can appropriately select which of the visual stimulus, the auditory stimulus, and the sensory stimulus is output according to the environment in which the user U is placed. Therefore, the information providing device 10 can appropriately provide information to the user U according to the environment in which the user U is placed.
 Further, the information providing device 10 according to one aspect of the present embodiment further includes the biological sensor 22 that detects biological information of the user, and the output selection unit 48 selects the target devices based on the environmental information and the biological information of the user U. By selecting the target devices to be operated based on the environmental information and the biological information, the information providing device 10 can select sensory stimuli appropriate to the environment in which the user U is placed and to the psychological state of the user U.
 Further, in one aspect of the present embodiment, the environment sensor 20 detects position information of the information providing device 10 as the environmental information, and the biological sensor 22 detects the brain activity level of the user U as the biological information. The output selection unit 48 selects the display unit 26A as a target device when at least one of a first condition, namely that the position of the information providing device 10 is within a predetermined area and the brain activity level is equal to or less than a brain activity threshold, and a second condition, namely that the amount of change in the position of the information providing device 10 per unit time is equal to or less than a predetermined change amount threshold and the brain activity level is equal to or less than the brain activity threshold, is satisfied. On the other hand, when neither the first condition nor the second condition is satisfied, the output selection unit 48 does not select the display unit 26A as a target device. Because the information providing device 10 decides in this way whether to operate the display unit 26A, it can output the visual stimulus to the user U appropriately, for example, when the user U is not moving and is relaxed, or when the user U is riding in a vehicle and is relaxed. A sketch of this decision is given below.
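 The following is a minimal sketch of the first/second-condition check, not the embodiment's own implementation; the argument names and the default threshold values are assumptions introduced only for this example.

    def should_select_display(in_predetermined_area, brain_activity,
                              position_change_per_unit_time,
                              brain_activity_threshold=0.5,
                              change_amount_threshold=1.0):
        """Return True if the display unit should be selected as a target device.

        First condition:  the device is inside the predetermined area and the
                          brain activity level is at or below the threshold.
        Second condition: the device position is (nearly) unchanged per unit
                          time and the brain activity level is at or below the
                          threshold.
        """
        low_activity = brain_activity <= brain_activity_threshold
        first_condition = in_predetermined_area and low_activity
        second_condition = (position_change_per_unit_time <= change_amount_threshold
                            and low_activity)
        return first_condition or second_condition

    # A relaxed user riding in a vehicle inside the predetermined area:
    print(should_select_display(True, 0.3, position_change_per_unit_time=20.0))   # True
    # A user moving along a road with high brain activity: the display is not selected.
    print(should_select_display(False, 0.8, position_change_per_unit_time=5.0))   # False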
 Although the present embodiment has been described above, the embodiment is not limited by the contents of the embodiment. The above-described components include those that can be easily conceived by a person skilled in the art, those that are substantially the same, and those within a so-called range of equivalents. Further, the above-described components can be combined as appropriate, and the configurations of the respective embodiments can also be combined. Furthermore, various omissions, substitutions, or modifications of the components can be made without departing from the gist of the above-described embodiment.
 The information providing device, the information providing method, and the program of the present embodiment can be used, for example, for image display.
 10 Information providing device
 20 Environment sensor
 22 Biological sensor
 26 Output unit
 26A Display unit
 26B Audio output unit
 26C Sensory stimulus output unit
 40 Environment information acquisition unit
 42 Biological information acquisition unit
 44 Environment specifying unit
 46 User state specifying unit
 48 Output selection unit
 50 Output specification determination unit
 52 Content image acquisition unit
 54 Output control unit
 PM Environment image
 PS Content image

Claims (9)

  1.  An information providing device that provides information to a user, the information providing device comprising:
     an output unit including a display unit that outputs a visual stimulus, an audio output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a sensory stimulus different from the visual stimulus and the auditory stimulus;
     an environment sensor that detects environmental information around the information providing device; and
     an output selection unit that selects any one of the display unit, the audio output unit, and the sensory stimulus output unit based on the environmental information.
  2.  The information providing device according to claim 1, further comprising a biological sensor that detects biological information of the user,
     wherein the output selection unit selects any one of the display unit, the audio output unit, and the sensory stimulus output unit based on the environmental information and the biological information of the user.
  3.  The information providing device according to claim 2, wherein
     the environment sensor detects position information of the information providing device as the environmental information,
     the biological sensor detects a brain activity level of the user as the biological information, and
     the output selection unit
     selects the display unit when at least one of a first condition that the position of the information providing device is within a predetermined area and the brain activity level is equal to or less than a predetermined brain activity threshold, and a second condition that an amount of change in the position of the information providing device per unit time is equal to or less than a predetermined change amount threshold and the brain activity level is equal to or less than the brain activity threshold, is satisfied, and
     does not select the display unit when neither the first condition nor the second condition is satisfied.
  4.  The information providing device according to claim 3, wherein the predetermined area is on a railroad track or a roadway.
  5.  The information providing device according to any one of claims 1 to 4, wherein the sensory stimulus output unit outputs a tactile stimulus as the sensory stimulus.
  6.  The information providing device according to any one of claims 1 to 5, further comprising:
     an output specification determination unit that determines output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the environmental information; and
     an output control unit that causes the output unit to output the visual stimulus, the auditory stimulus, and the sensory stimulus based on the output specifications.
  7.  The information providing device according to any one of claims 1 to 6, further comprising:
     a biological sensor that detects biological information of the user;
     an output specification determination unit that determines output specifications of the visual stimulus, the auditory stimulus, and the sensory stimulus based on the biological information; and
     an output control unit that causes the output unit to output the visual stimulus, the auditory stimulus, and the sensory stimulus based on the output specifications.
  8.  An information providing method for providing information to a user, the method comprising:
     detecting environmental information of the surroundings; and
     selecting, based on the environmental information, any one of a display unit that outputs a visual stimulus, an audio output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a sensory stimulus different from the visual stimulus and the auditory stimulus.
  9.  A program that causes a computer to execute an information providing method for providing information to a user, the method comprising:
     detecting environmental information of the surroundings; and
     selecting, based on the environmental information, any one of a display unit that outputs a visual stimulus, an audio output unit that outputs an auditory stimulus, and a sensory stimulus output unit that outputs a sensory stimulus different from the visual stimulus and the auditory stimulus.
PCT/JP2021/034398 2020-09-18 2021-09-17 Information provision device, information provision method, and program WO2022059784A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/179,409 US20230200711A1 (en) 2020-09-18 2023-03-07 Information providing device, information providing method, and computer-readable storage medium

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2020-157524 2020-09-18
JP2020157525A JP2022051185A (en) 2020-09-18 2020-09-18 Information provision device, information provision method, and program
JP2020157526A JP2022051186A (en) 2020-09-18 2020-09-18 Information provision device, information provision method, and program
JP2020-157525 2020-09-18
JP2020157524A JP2022051184A (en) 2020-09-18 2020-09-18 Information provision device, information provision method, and program
JP2020-157526 2020-09-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/179,409 Continuation US20230200711A1 (en) 2020-09-18 2023-03-07 Information providing device, information providing method, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022059784A1 (en)

Family

ID=80776170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/034398 WO2022059784A1 (en) 2020-09-18 2021-09-17 Information provision device, information provision method, and program

Country Status (2)

Country Link
US (1) US20230200711A1 (en)
WO (1) WO2022059784A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019207896A1 (en) * 2018-04-25 2019-10-31 ソニー株式会社 Information processing system, information processing method, and recording medium
JP2020009027A (en) * 2018-07-04 2020-01-16 学校法人 芝浦工業大学 Live production system and live production method
JP2020067693A (en) * 2018-10-22 2020-04-30 セイコーインスツル株式会社 Information transmission device and program

Also Published As

Publication number Publication date
US20230200711A1 (en) 2023-06-29

Similar Documents

Publication Publication Date Title
JP6184989B2 (en) Biosensor, communicator and controller for monitoring eye movement and methods for using them
CN106471419B (en) Management information is shown
US10039445B1 (en) Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US20180184958A1 (en) Systems and methods for measuring reactions of head, eyes, eyelids and pupils
US20110077548A1 (en) Biosensors, communicators, and controllers monitoring eye movement and methods for using them
EP4161387B1 (en) Sound-based attentive state assessment
US20240115831A1 (en) Enhanced meditation experience based on bio-feedback
WO2022059784A1 (en) Information provision device, information provision method, and program
US20240164672A1 (en) Stress detection
JP2022051185A (en) Information provision device, information provision method, and program
JP2022051184A (en) Information provision device, information provision method, and program
JP2022051186A (en) Information provision device, information provision method, and program
WO2022025296A1 (en) Display device, display method, and program
JP2022027186A (en) Display device, display method, and program
JP2022027084A (en) Display device, display method, and program
JP2022026949A (en) Display device, display method, and program
JP2022027184A (en) Display device, display method, and program
JP2022027183A (en) Display device, display method, and program
JP2022027085A (en) Display device, display method, and program
JP2022027086A (en) Display device, display method, and program
JP2022027185A (en) Display device, display method, and program
KR20180054400A (en) Electronic device, wearable device, and method for providing content based somatic senses using ultrasound

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21869466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21869466

Country of ref document: EP

Kind code of ref document: A1