WO2023243338A1 - Information processing device, information processing method, program, and information processing system - Google Patents

Information processing device, information processing method, program, and information processing system

Info

Publication number
WO2023243338A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
peripheral
audio
sensor
surrounding
Prior art date
Application number
PCT/JP2023/019250
Other languages
French (fr)
Japanese (ja)
Inventor
正幸 横山
孝悌 清水
Original Assignee
Sony Group Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2023243338A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 - Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/88 - Sonar systems specially adapted for specific applications
    • G01S 15/93 - Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • G01S 17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute

Definitions

  • The present technology relates to an information processing device, an information processing method, a program, and an information processing system that can be applied to notifying a user of information about the surrounding environment.
  • In a known technique, the reliability of an optical distance detector is calculated based on the difference between the distance information obtained from the optical distance detector and the distance information obtained from an ultrasonic distance detector. If the difference between the two pieces of distance information is large, the reliability is determined to be low, and the distance information obtained from the optical distance detector is corrected using the distance information obtained from the ultrasonic distance detector.
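As a rough, hedged illustration of this prior-art correction scheme (the function name and the disagreement threshold below are assumptions made for the sketch, not values taken from the cited art):

```python
# Illustrative sketch of the prior-art reliability check: fall back to the
# ultrasonic reading when the optical reading disagrees with it too much.
# The disagreement threshold is a hypothetical parameter.

def corrected_distance(optical_m: float, ultrasonic_m: float,
                       max_disagreement_m: float = 0.5) -> float:
    """Return the optical distance, corrected with the ultrasonic distance
    when the difference between the two is large (low reliability)."""
    if abs(optical_m - ultrasonic_m) > max_disagreement_m:
        # Large difference: the optical reading is judged unreliable.
        return ultrasonic_m
    return optical_m

print(corrected_distance(2.1, 2.0))  # 2.1 (readings agree, optical kept)
print(corrected_distance(8.0, 1.2))  # 1.2 (large disagreement, ultrasonic used)
```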
  • the purpose of the present technology is to provide an information processing device, an information processing method, a program, and an information processing system that are capable of reporting information about the surrounding environment with high accuracy.
  • an information processing device includes a peripheral information acquisition section, an audio information generation section, and a notification control section.
  • the surrounding information acquisition unit obtains first surrounding information and second surrounding information regarding the surrounding environment based on detection results of one or more object detection sensors.
  • The audio information generation unit generates first audio information using first musical tone data for notifying the first peripheral information, based on the first peripheral information, and generates second audio information using second musical tone data for notifying the second peripheral information, based on the second peripheral information.
  • the notification control unit outputs both the first audio information and the second audio information.
  • In the information processing device, first peripheral information and second peripheral information are acquired based on detection results by one or more object detection sensors. Then, first audio information is generated using the first musical tone data based on the first peripheral information. Also, second audio information is generated using the second musical tone data based on the second peripheral information. Both the first audio information and the second audio information are output. Thereby, it becomes possible to notify both the first surrounding information and the second surrounding information via audio, and it becomes possible to notify the user of information about the surrounding environment with high accuracy.
  • the one or more object detection sensors may include a first object detection sensor and a second object detection sensor.
  • The peripheral information acquisition unit may acquire the first peripheral information based on the detection result of the first object detection sensor, and acquire the second peripheral information based on the detection result of the second object detection sensor.
  • the first object detection sensor may be a first ranging sensor that operates according to a first method.
  • the second object detection sensor may be a second ranging sensor that operates according to a second method different from the first method.
  • the first object detection sensor may be a first distance measurement sensor arranged with the first direction as the detection direction.
  • the second object detection sensor may be a second distance measurement sensor arranged with a detection direction in a second direction different from the first direction.
  • the one or more object detection sensors may be sensors that generate image information.
  • The peripheral information acquisition unit may acquire the first peripheral information based on information on some pixel areas of the image information, and acquire the second peripheral information based on information on other pixel areas of the image information.
  • the one or more object detection sensors may be sensors that generate image information.
  • The peripheral information acquisition unit may acquire information regarding a first type of object detected based on the image information as the first peripheral information, and acquire information regarding a second type of object different from the first type as the second peripheral information.
  • the surrounding information acquisition unit may generate integrated surrounding information based on the first surrounding information and the second surrounding information.
  • the first ranging sensor may operate using an optical laser method.
  • the second ranging sensor may operate using an ultrasonic method.
  • the surrounding information acquisition unit may generate the integrated surrounding information based on the stability of detection by the first distance measurement sensor and the stability of detection by the second distance measurement sensor.
  • The peripheral information acquisition unit may generate, as the integrated surrounding information, information indicating that a light-transmitting member or a light-absorbing member exists in the periphery.
  • The peripheral information acquisition unit may further generate, as the integrated surrounding information, information regarding at least one of the material and the type of the light-transmitting member or the light-absorbing member, based on hardness information acquired as the second peripheral information from the detection result of the second ranging sensor.
  • The audio information generation unit may determine whether or not to output the first audio information based on the first peripheral information, and if it is determined that the first audio information is not to be output, the output of the first audio information by the notification control unit may be restricted.
  • the first audio information may be first musical tone information constituting a predetermined song.
  • the second audio information may be second musical tone information constituting the predetermined music piece.
  • the audio information generation unit may generate the first audio information by controlling musical tone parameters of the first musical tone data based on the first peripheral information.
  • The musical tone parameters may include at least one of volume, frequency, pitch, speed, BPM, or tempo.
  • the first surrounding information may include distance information.
  • the audio information generating section may generate the first audio information by controlling the musical tone parameters based on the distance information.
  • the audio information generation unit may control localization of the first audio information based on a detection direction of the first ranging sensor.
  • An information processing method according to the present technology is an information processing method executed by a computer system, and includes acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors. First audio information is generated using first musical tone data for notifying the first peripheral information, based on the first peripheral information, and second audio information is generated using second musical tone data for notifying the second peripheral information, based on the second peripheral information. Both the first audio information and the second audio information are output.
  • A program according to the present technology causes a computer system to execute the steps of: acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors; generating first audio information using first musical tone data for notifying the first peripheral information, based on the first peripheral information; generating second audio information using second musical tone data for notifying the second peripheral information, based on the second peripheral information; and outputting both the first audio information and the second audio information.
  • An information processing system includes one or more object detection sensors, the surrounding information acquisition section, the audio information generation section, and the notification control section.
  • the information processing system may further include an audio output unit that outputs the first audio information and the second audio information, and an information output unit that outputs information to the user.
  • FIG. 1 is a schematic diagram for explaining an overview of a surrounding information notification system according to an embodiment of the present technology.
  • FIG. 2 is a schematic diagram showing an example of the functional configuration of the surrounding information notification system.
  • FIG. 3 is a flowchart showing an example of the basic operation of the surrounding information notification system.
  • FIG. 4 is a schematic diagram for explaining a configuration example of the sensor section.
  • FIG. 5 is a schematic diagram for explaining another configuration example of the sensor section.
  • FIG. 6 is a schematic diagram for explaining another configuration example of the sensor section.
  • FIG. 7 is a schematic diagram showing an example of the configuration of one or more object detection sensors.
  • FIG. 8 is a schematic diagram showing another example of the configuration of one or more object detection sensors.
  • FIG. 9 is a block diagram for explaining notification of surrounding information according to the first embodiment.
  • A schematic diagram for explaining an example in which an image sensor is arranged as an object detection sensor.
  • A block diagram showing a configuration example for realizing notification of surrounding information according to a second embodiment.
  • A flowchart showing an example of notification of surrounding information according to that embodiment.
  • A table showing an example of determination of the situation of the surrounding environment.
  • A schematic diagram showing a case where an obstacle exists at a position in the front direction of the user.
  • A schematic diagram showing a case where an obstacle exists on the ground in front of the user.
  • A schematic diagram showing a case where a fall danger point exists in the front direction of the user.
  • A table showing an example of a process for notifying obstacles and fall danger points.
  • A schematic diagram for explaining another example of the process for notifying obstacles and fall danger points.
  • A schematic diagram showing a detection example of a "ground obstacle (large)" and a "ground obstacle (small)".
  • A block diagram showing a configuration example for realizing notification of surrounding information according to a third embodiment.
  • A flowchart showing an example of notification of surrounding information according to that embodiment.
  • A schematic diagram showing an example of an obstacle space map.
  • A schematic diagram showing a case where an obstacle exists on the ground in front of the user.
  • A schematic diagram showing a case where a fall danger point exists in the front direction of the user.
  • A table showing an example of a process for notifying obstacles and fall danger points.
  • A schematic diagram showing another configuration example of the surrounding information notification system according to that embodiment.
  • A schematic diagram showing a configuration example of a surrounding information notification system according to a fourth embodiment.
  • A flowchart showing an example of notification of surrounding information according to that embodiment.
  • A schematic diagram showing another example of a method of outputting audio according to distance.
  • A block diagram showing an example of the hardware configuration of a computer (information processing device) that can be used to construct a surrounding information notification system according to the present technology.
  • A block diagram showing an example of a schematic configuration of a vehicle control system.
  • An explanatory diagram showing an example of installation positions of an outside-vehicle information detection section and an imaging section.
  • FIG. 1 is a schematic diagram for explaining an overview of a surrounding information notification system according to an embodiment of the present technology.
  • The surrounding information notification system 1 is constructed as a system that can be used by visually impaired people, such as those who are blind or have low vision (amblyopia). That is, the user 2 of the surrounding information notification system 1 is a visually impaired person.
  • the surrounding information notification system 1 corresponds to an embodiment of an information processing system according to the present technology.
  • the user 2 uses a white cane 4 to grasp the situation of the surrounding environment when moving on the ground 3.
  • the user 2 can grasp the situation on the ground 3 based on the sensation (tactile sensation) obtained through the white cane 4.
  • For example, the user 2 can grasp objects 5 such as cars (vehicles), utility poles, and signboards that are present in the direction in which the user 2 is traveling. It is also possible to grasp stairs going upward (up stairs), escalators going upward (up escalators), and the like. In addition, it is possible to grasp the situation of various objects 5 such as stairs going downward (down stairs), escalators going downward (down escalators), the edge of a station platform (the boundary between the platform and the tracks), and Braille blocks installed on the ground 3.
  • a car is illustrated as the object 5.
  • In the present disclosure, an obstacle includes any object 5 on the ground 3, such as a telephone pole, a signboard, a wall, an upward staircase, an upward escalator, a pedestrian, a bicycle, or a motorbike.
  • A fall danger point includes any shape or area where there is a risk of the user 2 falling, such as stairs going downward (down stairs), escalators going downward (down escalators), the edge of a station platform, or a hole.
  • In the present disclosure, the "fall danger point" is included in the concept of a "downwardly concave area."
  • the surrounding information notification system 1 is capable of reporting surrounding information regarding the surrounding environment to the user 2 with high precision.
  • the user 2 can avoid various dangers based on the notified surrounding information. For example, it is possible to avoid collision with the object 5 that becomes an obstacle during movement (walking, etc.). It is also possible to avoid falling at dangerous points, such as falling down stairs or falling from a station platform onto the tracks.
  • the surrounding information notification system 1 can also be called a danger avoidance system.
  • The notification of surrounding information can also be referred to as the reporting (annunciation) of surrounding information.
  • the surrounding information notification system 1 includes a sensor section 6, an information output section 7, and a controller 8.
  • the sensor unit 6 performs sensing regarding the surrounding environment.
  • the information output unit 7 outputs information to the user 2.
  • the controller 8 controls the operation of the sensor section 6 and the information output section 7.
  • the controller 8 acquires surrounding information regarding the surrounding environment and notifies the user 2 of the surrounding information.
  • FIG. 2 is a schematic diagram showing an example of the functional configuration of the surrounding information notification system 1.
  • the sensor section 6 includes one or more object detection sensors 10.
  • The object detection sensor 10 includes any sensor capable of outputting information from which the object 5 can be detected, a signal including such information, or the like.
  • any sensor that can output information (signal) that can determine whether or not the object 5 is detected (ON/OFF) is included.
  • The object detection sensor 10 may also be any sensor capable of detecting various information regarding the object 5, such as the distance to the object 5, the shape of the object 5, the size of the object 5, and the material of the object 5.
  • In the present disclosure, "detection" is used interchangeably with "sensing."
  • a sensor that acquires biological information such as pulse, heartbeat, body temperature, and brain waves may be used as necessary.
  • As the object detection sensor 10, for example, a distance measurement sensor (ranging sensor), an image sensor (digital camera), an infrared sensor, or the like can be used.
  • Examples of the distance measurement sensor include an optical laser distance measurement sensor (hereinafter referred to as a laser ranging sensor), an ultrasonic distance measurement sensor (hereinafter referred to as an ultrasonic ranging sensor), a stereo camera, a ToF (Time of Flight) sensor, a LiDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging) sensor, and a structured light type distance measurement sensor.
  • As the image sensor, for example, a sensor including an imaging element such as a CMOS (Complementary Metal-Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor may be used.
  • a sensor having both the functions of an image sensor and a ranging sensor may be used.
  • a ToF sensor or the like that can detect distance information for each pixel may be used.
  • An image sensor corresponds to one embodiment of a sensor that generates image information.
  • the information output unit 7 is configured by an arbitrary device for outputting information to the user 2.
  • a speaker 11 and a vibration device 12 are used as an example of the information output section 7.
  • the speaker 11 outputs audio. By driving the speaker 11, it becomes possible to notify the user 2 of information via audio.
  • a headset 13 including a speaker 11 is used as the information output section 7 and is attached to the user's 2 head.
  • the headset 13 is not limited to an overhead type, but may be an in-ear type, a canal type, an open-ear type, or a head-mounted device. Further, it may be a device having hearing aid processing such as a hearing aid or a sound collector.
  • the speaker 11 functions as an audio output section. That is, the information output section 7 is configured to include an audio output section.
  • the vibration device 12 outputs vibration.
  • the vibration device 12 is placed at any position that contacts the user's 2 body.
  • any vibration motor or the like that can generate notification vibrations or the like can be used as the vibration device 12.
  • the surrounding information notification system 1 further includes a communication section 14 and a storage section 15.
  • the communication unit 14 and the storage unit 15 are connected to the controller 8 via a bus or the like.
  • the communication unit 14 is a module for performing network communication, short-range wireless communication, etc. with other devices.
  • a wireless LAN module such as WiFi or a communication module such as Bluetooth (registered trademark) is provided.
  • the sensor unit 6 shown in FIG. 2 and the controller 8 may be communicably connected via wireless communication or the like.
  • a communication section is also configured in the sensor section 6 (not shown).
  • an object detection sensor 10 including a communication section is used.
  • the information output unit 7 shown in FIG. 2 and the controller 8 may be communicably connected via wireless communication or the like.
  • the information output section 7 is also configured with a communication section (not shown).
  • the headset 13 shown in FIG. 1 includes a communication section, and is connected to the controller 8 via wireless communication.
  • the storage unit 15 is a storage device such as a nonvolatile memory, and for example, an HDD (Hard Disk Drive) or SSD (Solid State Drive) is used. In addition, any computer-readable non-transitory storage medium may be used.
  • the storage unit 15 stores a control program for controlling the overall operation of the surrounding information notification system 1.
  • The storage unit 15 also stores a history of detection results (sensing results) by the sensor unit 6, a history of acquired surrounding information, user information regarding the user 2, information such as the methods and characteristics of the sensor unit 6 and the information output unit 7, and other various information necessary for operating the surrounding information notification system 1. Note that the method of installing the control program and the like is not limited.
  • the controller 8 controls the operation of each block included in the surrounding information notification system 1.
  • the controller 8 includes hardware necessary for a computer, such as a processor such as a CPU, GPU, or DSP, memory such as a ROM or RAM, and a storage device such as an HDD.
  • the information processing method according to the present technology is executed by the CPU loading the program according to the present technology stored in the storage unit 15 or the memory into the RAM and executing it.
  • A device such as a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array), or another device such as an ASIC (Application Specific Integrated Circuit), may be used as the controller 8.
  • The processor of the controller 8 executes a program (for example, an application program) according to the present technology, whereby the peripheral information acquisition section 17, the notification information generation section 18, and the notification control section 19 are realized as functional blocks.
  • the information processing method according to this embodiment is executed by these functional blocks. Note that dedicated hardware such as an IC (integrated circuit) may be used as appropriate to realize each functional block.
  • the surrounding information acquisition unit 17 obtains surrounding information regarding the surrounding environment based on the detection results of one or more object detection sensors 10.
  • the peripheral information acquisition unit 17 can acquire the presence or absence of a peripheral object 5, the distance to the object 5, the shape of the object 5, the size of the object 5, the material of the object 5, etc. as peripheral information.
  • For example, when a distance measurement sensor is used as the object detection sensor 10, it is possible to acquire, as surrounding information, information regarding fall danger points such as descending stairs or the edge of a station platform.
  • the peripheral information acquisition unit 17 can also acquire information such as the presence or absence of a fall danger point, the distance to the fall danger point, the shape of the fall danger point, the size of the fall danger point, and the like.
  • various peripheral information regarding the surrounding environment may be acquired.
  • the distance to the object 5 is detected by the distance measurement sensor. That is, surrounding information is detected by the ranging sensor. In this way, peripheral information may be detected by the object detection sensor 10 in some cases. That is, the surrounding information may be generated by the sensor unit 6 in some cases. In this case, the surrounding information acquisition unit 17 obtains surrounding information by receiving the surrounding information from the sensor unit 6.
  • peripheral information may be generated by performing recognition processing, analysis processing, etc. based on information, signals, etc. detected by one or more object detection sensors 10.
  • the peripheral information acquisition unit 17 acquires peripheral information by generating peripheral information based on detection results by one or more object detection sensors.
  • acquiring peripheral information based on detection results by one or more object detection sensors 10 means receiving peripheral information detected by one or more object detection sensors 10 and detecting one or more objects. This includes both generating surrounding information based on the detection results by the sensor 10.
  • In this surrounding information notification system 1, it is also possible to acquire the following detection results as surrounding information.
  • When the distance measurement value (distance information) of a ranging sensor whose detection direction is set to the front direction (traveling direction), upward direction, downward direction, left side, right side, or the like of the user 2 falls below a threshold value, it is detected that an obstacle is approaching in that detection direction.
  • By performing object recognition on image information acquired by an image sensor, specific objects such as people and cars are detected.
  • When the distance measurement value of a ranging sensor directed downward and forward exceeds a threshold value, it is detected that a step or the like is approaching.
  • If the surrounding information notification system 1 is configured as a vehicle-mounted system, it detects pedestrians in front, and obstacles and people behind when entering a garage.
  • Note that the threshold values may be set automatically or dynamically by the surrounding information notification system 1, or may be set by the user 2 as appropriate. A minimal sketch of these threshold checks is shown below.
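In the sketch below, the direction keys, threshold values, and reading interface are illustrative assumptions, not definitions from the present disclosure.

```python
# Hedged sketch of the threshold-based detections listed above.

APPROACH_THRESHOLD_M = 1.0  # obstacle at or closer than this: "approaching"
STEP_THRESHOLD_M = 1.2      # downward-forward reading at or above this: step ahead

def detect(readings: dict) -> list:
    """Map per-direction distance readings (in meters) to detection events."""
    events = []
    for direction in ("front", "up", "down", "left", "right"):
        d = readings.get(direction)
        # Distance at or below the threshold: obstacle approaching there.
        if d is not None and d <= APPROACH_THRESHOLD_M:
            events.append("obstacle approaching: " + direction)
    # Downward-forward distance at or above the threshold: step or drop ahead.
    d = readings.get("down_forward")
    if d is not None and d >= STEP_THRESHOLD_M:
        events.append("step or drop approaching")
    return events

print(detect({"front": 0.8, "down_forward": 1.5}))
# ['obstacle approaching: front', 'step or drop approaching']
```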
  • For the object recognition, any image recognition technology may be used, such as image size conversion, character recognition, shape recognition, matching processing using a model image of the object, edge detection, and projective transformation.
  • Any machine learning algorithm using, for example, a DNN (Deep Neural Network), an RNN (Recurrent Neural Network), or a CNN (Convolutional Neural Network) may be used.
  • For example, AI (artificial intelligence) or the like may be used.
  • By using semantic segmentation, it is also possible to determine the type of object for each pixel in the image.
  • The machine learning algorithm may be applied to any processing within the present disclosure.
  • As material information, it is possible to acquire information related to hardness, such as amplitude information of reflected ultrasonic waves.
  • information regarding other materials may also be acquired.
  • the notification information generation unit 18 generates notification information for notifying the user 2 of surrounding information.
  • the notification information includes any information for realizing output of peripheral information by the speaker 11 and the vibration device 12 arranged as the information output section 7.
  • the notification information includes audio information to be output from the speaker 11 and output control information for specifying how to output the audio information.
  • Audio information in various forms may be output, such as a voice message like "There is an obstacle ahead," musical tone information (melody, accompaniment, etc.) constituting a certain song, or a notification sound such as a beeping sound.
  • As the output control information, arbitrary information defining volume, pitch, playback speed, BPM (Beats Per Minute), sound localization (localization direction), and the like may be generated. For example, by controlling the localization of sound, it is also possible to provide information using stereophonic sound.
  • Further, the notification information generation unit 18 generates vibration information for vibrating the vibration device 12 as notification information. For example, vibration information realizing various vibration patterns in which vibration strength (amplitude), frequency, tempo, etc. are specified is generated as notification information. A sketch of such notification information follows.
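Field names and value ranges in the following sketch are assumptions made for illustration, not a data layout defined in the present disclosure.

```python
# Sketch of notification information: audio output control plus vibration.

from dataclasses import dataclass

@dataclass
class AudioNotification:
    source: str         # ID of the musical tone data or message to play
    volume: float       # 0.0 .. 1.0
    pitch_shift: float  # in semitones
    bpm: float          # playback tempo
    pan: float          # sound localization: -1.0 (left) .. 1.0 (right)

@dataclass
class VibrationNotification:
    amplitude: float    # vibration strength
    frequency_hz: float
    tempo_bpm: float    # rhythm of the vibration pattern

# Example: a high-danger notification, loud audio localized to the right,
# with a strong low-frequency vibration pattern.
audio = AudioNotification("melody_main", volume=0.9, pitch_shift=2.0,
                          bpm=140.0, pan=0.7)
vibration = VibrationNotification(amplitude=1.0, frequency_hz=80.0,
                                  tempo_bpm=120.0)
```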
  • the notification control section 19 controls the information output section 7 based on the notification information.
  • the speaker 11 is driven by the notification control unit 19, and audio information generated as notification information is output. Further, the vibration device 12 is driven, and a vibration pattern corresponding to the vibration information generated as notification information is output.
  • a device including the controller 8 corresponds to an embodiment of an information processing device according to the present technology.
  • the information processing apparatus according to the present technology is realized in a form that includes one or more object detection sensors 10.
  • the information output section 7 and the controller 8 may be integrally configured.
  • the controller 8 may be configured in the headset 13 worn on the user's 2 head.
  • an embodiment of the information processing apparatus according to the present technology is implemented in a form that includes a device for notification such as the speaker 11.
  • the sensor section 6, the information output section 7, and the controller 8 may be integrally configured.
  • one embodiment of the information processing apparatus according to the present technology is realized in a form including one or more object detection sensors 10 and a notification device such as a speaker 11. In this way, it is possible to adopt various forms as the peripheral information notification system 1.
  • FIG. 3 is a flowchart showing an example of the basic operation of the surrounding information notification system 1.
  • the surrounding information acquisition section 17 obtains surrounding information based on the detection result by the sensor section 6 (step 101).
  • Next, the notification information generation unit 18 generates notification information for notifying the surrounding information (step 102).
  • In step 102, notification information corresponding to the surrounding information to be notified to the user 2 is generated.
  • For example, suppose that in step 101, surrounding information indicating that the edge of a station platform, which is a fall danger point, exists in the immediate vicinity of the user 2 is acquired.
  • In this case, for example, audio information that conveys a high danger level (degree of danger) is generated as notification information.
  • Further, vibration information such that a powerful vibration pattern with a large amplitude and a suppressed frequency is output from the vibration device 12 is generated as notification information.
  • the information output unit 7 is controlled by the notification control unit 19 based on the notification information, and surrounding information is notified to the user 2 (step 103).
  • the speaker 11 and the vibration device 12 are controlled by the notification control section 19.
  • the user 2 can grasp the situation of the surrounding environment through sound and vibration (tactile sense), and can move while avoiding danger.
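A minimal sketch of the acquire/generate/notify loop of steps 101 to 103 is shown below; the object interfaces and the danger-to-volume mapping are hypothetical placeholders, not part of the present disclosure.

```python
# Sketch of the basic operation of FIG. 3 (steps 101-103).

def generate_notification(surroundings: dict) -> dict:
    """Step 102: map surrounding information to notification information."""
    danger = surroundings.get("danger_level", 0.0)  # assumed 0.0 .. 1.0
    return {
        "audio_volume": min(1.0, 0.3 + 0.7 * danger),  # louder when dangerous
        "vibration_amplitude": danger,
    }

def run_notification_loop(sensor_unit, output_unit):
    while True:
        surroundings = sensor_unit.read()                   # step 101: acquire
        notification = generate_notification(surroundings)  # step 102: generate
        output_unit.emit(notification)                      # step 103: notify
```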
  • In the example shown in FIG. 4A, the sensor section 6 is arranged at the waist position (belt position) of the user 2, on the front side of the user 2.
  • In another example, the sensor unit 6 is arranged on the head of the user 2, on the front side of the user 2.
  • the sensor main body 21 in which one or more object detection sensors 10 are arranged is configured as a wearable device that can be worn by the user 2.
  • the sensor section 6 is realized by the wearable device. In this way, one or more object detection sensors 10 may be placed in a wearable device worn by the user 2.
  • the user 2 can realize the sensor unit 6 at various positions by wearing the sensor main body 21 configured as a wearable device.
  • As the wearable device, various forms may be adopted, such as a wristband type worn on the wrist, a bracelet type worn on the upper arm, a headband type worn on the head (head-mounted type), a neckband type worn around the neck, a torso type worn on the chest, a belt type worn on the waist, and an anklet type worn on the ankle.
  • wearable devices in the form of glasses, rings, necklaces, earrings, or piercings, a form that can be attached to the toe of a shoe, a form that can be attached to any position with a clip, etc. may be adopted.
  • the sensor section 6 is realized in a form that can be held by the user 2.
  • a sensor main body 21 in which one or more object detection sensors 10 are arranged is configured as a device that can be held by the user 2.
  • the sensor main body 21 is held by the right hand holding the white cane 4.
  • the sensor main body 21 is held by the left hand on the opposite side from the right hand holding the white cane 4. In this way, one or more object detection sensors 10 may be placed on a device held by the user 2.
  • the sensor main body 21 (sensor section 6) is mounted on another device held by the user 2.
  • a sensor body 21 on which one or more object detection sensors 10 are arranged is mounted on a carrier 22 that is pulled and moved by the user 2.
  • the sensor body 21 is mounted on a handcart 23 that the user 2 pushes to move. In this way, the sensor unit 6 may be realized by mounting the sensor main body 21 on another device held by the user 2.
  • the sensor body 21 may be mounted on the white cane 4 held by the user 2.
  • a configuration in which the sensor main body 21 (sensor unit 6) is mounted on another device held by the user 2 is included in a configuration in which one or more object detection sensors 10 are placed in a device held by the user 2.
  • Example of configuration of one or more object detection sensors: Various variations can also be considered for the configuration of the one or more object detection sensors 10. For example, various configurations can be realized by arbitrarily selecting and setting the number of object detection sensors 10, the type (method, etc.) of each object detection sensor 10, the attitude (detection direction, etc.) of each object detection sensor 10, the sensing parameters of each object detection sensor 10 (frame rate, gain, laser intensity, etc.), and the like. An illustrative configuration sketch is shown below.
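In the sketch, the keys and values are assumptions made for illustration, not a format defined by the present disclosure.

```python
# Illustrative sensor configuration: number, method, detection direction
# (attitude), and sensing parameters for each object detection sensor.

SENSOR_CONFIG = [
    {"method": "laser",      "direction_deg": 0.0,    # front direction
     "frame_rate_hz": 30, "laser_intensity": 0.8},
    {"method": "ultrasonic", "direction_deg": 0.0,    # same direction, wide beam
     "frame_rate_hz": 10, "gain": 1.5},
    {"method": "laser",      "direction_deg": -45.0,  # angled toward the ground
     "frame_rate_hz": 30, "laser_intensity": 0.8},
]
```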
  • FIG. 7 is a schematic diagram showing an example of the configuration of one or more object detection sensors 10.
  • one or more object detection sensors 10 are arranged in a sensor main body 21 that can be held by the user 2. By changing the orientation of the sensor body 21, the user 2 can scan and sense the surrounding environment.
  • two ranging sensors with different methods are used as the one or more object detection sensors 10.
  • an optical laser distance measurement sensor (laser distance measurement sensor) 25 and an ultrasonic distance measurement sensor (ultrasonic distance measurement sensor) 26 are used.
  • the two distance measuring sensors 25 and 26 are arranged on the sensor main body 21 so that their detection directions are the same.
  • sensing is performed by the laser ranging sensor 25 and the ultrasonic ranging sensor 26, with the direction in which the sensor body 21 is directed by the user 2 as the detection direction.
  • the detection results by the two types of distance measuring sensors 25 and 26 make it possible to acquire highly accurate surrounding information and to notify the user 2 of the same.
  • the method of the distance measuring sensor employed is not limited and may be set arbitrarily.
  • FIG. 8 is a schematic diagram showing another example of the configuration of one or more object detection sensors 10.
  • one or more object detection sensors 10 are arranged, for example, in a wearable device (not shown) that can be worn on the user's 2 hand.
  • one or more object detection sensors 10 are arranged near the portion of the white cane 4 that the user 2 holds in his/her hand.
  • two ranging sensors 27 and 28 having different detection directions are used (the detection directions are denoted by the symbols of the ranging sensors 27 and 28).
  • The example shown in FIG. 8 is an embodiment that includes a first ranging sensor arranged with a first direction as its detection direction, and a second ranging sensor arranged with a second direction different from the first direction as its detection direction. Either of the ranging sensors 27 and 28 may be used as the first ranging sensor.
  • the distance measurement sensor 27 is the first distance measurement sensor.
  • the distance measurement sensor 27 is arranged so that the front direction of the user 2 is the detection direction. Therefore, the first direction is the front direction of the user 2.
  • the distance measuring sensor 27 will be referred to as the front side distance measuring sensor 27 using the same reference numerals.
  • the front distance measuring sensor 27 is arranged at a height H from the ground 3 so that the direction parallel to the ground 3 is the detection direction. Note that the front direction of the user 2 can also be said to be the direction of movement of the user 2.
  • the distance measurement sensor 28 serves as a second distance measurement sensor, and is arranged so that the direction toward the measurement point P set on the ground 3 is the detection direction. Therefore, the second direction is the direction from the position of the user's 2 hand (the position at the height H) toward the measurement point P.
  • the distance measuring sensor 28 will be referred to as the ground side distance measuring sensor 28 using the same reference numerals.
  • the measurement point P is set at a position on the ground 3 that is a predetermined distance D away from the user 2 along the front direction.
  • the size of the distance D is determined, for example, depending on how far the object 5 on the ground 3 or the falling dangerous point is desired to be detected. For example, if it is desired to quickly detect an object 5 on the ground 3 or a fall danger point at a relatively far position, the distance D is set to be relatively long. If it is desired to detect the object 5 on the ground 3 or the fall danger point when it is in a relatively close position, the distance D is set to be relatively short.
  • the distance D to the measurement point P may be set taking into consideration the moving speed of the user 2 and the like. Of course, the distance D to the measurement point P may be arbitrarily set by the user 2, for example, based on other viewpoints.
  • The intersection angle θ between the detection direction of the ground-side ranging sensor 28 and the ground 3 can be calculated from the desired distance D using the following trigonometric formula:
  • θ = arctan(H/D) ... (1)
  • the height H from the ground 3 of the front-side distance measurement sensor 27 and the ground-side distance measurement sensor 28 is 0.75 m.
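A worked example of equation (1), using the height H = 0.75 m given above and a few candidate look-ahead distances D (the chosen D values are illustrative):

```python
# Equation (1): theta = arctan(H / D), the angle between the ground-side
# sensor's detection direction and the ground.

import math

def intersection_angle_deg(height_m: float, distance_m: float) -> float:
    return math.degrees(math.atan2(height_m, distance_m))

H = 0.75  # sensor height above the ground, from the example above
for D in (0.5, 1.0, 2.0):
    print("D = %.1f m -> theta = %.1f deg" % (D, intersection_angle_deg(H, D)))
# D = 0.5 m -> theta = 56.3 deg
# D = 1.0 m -> theta = 36.9 deg
# D = 2.0 m -> theta = 20.6 deg
```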
  • In this way, it becomes possible to obtain detection results in two channels, the front direction and the ground direction. Thereby, it becomes possible to acquire highly accurate surrounding information and to notify the user 2 of it.
  • the detection directions of the two distance measuring sensors 27 and 28, that is, the first direction and the second direction, are not limited and may be set arbitrarily.
  • any combination of directions may be employed, such as (front direction, back direction), (front direction, upper direction), (front direction, left direction), (front direction, right direction).
  • the distance measuring sensor may be arranged with the direction in which the situation is desired to be known as the detection direction.
  • For example, suppose the detection directions are (front direction, left direction). In this case, the measurement point P may be set on the left wall at a position a predetermined distance D away from the user 2 along the front direction.
  • an arbitrary number of three or more distance measuring sensors may be arranged so that their detection directions are different from each other.
  • three ranging sensors may be arranged so as to provide three channels in the front direction and in the left and right directions perpendicular to the front direction (directions of both left and right walls). Any variation may be adopted as the number of ranging sensors and the detection direction of each ranging sensor.
  • The user 2 may be able to set the detection directions of the ranging sensors 27 and 28 on an application, for example via a GUI (Graphical User Interface).
  • FIG. 9 is a block diagram for explaining notification of surrounding information according to the first embodiment.
  • the peripheral information acquisition unit 17 acquires first peripheral information 30 and second peripheral information 31. That is, at least two different types of peripheral information are acquired.
  • the sensor section 6 having the configuration illustrated in FIG. 7 is adopted. That is, it is assumed that a laser ranging sensor 25 and an ultrasonic ranging sensor 26 are arranged.
  • the surrounding information obtained based on the detection result of the laser ranging sensor 25 is obtained as the first surrounding information 30.
  • surrounding information acquired based on the detection result of the ultrasonic ranging sensor 26 is acquired as second surrounding information 31.
  • the presence or absence (ON/OFF) of detection of the object 5 output from the laser ranging sensor 25, the distance to the object 5, the material (hardness) of the object 5, etc. are acquired as the first peripheral information 30.
  • information such as whether or not the object 5 is detected (ON/OFF), the distance to the object 5, and the material (hardness) of the object 5 outputted from the ultrasonic ranging sensor 26 is acquired as second peripheral information 31.
  • For example, the laser ranging sensor 25 can detect the distance to the object 5 and the like without being affected by the hardness of the object 5. Conversely, it is often difficult to detect the material (hardness) of the object 5 using the laser ranging sensor 25. In this case, for example, information indicating that the material (hardness) of the object 5 cannot be detected may be output as the information about the material (hardness) of the object 5.
  • The notification information generation section 18 includes audio signal processing sections 32 and 33 and an audio synthesis processing section 34.
  • the storage unit 15 also stores first musical tone data for notifying the first peripheral information 30 and second musical tone data for notifying the second peripheral information 31.
  • the musical tone data includes any data that constitutes a musical tone.
  • the data includes data in which a predetermined scale or melody is defined, audio data of a specific musical instrument, and the like.
  • the musical tone data of the main melody, the musical tone data of the sub melody, etc. of a predetermined song may be used individually.
  • Audio data of specific instruments playing a given song (melody instruments such as piano, violin, and vocals; bass instruments such as bass guitar, contrabass, and bass drum; percussion instruments such as glockenspiel, drums, bells, and chimes) may be used.
  • Data such as an electronic sound reproduced discontinuously and periodically, like a repeated beeping, may also be used.
  • Based on the first peripheral information 30, the audio signal processing unit 32 generates first audio information using the first musical tone data.
  • the audio signal processing unit 33 generates second audio information using the second musical tone data based on the second peripheral information 31.
  • the first audio information and the second audio information are generated as broadcast information.
  • FIG. 10 is a schematic diagram showing an example of first audio information and second audio information.
  • The first musical tone data and the second musical tone data are musical tone data that, when output together to the user 2, form a combination that can be listened to without musical discomfort.
  • For example, musical tone data of a certain part of a predetermined musical piece is set as the first musical tone data, and musical tone data of another part constituting the same musical piece is set as the second musical tone data.
  • Any combination may be adopted as the combination of the first musical tone data and the second musical tone data, such as (main melody, sub melody), (melody, accompaniment), (melody of a high-pitched instrument, melody of a low-pitched instrument), or (melody of a melody instrument, sound of a percussion instrument).
  • the audio signal processing unit 32 generates first audio information by controlling musical tone parameters based on the first peripheral information 30.
  • Examples of the musical tone parameters include volume, frequency, pitch, speed of song reproduction, BPM, and tempo.
  • For example, musical tone parameter control such as increasing the volume, pitch, or tempo is executed in accordance with the first peripheral information 30 to generate the first audio information, as sketched below.
  • the first audio information may be generated by controlling the musical tone parameters based on the distance information.
  • audio data of a specific musical instrument may be generated as the first audio information in response to the detection of the object 5.
  • the audio signal processing unit 33 generates second audio information by controlling musical tone parameters based on the second peripheral information 31.
  • first musical tone information constituting a predetermined music piece is generated as the first audio information.
  • second musical tone information constituting the same song is generated.
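A hedged sketch of this kind of musical tone parameter control follows; the mapping ranges (closer means louder, higher, faster) are assumptions made for illustration, not values from the present disclosure.

```python
# Sketch: derive musical tone parameters from a distance measurement value.

def tone_parameters(distance_m: float, max_range_m: float = 5.0) -> dict:
    # Normalize proximity to 0.0 (far) .. 1.0 (touching).
    proximity = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return {
        "volume": 0.2 + 0.8 * proximity,           # 0.2 .. 1.0
        "pitch_shift_semitones": 12.0 * proximity,  # up to one octave higher
        "bpm": 80.0 + 100.0 * proximity,            # 80 .. 180 BPM
    }

print(tone_parameters(4.5))  # far obstacle: quiet, low, slow
print(tone_parameters(0.5))  # near obstacle: loud, high, fast
```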
  • The audio synthesis processing unit 34 synthesizes the first audio information and the second audio information to generate one piece of audio information (synthesized audio information). That is, the synthesized audio information is generated by the audio synthesis processing unit 34 so that both the first audio information and the second audio information illustrated in FIG. 10 are output. Any mixing technique or the like for synthesizing audio information (audio data) may be used (see the mixing sketch below).
  • an audio output section 35 is configured within the notification control section 19.
  • the audio output unit 35 controls the speaker 11 to output synthesized audio information in which the first audio information and the second audio information are synthesized. As a result, both the first audio information and the second audio information are output from the speaker 11.
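One straightforward way to realize this synthesis is to sum the two streams and normalize, as in the following sketch; this is an illustrative mixing technique, not necessarily the one used by the audio synthesis processing unit 34.

```python
# Minimal mixing sketch: sum two mono streams into one synthesized signal.

import numpy as np

def synthesize(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    n = max(len(first), len(second))
    mixed = np.zeros(n)
    mixed[:len(first)] += first
    mixed[:len(second)] += second
    # Normalize only if needed, so both parts stay audible without clipping.
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed

sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
melody = 0.6 * np.sin(2 * np.pi * 440.0 * t)  # stands in for first audio info
accomp = 0.4 * np.sin(2 * np.pi * 110.0 * t)  # stands in for second audio info
out = synthesize(melody, accomp)              # both are heard together
```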
  • the user 2 is able to grasp the first peripheral information 30 through the first audio information and the second peripheral information 31 through the second audio information. That is, the user 2 can grasp the first peripheral information 30 and the second peripheral information 31 at the same time via voice. As a result, it becomes possible to notify the user 2 of information about the surrounding environment with high accuracy.
  • For example, a melody of a certain musical instrument is output as the first audio information, and a melody of another musical instrument is output as the second audio information. When the first audio information and the second audio information are synthesized and output together, they are played back to the user 2 as one piece of music.
  • Alternatively, a sound of a certain pitch is output as the first audio information, and a sound of a different pitch is output as the second audio information. When the first audio information and the second audio information are synthesized and output together, they form a chord and are played back to the user 2.
  • the user 2 moves while scanning the surrounding area using the sensor section 6 having the configuration shown in FIG. 7 in which the laser distance measurement sensor 25 and the ultrasonic distance measurement sensor 26 are arranged.
  • the sensor section 6 may be mounted on the white cane 4.
  • the first peripheral information 30 is acquired by the peripheral information acquisition unit 17 based on the detection result of the laser ranging sensor 25. Further, second surrounding information 31 is acquired based on the detection result of the ultrasonic ranging sensor 26. It is assumed that the detection direction of the laser distance measurement sensor 25 and the detection direction of the ultrasonic distance measurement sensor 26 are set in the same direction.
  • FIG. 11 is a table showing differences between the laser distance measurement sensor 25 and the ultrasonic distance measurement sensor 26.
  • Since ultrasonic waves have low directivity, the detection range of the ultrasonic ranging sensor 26 is wide.
  • Since the laser has high directivity, the detection range of the laser ranging sensor 25 is narrow. Both the ultrasonic ranging sensor 26 and the laser ranging sensor 25 return a distance measurement value when an object 5 exists within the detection range; because its range is narrow, the laser ranging sensor 25 is easier to aim, and it becomes possible to know with high accuracy whether or not there is an object 5 at the point where the white cane 4 is pointed.
  • the ultrasonic ranging sensor 26 has a wide detection range, so while only the approximate direction of the detected object 5 can be known, it enables detection over a wide range. If we compare it with vision, it can be said that the laser distance measurement sensor 25 is close to the fovea, which has a narrow field of view, and the ultrasonic distance measurement sensor 26 is close to the peripheral field, which has a wide field of view.
  • When the laser of the laser ranging sensor 25 crosses the boundary of a thin object 5 as the hand is moved slightly, it is possible to grasp the width of the object 5 in conjunction with the hand movement.
  • On the other hand, with the ultrasonic ranging sensor 26, which has a wide detection range, the object 5 does not leave the detection range unless the hand is moved widely, so it is difficult for the user 2 to grasp detailed information such as the width and height of the object 5.
  • For this reason, musical tone data of instruments suitable for a main melody with many notes (for example, piano, violin, or vocals) is assigned to the laser ranging sensor 25, so that detailed changes in the detection results can be conveyed.
  • the ultrasonic ranging sensor 26 Since the ultrasonic ranging sensor 26 has a wide detection range, it will notify the detection of some object 5 relatively frequently compared to the laser ranging sensor 25, which has a narrow detection range. For this reason, for example, musical tones of accompaniment or bass instruments (eg, bass guitar, contrabass, bass drum, etc.) suitable for continuous notification are assigned.
  • If the detection result is not a continuous value such as a distance value but a binary value such as whether or not the object 5 is detected (ON/OFF), musical sounds of percussion instruments such as glockenspiel, drums, bells, and chimes may be assigned, for example.
  • There is also a difference in that the laser ranging sensor 25 has difficulty detecting objects with low light reflectance, such as glass or black materials, while the ultrasonic ranging sensor 26 is not suitable for detecting soft materials.
  • both types of distance measuring sensors have advantages and disadvantages.
  • In the surrounding information notification system 1, it is possible to compensate for the shortcomings of both ranging sensors. For example, even if there is a wall or door made of glass that cannot be detected by the laser, the ultrasonic sensor returns a measured distance value, so it is possible to notify the user 2 that some object 5 exists in the vicinity. In this way, it is possible to increase the types of objects 5 that can be detected, and also to improve environmental resistance.
  • For example, the laser ranging sensor 25 is used as the main ranging sensor, and musical tone data of a high-pitched instrument is assigned to it.
  • Musical tone data of a low-pitched instrument is assigned to the secondary ultrasonic ranging sensor 26.
  • In this way, musical tone data is assigned such that the user 2 can clearly hear and distinguish the sounds corresponding to both ranging sensors. This makes it possible to notify the user 2 of information about the surrounding environment with very high accuracy.
  • When the main melody is output according to the distance measurement value of the laser ranging sensor 25, the output may be interrupted or become inaudible. In such a case, it is difficult for the user 2 to judge whether this is due to the influence of small objects 5 such as trees or snow, or due to a light-transmitting or light-absorbing material.
  • If it can be determined that the object 5 is made of a transparent material or a light-absorbing material, it is also possible to further utilize hardness information obtained from the ultrasonic ranging sensor 26. For example, it is assumed that objects 5 that are light-transmitting or light-absorbing and have high hardness are often highly dangerous obstacles, such as glass walls or black walls. Based on such an assumption, it is possible to estimate that an object having light transmittance or light absorption and low hardness is not glass or the like, but a soft light-absorbing material such as black clothing.
  • the peripheral information acquisition unit 17 can further generate peripheral information based on the first peripheral information 30 and the second peripheral information 31. That is, it is possible to integrate the first surrounding information and the second surrounding information to generate surrounding information regarding the surrounding environment (hereinafter referred to as integrated surrounding information).
  • the surrounding information acquisition unit 17 may generate integrated surrounding information based on the stability of detection by the laser ranging sensor 25 and the stability of detection by the ultrasonic ranging sensor 26. Note that in the present disclosure, the stability of sensor detection is included in the sensor detection result.
  • For example, for a light-transmitting or light-absorbing member, information regarding at least one of the material and the type of the object may be determined, and such detailed information may be generated as integrated peripheral information.
  • the shape of an object can be grasped in detail by converting the detection results from the laser distance measurement sensor 25 into minute changes in tone that are linked to the movements of the body or movable parts.
  • By converting the wide-range detection results of the ultrasonic ranging sensor 26 into audio such as accompaniment, the presence of obstacles likely to block the route can be detected in advance. This realizes a division of roles analogous to that between the fovea and the peripheral visual field in vision, and also allows the two sensors to share roles that compensate for each other's shortcomings.
  • While the laser distance measurement sensor 25 is normally relied upon, it is also possible to rely on information from the ultrasonic distance measurement sensor 26 in places where there is glass or black material with low light reflectivity.
  • the specific types of the first peripheral information 30 and the second peripheral information 31 shown in FIG. 9 are not limited.
  • surrounding information obtained based on the detection result of a ranging sensor arranged with the first direction as the detection direction may be obtained as the first surrounding information 30.
  • surrounding information obtained based on the detection result of a ranging sensor arranged with the second direction as the detection direction may be obtained as the second surrounding information 31.
  • the first surrounding information 30 is acquired based on the detection result of the front distance measuring sensor 27 arranged with the front direction as the detection direction, for example.
  • the second surrounding information 31 may be acquired based on the detection result of the ground-side ranging sensor 28 arranged with the direction toward the measurement point P as the detection direction. This allows the user 2 to simultaneously grasp information about the environment on the front side and information on the environment on the ground side via audio.
  • the first surrounding information 30 may be acquired based on the detection result of the ground-side distance measurement sensor 28, and the second surrounding information 31 may be obtained based on the detection result of the front-side distance measurement sensor 27.
  • the distance measurement value of the front side distance measurement sensor 27 is acquired as the first peripheral information 30. Then, musical tone parameters are controlled according to the measured distance value, and a predetermined melody is output as first audio information.
  • the presence or absence of a fall danger point is acquired based on the distance measurement value of the ground-side distance measurement sensor 28. For example, when the distance measurement value of the ground-side distance measurement sensor 28 becomes large, it is determined that there is a fall danger point. In response to detection of a fall danger point, second audio information is generated and output from audio data of a percussion instrument or the like.
  • Both the melody serving as the first audio information and the percussion sound serving as the second audio information are output, so the user 2 can simultaneously grasp the proximity of the object 5 in the front direction and the presence or absence of a fall danger point on the ground 3 (a sketch of this two-channel output follows below).
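  • A Python sketch of this two-channel output, assuming for the example that a fall danger point is flagged when the ground side distance value jumps above its baseline by more than a threshold (the 0.5 m value and all names are assumptions):

    def two_channel_events(front_distance_m, ground_distance_m,
                           ground_baseline_m, jump_threshold_m=0.5):
        events = []
        # First audio information: melody parameterized by the front distance.
        events.append(("melody_note", round(front_distance_m, 2)))
        # Second audio information: percussion when the ground distance jumps,
        # i.e. a downward step or hole opens up at the measurement point P.
        if ground_distance_m - ground_baseline_m > jump_threshold_m:
            events.append(("percussion", "chime"))
        return events

    print(two_channel_events(3.0, 2.1, 1.4))  # melody plus a chime hit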
  • Distance measuring sensors of the same type are arranged so that their detection directions are different from each other.
  • a plurality of laser ranging sensors are arranged with different detection directions such as front, back, left, right, top and bottom.
  • the main melody is assigned to the laser distance measurement sensor in the front direction.
  • Accompaniment etc. are assigned to the laser ranging sensor in a direction other than the front direction.
  • the localization of the first audio information is controlled based on the detection direction of the first ranging sensor among the plurality of ranging sensors.
  • the localization of the second audio information is controlled based on the detection direction of the second ranging sensor among the plurality of ranging sensors.
  • Such processing is also possible.
  • FIG. 12 is a schematic diagram for explaining an example in which an image sensor is arranged as the object detection sensor 10.
  • Image information 38 is generated by the image sensor and output to the surrounding information acquisition unit 17 as a detection result.
  • the surrounding information acquisition unit 17 performs object recognition processing on the image information 38.
  • Through this recognition processing, it is possible to obtain as peripheral information the presence or absence of detection of the object 5 (ON/OFF), the type of the object 5, the distance to the object 5, the material (hardness) of the object 5, and the like.
  • the first peripheral information 30 may be acquired based on the information of the upper half pixel region 38a of the image information 38. Further, the second peripheral information 31 may be acquired based on the information of the lower half pixel region 38b of the image information 38. In this way, the first peripheral information 30 may be acquired based on information on a part of the pixel area of the image information 38. Further, the second peripheral information may be acquired based on information on other pixel areas in the image information 38. This allows the user 2 to simultaneously grasp information on the upper environment and information on the lower environment via audio.
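  • A Python sketch of this region split; the image is represented as a plain list of pixel rows, and the per-region recognition itself is left abstract (all names are illustrative):

    def split_pixel_regions(image_rows):
        """Upper half -> first peripheral information, lower half -> second."""
        half = len(image_rows) // 2
        return image_rows[:half], image_rows[half:]

    image = [[0] * 8 for _ in range(6)]  # dummy 6x8 image
    upper_region, lower_region = split_pixel_regions(image)
    first_peripheral_info = {"region": "upper", "rows": len(upper_region)}
    second_peripheral_info = {"region": "lower", "rows": len(lower_region)}
    print(first_peripheral_info, second_peripheral_info)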
  • For example, information regarding a car 39 detected based on the image information 38 is acquired as the first surrounding information 30, and information regarding a person 40 detected based on the image information 38 is acquired as the second peripheral information 31. In this way, the first peripheral information 30 and the second peripheral information 31 may be acquired for each type of object detected based on the image information 38. That is, information regarding a first type of object detected based on the image information 38 may be acquired as first peripheral information, and information regarding a second type of object different from the first type may be acquired as second peripheral information.
  • The first and second types of object can be set arbitrarily. For example, any combination such as (person, vehicle), (motorcycle, automobile), (adult, child), or (pedestrian, bicycle) can be set as the first type and second type.
  • the user 2 can simultaneously grasp information regarding two different types of objects via voice.
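  • A Python sketch of routing recognized objects into the two channels by type, using the (car, person) pairing from the example above; the detection records are illustrative:

    def route_by_type(detections, first_type="car", second_type="person"):
        first = [d for d in detections if d["type"] == first_type]
        second = [d for d in detections if d["type"] == second_type]
        return first, second

    detections = [{"type": "car", "distance_m": 7.0},
                  {"type": "person", "distance_m": 2.5}]
    first_info, second_info = route_by_type(detections)
    print(first_info, second_info)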
  • the notification information generation unit 18 may determine whether to output the first audio information based on the first peripheral information 30. For example, it is determined whether or not to output the first audio information based on whether or not the object 5 is detected, the distance to the object 5, and the like. For example, if the object 5 is not detected, or if the distance to the object 5 is greater than a predetermined threshold (eg, 5 m, etc.), it is determined that the first audio information is not output.
  • In that case, the output of the first audio information by the audio output unit 35 is restricted. That is, when the first peripheral information 30 satisfies a predetermined condition the first audio information is output, and when the condition is not satisfied it is not output. Such processing is also possible.
  • the conditions that serve as the criteria for determining whether or not to output the first audio information may be set arbitrarily.
  • Similarly, whether or not to output the second audio information may be determined based on predetermined conditions, and the output of the second audio information may be controlled based on the determination result.
  • The same determination condition may be set for the output of the first audio information and the output of the second audio information, or different determination conditions may be set separately (a gating sketch follows below).
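  • A Python sketch of this per-channel gating; the 5 m threshold follows the example above, and separate thresholds per channel are shown as independent parameters:

    def should_output(detected, distance_m, threshold_m):
        """Output audio only when an object is detected within the threshold."""
        return detected and distance_m is not None and distance_m <= threshold_m

    # Independent conditions for the first and second audio information.
    play_first = should_output(True, 3.2, threshold_m=5.0)
    play_second = should_output(True, 7.8, threshold_m=5.0)
    print(play_first, play_second)  # True False -> only the first channel plays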
  • first musical tone data, second musical tone data, and third musical tone data are prepared for each of first peripheral information, second peripheral information, and third peripheral information.
  • First musical tone data is used to generate first audio information based on the first peripheral information.
  • Second audio information is generated using the second musical tone data based on the second peripheral information.
  • Third audio information is generated using third musical tone data based on the third peripheral information.
  • a sensor capable of outputting multidimensional information may be disposed.
  • an image sensor capable of outputting image information 38 shown in FIG. 12 and the like can also be said to be a sensor capable of outputting multidimensional information, with information on each pixel being one-dimensional information.
  • As the object detection sensor 10 capable of outputting multidimensional information, it is also possible to use a ToF (Time of Flight) sensor that can acquire distance information for each pixel. In that case, audio information may be generated from musical tone data for each piece of pixel information and output together.
  • As described above, the first peripheral information 30 and the second peripheral information 31 are acquired by the controller 8 based on the detection results of the one or more object detection sensors 10. First audio information is then generated using the first musical tone data based on the first peripheral information 30, and second audio information is generated using the second musical tone data based on the second peripheral information 31. Both the first audio information and the second audio information are output. This makes it possible to notify both the first and second surrounding information via audio, and thus to notify the user 2 of information on the surrounding environment with high accuracy.
  • Distance sensors include, for example, optical laser systems, ultrasonic systems, and stereo cameras. Optical laser systems have had problems such as being unable to measure the distance to objects with low light reflectance and being affected by environmental light. The ultrasonic method has had the problem that it is difficult to narrow down the distance measurement range because the sound waves spread out. A configuration in which a plurality of distance measuring sensors of different methods are combined can be cited as a way of addressing such problems.
  • a laser distance measurement sensor and an ultrasonic distance measurement sensor are selectively switched and used as appropriate.
  • For example, while the highly directional laser distance measurement sensor is outputting detection results for obstacles in the front direction, the output may suddenly switch to detection results from the wide-range ultrasonic distance measurement sensor for objects present broadly in the surroundings. It may then be difficult for the user 2 to understand whether the detection result currently being reported via audio is information about the front direction or about a wide surrounding area, which could make it difficult to avoid danger.
  • In contrast, in the present embodiment, the first peripheral information 30 and the second peripheral information 31 are both converted into audio information, which makes it possible to notify the user 2 of them at the same time.
  • the peripheral information acquisition unit 17 corresponds to an embodiment of the peripheral information acquisition unit according to the present technology.
  • the audio signal processing units 32 and 33 and the audio synthesis processing unit 34 configured in the notification information generation unit 18 correspond to an embodiment of the audio information generation unit according to the present technology.
  • the audio output unit 35 configured in the notification control unit 19 corresponds to an embodiment of the notification control unit according to the present technology, which outputs both first audio information and second audio information.
  • the speech synthesis processing section 34 can also be regarded as a block that also functions as a notification control section according to the present technology.
  • the laser ranging sensor 25 corresponds to an embodiment of the first object detection sensor according to the present technology. Further, the laser distance measurement sensor 25 also corresponds to an embodiment of a first distance measurement sensor that operates according to the first method (optical laser method).
  • the ultrasonic ranging sensor 26 corresponds to an embodiment of the second object detection sensor according to the present technology. Further, the ultrasonic ranging sensor 26 also corresponds to an embodiment of a second ranging sensor that operates according to a second method (ultrasonic method) different from the first method.
  • The front distance measuring sensor 27 corresponds to an embodiment of the first object detection sensor according to the present technology. Further, the front side distance measuring sensor 27 also corresponds to an embodiment of a first distance measuring sensor arranged with the first direction (front direction) as the detection direction.
  • the ground-side ranging sensor 28 corresponds to an embodiment of the second object detection sensor according to the present technology. Further, the ground-side ranging sensor 28 also corresponds to an embodiment of a second ranging sensor arranged with a detection direction in a second direction (ground direction) different from the first direction.
  • FIG. 13 is a block diagram showing a configuration example for realizing notification of surrounding information according to the second embodiment.
  • In the surrounding information notification system 41, the configuration of the sensor unit 6 shown in FIG. 8 is adopted. That is, a front side distance measurement sensor 27 whose detection direction is the front direction and a ground side distance measurement sensor 28 whose detection direction is toward the measurement point P on the ground 3 (ground direction) are used.
  • the surrounding information acquisition section 17 includes a distance information acquisition section 42 and a situation determination section 43.
  • the distance information acquisition unit 42 receives first distance information detected by a first ranging sensor arranged with a first direction as a detection direction, and a second direction different from the first direction as a detection direction. and second distance information detected by the second distance measuring sensor arranged.
  • the distance information detected by the front distance measuring sensor 27 (hereinafter referred to as front distance information) is acquired as the first distance information. Further, distance information detected by the ground-side distance measuring sensor 28 (hereinafter referred to as ground-side distance information) is acquired as second distance information.
  • The situation determination unit 43 determines the situation of the surrounding environment based on at least one of first detection information, which includes the fluctuation and variation of the first distance information, and second detection information, which includes the fluctuation and variation of the second distance information.
  • In this embodiment, both first detection information including the fluctuation and variation of the front side distance information and second detection information including the fluctuation and variation of the ground side distance information are used. That is, the situation of the surrounding environment is determined using four pieces of information: the fluctuation of the front side distance information, the variation of the front side distance information, the fluctuation of the ground side distance information, and the variation of the ground side distance information.
  • "Fluctuation of distance information" includes any information related to changes in the distance information, such as the magnitude of the fluctuation, the direction of the fluctuation (increase/decrease), the duration of fluctuation, and the duration of no fluctuation.
  • "Variation of distance information" includes any information regarding the dispersion of a plurality of distance values detected in time series at a predetermined frame rate. For example, information regarding the dispersion of the distance values detected in the most recent predetermined period, or of a predetermined number of the most recently detected values, may be used. Values indicating dispersion, such as the variance and the deviation (standard deviation), are calculated, and the duration of fluctuation of the deviation, the magnitude of fluctuation of the deviation, the direction of fluctuation of the deviation (increase/decrease), the duration of no fluctuation of the deviation, and so on are acquired as information regarding the variation. Note that the variance and deviation (standard deviation) can be determined using known arithmetic expressions; a rolling computation is sketched below.
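  • For instance, the deviation can be computed over a rolling window of the most recent distance values, as in the following Python sketch; the 30-frame default and the 5-frame example window are arbitrary choices:

    from collections import deque
    from statistics import pstdev

    class RollingDeviation:
        """Standard deviation over the most recent N distance samples."""
        def __init__(self, window=30):
            self.samples = deque(maxlen=window)

        def push(self, distance_m):
            self.samples.append(distance_m)

        def deviation(self):
            return pstdev(self.samples) if len(self.samples) >= 2 else 0.0

    dev = RollingDeviation(window=5)
    for d in [2.00, 2.01, 1.99, 1.60, 1.20]:  # an obstacle entering the beam
        dev.push(d)
    print(round(dev.deviation(), 3))  # deviation rises as the distance drops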
  • the presence or absence of a surrounding object 5, the distance to the object 5, the shape of the object 5, the type of the object 5, the size of the object 5, the material of the object 5, etc. may be output as a determination result.
  • the presence or absence of a fall danger point, the distance to the fall danger point, the type of fall danger point, the shape of the fall danger point, the size of the fall danger point, etc. can be output as the determination results.
  • other situations may also be determined.
  • the front side distance information and the ground side distance information acquired by the distance information acquisition unit 42 are information included in the surrounding information according to the present technology. Further, the determination result of the situation of the surrounding environment outputted by the situation judgment unit 43 is also information included in the surrounding information according to the present technology.
  • FIG. 14 is a flowchart showing an example of notification of surrounding information according to this embodiment.
  • the surrounding information notification system 41 starts up (step 201).
  • a power button or the like is installed on the sensor body 21, and the user 2 presses the power button or the like.
  • the power may be turned on by voice input by the user 2.
  • Automatic calibration of the ranging sensor is started (step 202).
  • automatic calibration is performed on the front-side ranging sensor 27 and the ground-side ranging sensor 28.
  • It is determined whether the automatic calibration results are normal (step 203). Specifically, the distance information (distance measurement value) of each distance measurement sensor is acquired, and it is determined whether the value is appropriate. For example, if the sensor body 21 is attached incorrectly, or if the hand of the user 2 or the like is covering a distance measurement sensor, the measured value will not be appropriate and an abnormality is determined. An abnormality is also determined if a distance measuring sensor is out of order.
  • If an abnormality is determined, a voice guide indicating the error is output from the speaker 11 (step 204). For example, a voice guide such as "Please check whether the device is worn properly" is output.
  • the user 2 corrects the orientation of each distance measuring sensor as a countermeasure for the error (step 205).
  • the distance information acquisition unit 42 acquires front side distance information and ground side distance information detected at a predetermined frame rate (step 206).
  • the situation determination unit 43 determines the situation of the surrounding environment based on fluctuations and dispersion of front side distance information and ground side distance information, which are two channels of distance information on the front side and the ground side. In this embodiment, determination results regarding obstacles and fall danger points are output. Specifically, it is determined whether an obstacle and a falling point are present (step 207).
  • FIG. 15 is a table showing an example of determination of the situation of the surrounding environment.
  • 16 to 18 are schematic diagrams for explaining the determination example shown in FIG. 15.
  • FIG. 16 is a schematic diagram showing a case where an obstacle exists in a position in the front direction of the user 2.
  • FIG. 17 is a schematic diagram showing a case where an obstacle exists on the ground 3 in front of the user 2.
  • FIG. 18 is a schematic diagram showing a case where a fall danger point exists at a position in the front direction of the user 2.
  • The situation determination unit 43 can determine that an obstacle 44 is present at a position in the front direction when the front side distance information becomes small and the fluctuation time of the deviation of the front side distance information is longer than a predetermined time. More specifically, it is possible to determine that an obstacle 44 with a height of at least H exists at a position in the front direction.
  • a threshold regarding front distance information may be set. For example, when the front distance information becomes smaller than a predetermined threshold value, it may be determined that the obstacle 44 exists. For example, when the front distance becomes smaller than a threshold value such as 5 m or 10 m, it is determined that the obstacle 44 exists at a position in the front direction. Thereby, it becomes possible to detect with high precision the obstacle 44 that exists within the distance where there is a possibility of collision.
  • the "predetermined time” that serves as the criterion for determination may be appropriately set when constructing the surrounding information notification system 41. For example, by arranging the obstacle 44 at a position in the front direction and performing calibration or the like, an appropriate time during which the obstacle 44 can be detected is calculated. A threshold value is set as a "predetermined time” based on the calculated time. For example, the calculated time may be directly used as the threshold, or a time close to the calculated time may be used as the threshold. When the variation time of the deviation of the front side distance is longer than the threshold value, it is possible to determine that the variation time of the deviation of the front side distance is "longer than a predetermined time". Of course, the settings are not limited to this.
  • In FIG. 16A, a car 45 exists as the obstacle 44. In FIG. 16B, an upward staircase 46 exists as the obstacle 44.
  • When the car 45 comes relatively close to the measurement point P, the ground side distance information becomes smaller; in other words, the ground side distance information does not substantially change until the car 45 approaches the measurement point P. At the timing when the ground side distance information becomes small, the front side distance information is approximately equal to the distance D from the user 2 to the measurement point P.
  • Similarly, when the upward staircase 46 comes relatively close to the measurement point P, the ground side distance information becomes smaller, and does not substantially change until then. However, the upper steps of the staircase 46 lie further back in the front direction than the lowest step, away from the user 2. Therefore, when the staircase 46 approaches the measurement point P, the front side distance information takes a value larger than the distance D from the user 2 to the measurement point P. After that, as the user 2 moves toward the staircase, the ground side distance information becomes still smaller and its deviation continues to fluctuate, until the front side distance information becomes approximately equal to the distance D to the measurement point P.
  • Thus, whether the obstacle 44 present in the front direction is, for example, a car 45 or an upward staircase 46 can be determined by focusing on the front side distance information at the timing when the ground side distance information becomes small.
  • In this embodiment, when the front side distance information becomes smaller, the fluctuation time of the deviation of the front side distance information is longer than a predetermined time, and the ground side distance information does not change until the front side distance information becomes smaller than a predetermined threshold, it is determined that an object other than an object constructed obliquely upward and away from the user exists at a position in the front direction.
  • On the other hand, when, by the time the front side distance information becomes smaller than the predetermined threshold, the ground side distance information becomes small and the fluctuation time of the deviation of the ground side distance information is longer than a predetermined time, it is determined that an object constructed obliquely upward and away from the user exists at a position in the front direction.
  • the "predetermined threshold” regarding the front distance information is set based on the distance D on the ground 3 from the user 2 to the measurement point P.
  • the distance D may be used as it is as the threshold value.
  • a value close to the distance D may be used as the threshold.
  • the "predetermined threshold” may be calculated by calibration or the like, or may be set arbitrarily by the user 2.
  • a state in which the distance information does not change includes not only a state in which the distance information does not change at all, but also a state in which there is almost no change.
  • a state in which a range with a relatively small width is set and the distance information falls within the range can be defined as a "state in which the distance information does not change.”
  • an object in the shape of an upward staircase is determined as "an object constructed diagonally upward and away from the user.” Therefore, “an object other than an object obliquely constructed upward and away from the user” is an object other than an upward staircase-shaped object.
  • a car 45 is an embodiment of an object other than an upwardly directed staircase-shaped object.
  • the object is not limited to the car 45, and includes any object other than an object shaped like an upward staircase.
  • an ascending staircase 46 is one embodiment of an upwardly directed staircase-shaped object.
  • the object is not limited to the upward staircase 46, and for example, an upward escalator is also included in the upward staircase-shaped object. Other arbitrary step-shaped objects are also included.
  • a "frontal obstacle” corresponds to an object other than an upward staircase-shaped object.
  • the "up stairs/up escalator” corresponds to a staircase-shaped object heading upward.
  • In the table of FIG. 15, the condition on the ground side distance information for determining a "frontal obstacle" or "up stairs/up escalator" is listed as applying until the front side distance information becomes smaller than the predetermined threshold.
  • an obstacle 48 (object 5) whose height is lower than H exists on the ground 3.
  • an obstacle 48 whose height is lower than H will be referred to as a ground obstacle 48 using the same reference numeral.
  • the ground obstacle 48 shown in FIG. 17A is larger in size than the ground obstacle 48 shown in FIG. 17B.
  • When the ground side distance information becomes small and the fluctuation time of its deviation is longer than a predetermined time while the front side distance information does not change, it is determined that a ground obstacle 48 larger than a predetermined size exists on the ground 3. When the ground side distance information becomes small but the fluctuation time of its deviation is shorter than the predetermined time while the front side distance information does not change, it is determined that a ground obstacle 48 smaller than the predetermined size exists on the ground 3.
  • When the larger ground obstacle 48 of FIG. 17A exists on the ground 3, the fluctuation time of the deviation of the ground side distance information becomes longer than when the smaller ground obstacle 48 of FIG. 17B exists. Based on this fluctuation time, it is possible to determine the relative size of the ground obstacle 48 on the ground 3, that is, whether the ground obstacle 48 is larger than the "predetermined size".
  • a threshold value (“predetermined time") is set for the variation time of the deviation of the ground side distance information. Then, when the variation time of the deviation of the ground side distance information is longer than the threshold value, it is determined that a ground obstacle 48 having a relatively large size exists on the ground 3. If the variation time of the deviation of the ground side distance information is shorter than the threshold value, it is determined that a ground obstacle 48 with a relatively small size exists on the ground 3.
  • a "predetermined size” that is a criterion for determining the size of the ground obstacle 48.
  • a "predetermined size” is set as a criterion for determining the size of the ground obstacle 48, and the ground side distance information is set so that determination can be made based on the set "predetermined size”.
  • a threshold value (“predetermined time”) may be set for the variation time of the deviation.
  • the threshold value (“predetermined time” or “predetermined size”) serving as a criterion for determination may be arbitrarily set by the system or the user.
  • the moving speed (walking speed) of the user 2 may be acquired and the threshold value may be set using the information.
  • The size of the ground obstacle 48 may be defined based on its height, its area as seen from above, or both of these parameters, and may be set arbitrarily.
  • Ground obstacle (large) listed in the table of FIG. 15 corresponds to a ground obstacle 48 that is larger than a predetermined size (relatively large in size) that exists on the ground 3.
  • “Ground obstacle (small)” corresponds to a ground obstacle 48 existing on the ground 3 that is smaller than a predetermined size (relatively small in size).
  • a downward staircase 51 exists as a fall danger point 50 at a position in the front direction of the user 2.
  • an edge 52 of the platform exists as a fall danger point 50 in a position in the front direction.
  • In this embodiment, when the ground side distance information becomes large, it is determined that a fall danger point 50, which is an area concave downward, exists at a position in the front direction.
  • In FIG. 15, the front side distance information is described as being unchanged or decreasing. However, the determination is not limited to this; regardless of the front side distance information, it may be determined that a fall danger point 50 exists when the ground side distance information becomes large.
  • In FIGS. 18A and 18B, a descending staircase 51 and an edge 52 of a platform are illustrated as fall danger points 50, but the determination is not limited to these; the presence of a downward escalator, or of any fall danger point 50 where there is a risk of the user 2 falling, can also be determined based on the ground side distance information. Note that the falling point in FIG. 15 corresponds to the fall danger point 50.
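  • The determinations described above can be summarized as a decision function over boolean features extracted from the two channels. The following Python sketch is a simplified abstraction of the table in FIG. 15; extracting the features from the raw distance streams is omitted:

    def judge_situation(front_decreased, front_dev_fluctuates_long, front_unchanged,
                        ground_decreased, ground_increased,
                        ground_dev_fluctuates_long):
        if ground_increased:
            return "fall danger point (down stairs, platform edge, etc.)"
        if front_decreased and front_dev_fluctuates_long:
            if ground_decreased and ground_dev_fluctuates_long:
                return "up stairs / up escalator"
            return "frontal obstacle"
        if front_unchanged and ground_decreased:
            if ground_dev_fluctuates_long:
                return "ground obstacle (large)"
            return "ground obstacle (small)"
        return "no hazard determined"

    print(judge_situation(False, False, True, True, False, False))
    # -> ground obstacle (small)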
  • the notification information generation unit 18 and the notification control unit 19 execute notification processing of the obstacles 44 (48) and the falling danger points 50 (step 208).
  • FIG. 19 is a table showing an example of a process for notifying obstacles and falling danger points.
  • the notification information generation unit 18 determines the danger level for movement of the user 2 based on the determination result by the situation determination unit 43. Then, notification information is generated and output so as to correspond to the danger level.
  • In this embodiment, the danger level is determined to be higher when a fall danger point 50 exists than when an obstacle 44 (48) exists: the danger level of the fall danger point 50 is set to "high", while the danger level of an obstacle 44 (48) is set to "medium" or "low". This makes it possible to notify the user 2 more strongly of the danger of falling.
  • the danger level is determined depending on the type of obstacle 44 (48). Specifically, the danger level of the "frontal obstacle” is set to “medium”, assuming that the degree of injury to the user 2 at the time of a collision is moderate. The risk level of the "ground obstacle (large)” is set to “medium” because the possibility of tripping or the degree of injury to the user 2 in the event of a collision is moderate.
  • the danger level of the "ground obstacle (small)” is set to “small” because the possibility of tripping or the degree of injury to the user 2 in the event of a collision is low.
  • the danger level of the "up stairs/up escalator” is set to “low” because the degree of injury to the user 2 is low.
  • The setting of the danger level is not limited to this example. It may be set dynamically by the system based on data on the user's status at the time (gender, age, health condition, whether or not hearing aids are worn, etc.), or it may be set arbitrarily by the user.
  • For example, the degree of risk of falling may differ between a young person and an elderly person. Therefore, if the user 2 is young, the danger level of "ground obstacle (large)" may be set to "medium", and if the user 2 is elderly, it may be changed to "high". Note that the distance D and the musical tone data may also be set as appropriate based on user status data (gender, age, health condition, presence or absence of hearing aids, etc.).
  • notification by voice and notification by vibration are used. Specifically, audio notifications are performed regarding “frontal obstacles” and “up stairs/up escalators.” Vibration notification is performed for "ground obstacles (large)”, “ground-side obstacles (small)”, “down stairs, down escalators, and other falling points”.
  • audio information is generated as notification information in order to notify the situation of the surrounding environment corresponding to the front direction.
  • vibration information is generated as notification information in order to notify the situation of the surrounding environment corresponding to the ground direction.
  • the user 2 can grasp the situation in the front direction through the sound, and can also grasp the situation in the ground direction through the vibration. That is, it becomes possible to broadcast information about the surrounding environment with high accuracy.
  • simultaneous notification to the user 2 via voice as described in the first embodiment may be adopted.
  • vibration information may be generated as notification information to notify the situation of the surrounding environment corresponding to the front direction
  • audio information may be generated as the notification information to notify the situation of the surrounding environment corresponding to the ground direction.
  • a "safety distance range” and a “notification distance range” are set.
  • the "safe distance range” is a distance range that is far from the obstacle 44 (48) or the fall danger point 50 and is determined to be safe. If the distance from the obstacle 44 (48) or fall danger point 50 is included in the "safe distance range", no notification is necessary and reproduction of the notification information is stopped. That is, the output of audio information from the speaker 11 and the output of the vibration pattern according to the vibration information of the vibration device 12 are stopped.
  • the “notification distance range” is a distance range in which it is determined that the obstacle 44 (48) or fall danger point 50 is approaching and that notification is necessary.
  • The notification according to the danger level is executed, for example, as follows:
  "Frontal obstacle" (danger level "medium"): discontinuous mid-range sound.
  "Up stairs/up escalator" (danger level "low"): discontinuous low-range sound.
  "Ground obstacle (large)" (danger level "medium"): discontinuous mid-range vibration.
  "Ground obstacle (small)" (danger level "low"): discontinuous high-range vibration.
  "Down stairs, down escalators, and other falling points" (danger level "high"): discontinuous low-range vibration.
  By executing notifications according to the danger level in this way, highly accurate notification is achieved, and the user 2 can intuitively grasp the danger level.
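  • The listing above can be held as a simple lookup table, as in this Python sketch; the tuple fields (modality, frequency range, danger level) mirror the listing, and the keys are illustrative labels:

    NOTIFICATION_TABLE = {
        "frontal obstacle":         ("sound",     "mid",  "medium"),
        "up stairs / up escalator": ("sound",     "low",  "low"),
        "ground obstacle (large)":  ("vibration", "mid",  "medium"),
        "ground obstacle (small)":  ("vibration", "high", "low"),
        "fall danger point":        ("vibration", "low",  "high"),
    }

    def notify(situation):
        modality, freq_range, danger_level = NOTIFICATION_TABLE[situation]
        print(f"{situation}: discontinuous {freq_range}-range {modality} "
              f"(danger level {danger_level})")

    notify("ground obstacle (small)")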
  • the "safety distance range” and the “notification distance range” can be set using, for example, a threshold related to distance. If the obstacle 44 (48) or the fall danger point 50 is further away than the threshold value, it is determined that the distance is within the "safe distance range”. If the obstacle 44 (48) or the falling danger point 50 is closer than the threshold value, it is determined that the obstacle 44 (48) or the fall danger point 50 is included in the "reported distance range”.
  • the threshold value is set, for example, based on the distance D on the ground 3 from the user 2 to the measurement point P.
  • the distance D may be used as it is as the threshold value.
  • a value close to the distance D may be used as the threshold.
  • For example, when the distance D from the user 2 to the measurement point P is relatively small, the distance D is used as it is as the threshold; when the distance D is relatively large, a value shorter than the distance D is used as the threshold.
  • any other setting method may be adopted.
  • the method of determining the danger level, the method of outputting notification information, the method of setting the "safe distance range” and the “notification distance range”, etc. are not limited and may be set arbitrarily.
  • Personalization and customization may be freely performed based on the environment in which the user 2 moves (such as what objects are present on the route walked every day), information about the walking of the user 2 (such as walking speed), and so on.
  • FIG. 20 is a schematic diagram for explaining another example of the process for notifying obstacles and falling danger points.
  • the "notification distance range” is further divided into a “soft notification distance range” and a “danger notification distance range.”
  • the "soft notification distance range” is a distance range for notifying that the obstacle 44 (48) or fall danger point 50 is approaching. In other words, it is a distance range that notifies you of the need for vigilance.
  • the “danger warning distance range” is a distance range in which the obstacles 44 (48) and falling danger points 50 are approaching and the danger level is high. In other words, this is a distance range that notifies you that the vehicle is about to collide with the obstacle 44 (48) or fall into the fall danger point 50.
  • In the example of FIG. 20, 4 m is set as the distance indicating the boundary between the "safe distance range" and the "notification distance range", and 2 m is set as the distance indicating the boundary between the "soft notification distance range" and the "danger notification distance range". That is, the range from 0 m to 2 m from an obstacle or fall danger point is the "danger notification distance range", the range from 2 m to 4 m is the "soft notification distance range", and the range of 4 m or more is the "safe distance range".
  • the distance indicating the boundary between the "safety distance range” and the “notification distance range” may be set arbitrarily.
  • In this embodiment, the tempo of the discontinuously output sounds and vibrations is switched between the two ranges: in the "soft notification distance range" they are output at a relatively low tempo, and in the "danger notification distance range" at a relatively high tempo (a zoning sketch follows below).
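  • A Python sketch of this zoning using the 4 m and 2 m boundaries of FIG. 20; the tempo values are arbitrary examples:

    def notification_zone(distance_m, notify_boundary_m=4.0, danger_boundary_m=2.0):
        if distance_m >= notify_boundary_m:
            return "safe distance range", None  # playback stopped
        if distance_m >= danger_boundary_m:
            return "soft notification distance range", "low tempo (e.g. 60 BPM)"
        return "danger notification distance range", "high tempo (e.g. 160 BPM)"

    for d in (6.0, 3.0, 1.0):
        print(d, notification_zone(d))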
  • the present invention is not limited to this, and the intensity of the sound, the playback speed of the song, the BPM, the intensity of vibration, the frequency, etc. may be controlled.
  • the notification of surrounding information continues until the user 2 turns off the main body power.
  • the operation of the peripheral information notification system 41 ends (step 209).
  • FIG. 21 is a schematic diagram showing an example of detection of "ground obstacle (large)" and "ground obstacle (small)". As shown in FIG. 21, until the detection range of the ground-side distance measuring sensor 28 reaches the ground obstacle 48, the ground-side distance information remains unchanged at the distance to the ground 3, and the deviation of the ground-side distance information is stable. That is, there is a "deviation stabilization period".
  • When detecting the ground obstacle 48, the deviation of the ground side distance information may take an unstable value at the detection start timing and the detection end timing. Therefore, the deviation at the detection start timing and at the detection end timing may be excluded from the obstacle determination.
  • deviation data excluding the deviation of the ground distance information at the detection start timing and the detection end timing is used as data that can be used to determine an obstacle. This makes it possible to detect the ground obstacle 48 with high accuracy.
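  • A minimal Python sketch of this exclusion, assuming the deviation samples of one detection episode have already been segmented out:

    def usable_deviations(episode_deviations):
        """Drop the first and last samples, which may take unstable values."""
        if len(episode_deviations) <= 2:
            return []
        return episode_deviations[1:-1]

    print(usable_deviations([0.90, 0.31, 0.30, 0.29, 0.80]))  # [0.31, 0.30, 0.29]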
  • As described above, in the surrounding information notification system 41, the controller 8 generates notification information based on first detection information, including the fluctuation and variation of the front side distance information (first distance information) detected by the front side distance measurement sensor 27 (first distance measurement sensor), and second detection information, including the fluctuation and variation of the ground side distance information (second distance information) detected by the ground side distance measurement sensor 28 (second distance measurement sensor). This makes it possible to detect information about the surrounding environment with high accuracy and notify the user 2 of it.
  • Danger avoidance is realized not with camera recognition technology, which is ill-suited to low cost, weight reduction, and miniaturization, but with lightweight and inexpensive distance measurement sensors, making it possible to implement the above-mentioned accessibility device.
  • As shown in FIG. 15, by focusing on the fluctuation and variation of distance information in the two channels of the front direction and the ground direction, it becomes possible to grasp the surrounding environment and situation while walking in detail and instantly. In other words, it is possible to simultaneously detect a "collision with an obstacle" and a "fall due to an abnormality on the floor". This has the effect of improving the safety and sense of security of the visually impaired user 2.
  • the detection directions of the plurality of ranging sensors may be set arbitrarily. Based on the fluctuations and variations in distance information of a plurality of distance measuring sensors whose detection directions are set in various directions, it is possible to notify the user 2 of various environmental information with high accuracy.
  • the type of fall danger point may be able to be determined by appropriately setting the number of ranging sensors and the detection direction of each ranging sensor. For example, it may be possible to determine whether the point is a fall danger point in the shape of a staircase going downward or a fall danger point in another shape. Furthermore, it may be possible to determine that it is at the edge of a station platform. Then, the danger level may be determined according to the type of fall danger point.
  • The threshold for determining whether an obstacle is detected in the front direction and the distance D from the user 2 to the measurement point P may be adjusted based on the routes the user 2 takes every day. For example, route data frequently used by the user 2 is acquired using GPS or the like, and based on the route data, settings such as the maximum obstacle detection distance in the front direction and the irradiation angle in the ground direction may be automatically adjusted (personalized).
  • If a route has many upward stairs, settings suitable for detecting upward stairs are adopted; if a route has many fall danger points, settings suitable for detecting fall danger points are adopted. The settings may also be adjusted automatically along the route.
  • If a voice input device such as a microphone is provided, settings such as the maximum obstacle detection distance in the front direction and the irradiation angle in the ground direction may be adjusted by voice input from the user 2.
  • a motor mechanism, an actuator mechanism, or the like can be appropriately configured as a mechanism for automatically adjusting settings such as the maximum distance for obstacle detection in the front direction and the irradiation angle in the ground direction.
  • the configuration that can control the detection direction of each ranging sensor can also be said to be a direction control unit that changes at least one of the first direction and the second direction.
  • It may be possible to selectively switch between a mode in which only the front side distance measurement sensor 27 is driven, a mode in which only the ground side distance measurement sensor 28 is driven, and a mode in which both distance measurement sensors 27 and 28 are driven.
  • a mode in which only important channels are driven may be automatically set depending on the remaining battery level, or the user 2 may be able to set the mode as appropriate.
  • If an image sensor or the like is installed and a person can be detected based on image information, the detected person may be excluded from obstacle detection. As the configuration of the sensor section 6, an integrated configuration with a smartphone may also be adopted, utilizing a distance measuring sensor or the like mounted on the smartphone.
  • the distance information acquisition section 42 corresponds to one embodiment of the distance information acquisition section according to the present technology.
  • the situation determining unit 43 corresponds to an embodiment of the situation determining unit according to the present technology.
  • the notification information generation section 18 and the notification control section 19 function as an embodiment of a notification section that generates and outputs notification information for notifying the determination result by the situation determination section.
  • FIG. 22 is a block diagram showing a configuration example for realizing notification of surrounding information according to the third embodiment.
  • the sensor unit 6 is further equipped with a 9-axis sensor 55 and a GPS 56.
  • the 9-axis sensor 55 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor.
  • the nine-axis sensor 55 can detect acceleration, angular velocity, and orientation in three axes of the sensor section 6 (sensor main body 21).
  • An IMU (Inertial Measurement Unit) sensor may also be used.
  • the GPS 56 acquires information on the current position of the sensor section 6 (sensor main body 21). Further, a sensor that acquires biological information such as pulse, heartbeat, body temperature, and brain waves may be used as necessary.
  • the controller 8 further includes a self-position estimating section 57 and a map information generating section 58. These blocks are realized by the processor of the controller 8 executing a program according to the present technology. In order to realize each functional block, dedicated hardware such as an IC (integrated circuit) may be used as appropriate.
  • the self-position estimating section 57 estimates the self-position of the sensor section 6 (sensor main body 21).
  • the self-position includes the position and orientation of the sensor body 21.
  • the self-position estimating unit 57 can calculate position information indicating where the sensor body 21 is located and posture information such as which direction the sensor body 21 is facing. Furthermore, based on the posture information of the sensor body 21, it is possible to detect which direction the user 2 is currently facing. In other words, it is possible to detect which direction the front distance measuring sensor 27 is facing.
  • the attitude, position, and movement (movement) of the sensor body 21 can also be regarded as the attitude, position, and movement (movement) of the user 2.
  • the self-position of the sensor body 21 is calculated based on the detection results from the sensor section 6.
  • an image sensor or the like for acquiring surrounding image information may be installed.
  • For example, a three-dimensional coordinate system is set for the surrounding space. Coordinate values (for example, XYZ coordinate values) in an absolute coordinate system (world coordinate system) may be used, or coordinate values (for example, xyz coordinate values or uvd coordinate values) in a relative coordinate system with a predetermined point as the reference (origin) may be used. In the latter case, the reference origin may be set arbitrarily. The self-position estimating unit 57 calculates position coordinates in the three-dimensional coordinate system that has been set.
  • As posture information, for example, with the X axis as the pitch axis, the Y axis as the roll axis, and the Z axis as the yaw axis, the pitch angle, roll angle, and yaw angle are calculated with reference to the front direction of the user 2 (sensor body 21).
  • the specific format of the position information and posture information of the user 2 (sensor body 21) is not limited.
  • the algorithm for estimating the self-position of the sensor body 21 is not limited, and any algorithm such as SLAM (Simultaneous Localization and Mapping) may be used. In addition, any machine learning algorithm or the like may be used.
  • the map information generation unit 58 generates an obstacle space map corresponding to the surrounding environment based on the history of determination results by the situation determination unit 43.
  • the obstacle space map corresponds to one embodiment of surrounding map information according to the present technology.
  • the situation determination unit 43 can detect the obstacles 44 (48) and the fall danger points 50.
  • FIG. 23 is a flowchart showing an example of notification of surrounding information according to this embodiment. Steps 301-307 in FIG. 23 are similar to steps 201-207 shown in FIG. 14. In this embodiment, in step 308, the map information generation unit 58 generates an obstacle space map.
  • FIG. 24 is a schematic diagram showing an example of an obstacle space map.
  • a three-dimensional coordinate system is set in which the XY direction is the horizontal direction and the Z direction is the height direction.
  • In step 308, the XY coordinate values of the detected obstacles 44 (48) and fall danger points 50 are calculated as position information, based on the self-position of the user 2 and the yaw angle (rotation angle about the Z axis) of the distance measurement sensor (front side distance measurement sensor 27); a sketch of this projection follows below.
  • an obstacle space map 60 containing positional information of the detected obstacles 44 (48) and fall danger points 50 is generated.
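  • A Python sketch of this projection onto the map, assuming the yaw angle is measured counterclockwise from the X axis in the horizontal XY plane (this convention, and all names, are assumptions for the example):

    import math

    def avoidance_object_xy(user_x, user_y, yaw_rad, measured_distance_m):
        """Project a detection along the sensor's facing direction onto the map."""
        return (user_x + measured_distance_m * math.cos(yaw_rad),
                user_y + measured_distance_m * math.sin(yaw_rad))

    obstacle_space_map = []  # list of avoidance-object entries
    x, y = avoidance_object_xy(0.0, 0.0, math.radians(30), 3.0)
    obstacle_space_map.append({"type": "frontal obstacle", "x": x, "y": y})
    print(obstacle_space_map)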
  • In FIG. 24, the obstacles 44 (48) and the fall danger points 50 are all illustrated schematically in the same manner. However, it is also possible to generate an obstacle space map 60 that includes information such as the types of the obstacles 44 (48) and of the fall danger points 50, that is, a map including the spatial positions, types, and attributes of the obstacles 44 (48) and the fall danger points 50.
  • the obstacles 44 (48) and the fall danger points 50 may be collectively referred to as objects to be avoided.
  • the object to be avoided can also be called a dangerous object.
  • the notification information generation unit 18 and notification control unit 19 execute notification via audio.
  • notification is performed using stereophonic sound regarding the object to be avoided that is closest to the user 2 on the obstacle space map 60. That is, the localization of the audio information is set so that the audio can be heard from the position of the object to be avoided at the closest distance with respect to the current direction of the user 2 (front direction). Additionally, audio information is output such that the volume is attenuated depending on the distance to the object to be avoided.
  • In this embodiment, notification using stereophonic sound is performed for the object to be avoided at the closest distance from the user 2, but which objects to be avoided are targeted for stereophonic notification may be set arbitrarily.
  • notification using stereophonic sound may be performed regarding the object to be avoided that is closest to the user 2 and the object to be avoided that is the second closest to the user 2 . Furthermore, if a plurality of objects to be avoided are located at the same distance from the user 2, notification using stereophonic sound may be performed for all of the plurality of objects.
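  • A Python sketch of choosing the nearest object on the map and setting localization and volume for it; the inverse-distance volume law and the 1 m reference distance are assumptions for the example:

    import math

    def nearest_object(user_x, user_y, objects):
        return min(objects,
                   key=lambda o: math.hypot(o["x"] - user_x, o["y"] - user_y))

    def stereo_params(user_x, user_y, user_yaw_rad, obj_x, obj_y, ref_dist_m=1.0):
        dx, dy = obj_x - user_x, obj_y - user_y
        distance_m = math.hypot(dx, dy)
        # Localization: direction of the object relative to the user's front.
        relative_angle = math.atan2(dy, dx) - user_yaw_rad
        # Volume attenuated with distance (inverse-distance law assumed).
        volume = min(1.0, ref_dist_m / max(distance_m, ref_dist_m))
        return relative_angle, volume

    objects = [{"x": 2.0, "y": 1.0}, {"x": -4.0, "y": 0.5}]
    target = nearest_object(0.0, 0.0, objects)
    print(stereo_params(0.0, 0.0, 0.0, target["x"], target["y"]))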
  • notification is performed in response to the detection of an object to be avoided that exists in a position in the front direction.
  • In the situation of the obstacle space map 60 of FIG. 24A, an object to be avoided that exists at a position in front of the user 2 can be detected and notified.
  • the user 2 changes the direction of travel to the right in order to avoid the notified object to be avoided.
  • When the previously detected object to be avoided deviates from the front direction of the user 2, the detection and notification end. After that, the user 2 may walk right past the object without it being detected, and as a result may collide with the end of the obstacle 44 (48) or fall from the edge of the fall danger point 50.
  • Also, if the orientation of the sensor unit 6 (sensor main body 21) changes, detection of the object to be avoided may fail and the notification may end. The user 2 may then move toward the object to be avoided that was detected immediately before, causing a collision with it or a fall from it. That is, it may become impossible to avoid an object to be avoided once it has been detected.
  • In this embodiment, an obstacle space map 60 is generated that includes the position information of objects to be avoided detected in the past. An object to be avoided that is close to the user 2 is then notified by stereophonic sound so that its direction relative to the user 2 is reflected. Thereby, even if the object to be avoided deviates from the front direction of the user 2, it is possible to notify the user 2 of it. This solves the problems described above and improves the success rate of avoiding an object to be avoided once it has been detected.
  • the notification method described in the second embodiment and the notification method described in the third embodiment may be used together. That is, both real-time notification for the front direction and ground direction and notification using stereophonic sound using the obstacle space map 60 may be performed.
  • the notification method described in the first embodiment may be used in combination.
  • if the user 2 is also hearing-impaired (for example, hard of hearing, or using a hearing aid or sound collector), notification using vibration instead of stereophonic sound is also effective. That is, vibration is presented to the body part corresponding to the position of the closest object to be avoided relative to the user 2's current direction (front direction). Further, the strength of the vibration may be attenuated depending on the distance to the object to be avoided.
  • the user 2 may be able to appropriately set whether to notify using stereophonic sound or vibration. Further, if the user 2 is using a hearing aid, a sound collector, etc., processing such as hearing aid processing may be performed on the stereophonic sound that is notified.
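  • The vibration-based alternative can be sketched in the same way. In the following minimal example, the actuator names, angle ranges, and attenuation rule are illustrative assumptions, not part of the present disclosure; it maps the relative bearing of the closest object to be avoided to a body part and attenuates the vibration strength with distance:

```python
def vibration_command(bearing_deg, distance_m, max_range=5.0):
    """Choose a wearable actuator from the relative bearing (0 = straight
    ahead) and scale the vibration intensity down with distance."""
    if -45 <= bearing_deg <= 45:
        part = "chest"        # object ahead
    elif 45 < bearing_deg <= 135:
        part = "left_arm"     # object to the left
    elif -135 <= bearing_deg < -45:
        part = "right_arm"    # object to the right
    else:
        part = "back"         # object behind
    intensity = max(0.0, 1.0 - distance_m / max_range)  # weaker when farther
    return {"actuator": part, "intensity": round(intensity, 2)}

print(vibration_command(bearing_deg=30, distance_m=1.5))   # chest, strong
print(vibration_command(bearing_deg=-90, distance_m=4.0))  # right arm, weak
```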
  • FIG. 25 is a schematic diagram showing another configuration example of the surrounding information notification system according to the present embodiment.
  • the peripheral information notification system 64 is realized by the controller 8 shown in FIG. 1 and the server device 63 arranged on the network 62 working together.
  • the network 62 is constructed by, for example, the Internet or a wide area communication network.
  • any WAN (Wide Area Network), LAN (Local Area Network), etc. may be used, and the protocol for constructing the network 62 is not limited.
  • the server device 63 includes hardware necessary for configuring a computer, such as a CPU, ROM, RAM, and HDD.
  • the server device 63 can be realized by any computer such as a PC (Personal Computer).
  • the map information generation unit 58 is implemented by the server device 63.
  • the determination result by the situation determination unit 43 ("obstacle/fall danger point information" in the figure) is transmitted to the server device 63 via the network 62.
  • information on the self-position estimated by the self-position estimating unit 57 is also transmitted to the server device 63 via the network 62.
  • in the server device 63, a history of determination results by the situation determining section 43 is stored in an obstacle information DB. The map information generation unit 58 then generates an obstacle space map 60 as illustrated in FIG. 24. The obstacle space map 60 generated by the server device 63 is transmitted to the controller 8 via the network 62.
  • the DB may be constructed in a storage device within the server device 63, or may be constructed in an external storage device that the server device 63 can access.
  • the notification information generation unit 18 and notification control unit 19 of the controller 8 execute notification using stereophonic sound based on the received obstacle space map 60.
  • the surrounding information notification system 64 can be realized as a cloud system using cloud computing.
  • even if the edge terminal on the user 2 side does not have high processing capacity, it is possible to realize notification using a wide-range obstacle space map 60.
  • this makes it possible to realize an accessibility device that is lightweight and inexpensive, yet can report information about the surrounding environment with high accuracy.
  • the server device 63 on the network 62 may integrate "obstacle/fall danger point information" sent from multiple users 2 using the surrounding information notification system 64. Then, the obstacle space map 60 for each user 2 may be generated based on the integrated "obstacle/fall danger point information".
  • the sensor main body 21 worn by the user 2 walking in a certain area performs a search in the front direction and the ground direction.
  • "obstacle/fall danger point information" related to the detected object to be avoided is transmitted from the controller 8 to the server device 63 via the network 62.
  • the server device 63 integrates the information on objects to be avoided detected by the user 2's search with the information on objects to be avoided detected by another user 2's search, and stores it in the obstacle information DB. Then, an obstacle space map 60 is generated that includes both the position information of the objects to be avoided detected by the user 2's search and the position information of the objects to be avoided detected by the search of the other user 2. The generated obstacle space map 60 is transmitted to both the user 2 and the other user 2 via the network 62.
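  • A minimal sketch of this server-side integration (the report format, grid size, and DB structure are illustrative assumptions, not from the present disclosure) might merge "obstacle/fall danger point information" from multiple users into one obstacle information DB and return the combined obstacle space map to every user:

```python
from collections import defaultdict

GRID = 0.5  # meters per grid cell (assumed resolution)

def cell(x, y):
    """Coarse spatial key so reports of the same spot from different users merge."""
    return (round(x / GRID), round(y / GRID))

obstacle_db = defaultdict(dict)  # cell -> latest report for that cell

def integrate_report(user_id, report):
    """Store one user's 'obstacle/fall danger point information'."""
    key = cell(report["x"], report["y"])
    obstacle_db[key] = {"kind": report["kind"], "source": user_id}

def obstacle_space_map():
    """Map returned to every user: everyone's detections combined."""
    return [{"cell": k, **v} for k, v in obstacle_db.items()]

integrate_report("user_a", {"x": 1.2, "y": 0.4, "kind": "ground_obstacle"})
integrate_report("user_b", {"x": 5.0, "y": 2.1, "kind": "fall_danger_point"})
print(obstacle_space_map())  # both users receive both entries
```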
  • FIG. 26 is a schematic diagram showing a configuration example of a surrounding information notification system according to the fourth embodiment.
  • the surrounding information notification system 66 according to this embodiment is configured by a cloud system. That is, the peripheral information notification system 66 is realized by the controller 8 shown in FIG. 1 and the server device 63 arranged on the network 62 working together.
  • the surrounding information notification system 66 includes a guide device 67 that is communicably connected to the controller 8 and the server device 63 via the network 62.
  • the guidance device 67 has a display and is used as a remote terminal by an operator 68 who provides route guidance and the like to the user 2.
  • the route guidance can also be called a walking guidance notification.
  • the server device 63 stores the information on the object to be avoided ("obstacle/fall danger point information") sent from the user 2 in the obstacle information DB. Further, the server device 63 has built therein a real world map information DB in which map information of the real world is stored. For example, map information of various regions is acquired from a map server that provides map services on the network 62, and stored in the real world map information DB.
  • FIG. 27 is a flowchart showing an example of notification of surrounding information according to this embodiment. Steps 401-408 in FIG. 27 are similar to steps 301-308 shown in FIG. 23.
  • in step 408, the controller 8 transmits the determination result by the situation determining unit 43 ("obstacle/fall danger point information") and the self-position information estimated by the self-position estimating unit 57 to the server device 63 via the network 62. Then, the map information generation unit 58 in the server device 63 generates an obstacle space map 60.
  • the map information generation unit 58 generates a real space dangerous object map 69 in which surrounding real world information is added to the obstacle space map 60, based on the user 2's position information in the real world.
  • the obstacle space map 60 and real world map information are linked to generate a real space dangerous object map 69 to which landmark information and the like are added (step 409).
  • the real space dangerous object map 69 contains the real-world position information and attribute information of the objects to be avoided (dangerous objects), namely the obstacles 44 (48) and the fall danger points 50, as well as real-world information such as the user 2's real-world position information and landmark information. Note that the real world information includes arbitrary geographic information such as place names and topography.
  • the real space dangerous object map 69 is an embodiment of real surrounding map information according to the present technology.
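  • A minimal sketch of linking the obstacle space map with real world map information (the landmark list, coordinate system, and distance threshold are illustrative assumptions) might annotate each object to be avoided with its nearest landmark, which is the kind of entry an operator could phrase as "stairs near the station entrance":

```python
import math

landmarks = [
    {"name": "station entrance", "x": 10.0, "y": 0.0},
    {"name": "parking lot entrance", "x": 2.0, "y": 8.0},
]

def annotate(obstacles, max_landmark_dist=15.0):
    """Attach the nearest landmark name to each obstacle entry."""
    annotated = []
    for ob in obstacles:
        nearest = min(landmarks,
                      key=lambda lm: math.hypot(lm["x"] - ob["x"], lm["y"] - ob["y"]))
        d = math.hypot(nearest["x"] - ob["x"], nearest["y"] - ob["y"])
        entry = dict(ob)
        entry["landmark"] = nearest["name"] if d <= max_landmark_dist else None
        annotated.append(entry)
    return annotated

obstacles = [{"kind": "downward stairs", "x": 9.0, "y": 1.0}]
print(annotate(obstacles))  # annotated with 'station entrance'
```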
  • the real space dangerous object map 69 is transmitted to the guide device 67 via the network 62 and displayed on the display of the guide device 67.
  • the display mode of the real space dangerous object map 69 is not limited and may be set arbitrarily.
  • an icon of an object to be avoided may be superimposed on map information of the real world, and when the icon is selected, detailed information of the object to be avoided may be displayed.
  • the display area of the display may be divided and information regarding the avoidable object may be displayed in a list.
  • the operator 68 executes route guidance using the real space dangerous object map 69. For example, it becomes possible to provide route guidance that combines real-world information with information on dangerous objects to be avoided, such as "There are stairs leading down to the entrance of ○○ station 5 meters ahead. Please proceed with caution," "A car is parked at the entrance of the ○○ parking lot. Please stop temporarily," or "There is a large obstacle on the ground. Please slow down." As a result, the user 2 can obtain information normally available to a healthy person, and safety in avoiding danger can be further improved.
  • the notification information generation unit 18 in the controller 8 receives guidance information including the contents of the route guidance of the operator 68 (step 410). Based on the received guidance information, the notification control unit 19 outputs the contents of the route guidance from the speaker 11.
  • the configuration and method for transmitting the guidance information to the controller 8 on the user 2 side via the network 62 are not limited.
  • the guidance information can be transmitted using a well-known technique using a voice input device such as a microphone of the guidance device 67, a communication device, or the like.
  • the route guidance provided by the operator 68 can also be said to be notification using real world information based on the real space dangerous object map 69. Therefore, the mechanism provided in the guidance device 67 for transmitting guidance information including the contents of the operator 68's route guidance can be said to function as a "notification section" in the surrounding information notification system 66.
  • Automatic voice route guidance based on the real space dangerous object map 69 may be performed instead of the route guidance by the operator 68. Automatic voice route guidance based on the real space dangerous object map 69 is also included in the notification using real world information based on the real space dangerous object map 69. Further, the mechanism that executes route guidance using automatic voice functions as a "notification unit" in the surrounding information notification system 66.
  • the real space dangerous object map 69 may be transmitted to the controller 8 on the user 2 side via the network 62. Then, the notification information generation section 18 and the notification control section 19 may perform route guidance or the like based on the real space dangerous object map 69.
  • the image of a company that creates smart accessibility products is enhanced, and the company's brand image and standing can be expected to improve as a company that aims to contribute to society.
  • FIG. 28 is a schematic diagram showing another example of a method of outputting sound according to distance.
  • musical tone information such as a predetermined song is played in the "safe distance range.” For example, songs that the user 2 likes are played.
  • a detection notification sound that notifies that an object to be avoided has been detected is faded in so as to be superimposed on the musical sound information.
  • the detection notification sound for example, discontinuous mid-range sound is output.
  • the mixing amount for the detection notification sound is increased linearly, but the present invention is not limited to this, and various other fade-in controls may be adopted.
  • both the musical tone information and the detection notification sound are output at the maximum standard level.
  • both the musical tone information and the detection notification sound are faded out.
  • the mixing amount for the musical tone information is reduced in a curve (rapidly), and the mixing amount for the detection notification sound is reduced linearly (reduced at a constant rate).
  • Various such fade-out controls may be employed.
  • the danger notification sound indicating that danger is approaching is faded in.
  • the danger notification sound for example, discontinuous high-frequency sound is output.
  • the danger notification sound becomes louder to the maximum standard level to strongly alert the user 2.
  • for example, if a buzzer sound is output whenever an object to be avoided is detected, the buzzer may continue to sound in crowded station premises, inside an elevator, or while using an escalator, which can be unpleasant.
  • in this embodiment, musical tone information of a "favorite song" is used as the main sound source, and a detection notification sound such as a sonar sound is mixed in while fading according to the approach distance to the object to be avoided. When the object to be avoided is extremely close, the main music, the detection notification sound, and the danger notification sound are cross-faded and mixed. Such notification becomes possible, and unpleasant notifications for the user 2 can be eliminated.
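  • The mixing behavior of FIG. 28 can be sketched as a gain schedule over distance. In the following minimal example, the distance breakpoints and curve shapes are illustrative assumptions, not values from the present disclosure: within the safe range only the music plays, the detection notification sound then fades in linearly, and very close the music fades out along a curve while the danger notification sound fades in:

```python
def mix_gains(distance_m, safe=5.0, near=2.0, danger=1.0):
    """Return (music, detection, danger) gains in [0, 1] for a distance."""
    if distance_m >= safe:                      # safe distance range
        return 1.0, 0.0, 0.0
    if distance_m >= near:                      # fade detection sound in linearly
        t = (safe - distance_m) / (safe - near)
        return 1.0, t, 0.0
    if distance_m >= danger:                    # cross-fade toward danger sound
        t = (near - distance_m) / (near - danger)
        music = (1.0 - t) ** 2                  # curved (rapid) fade-out
        detect = 1.0 - t                        # linear fade-out
        return music, detect, t
    return 0.0, 0.0, 1.0                        # danger sound at full level

for d in (6.0, 3.5, 1.5, 0.5):
    m, det, dgr = mix_gains(d)
    print(f"{d:0.1f} m -> music={m:.2f} detection={det:.2f} danger={dgr:.2f}")
```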
  • the case where the detected object is a person (rather than an object to be avoided) may also be taken into account.
  • the necessary distance measurement channels may be automatically switched based on map information of the real world or the like. For example, in the configuration shown in FIG. 8, sensing in the front direction by the front-side distance measurement sensor 27 and sensing in the ground direction by the ground-side distance measurement sensor 28 may be automatically switched based on surrounding information. For example, when the user 2 is moving on a station platform, sensing toward the ground can be prioritized.
  • the walking speed of the user 2 may be estimated from the change in the front-side distance information, and used together with the duration of the deviation in the ground-side distance information to determine the size of the ground obstacle 48; a sketch of this idea follows.
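  • A minimal sketch of this estimation (the sampling period, smoothing, and example readings are illustrative assumptions) derives the walking speed from the rate of decrease of the front-side distance, assuming the object ahead is stationary, and combines it with the deviation time of the ground-side distance:

```python
def estimate_walking_speed(front_distances, dt=0.1):
    """front_distances: successive front-side readings (m); dt: period (s)."""
    if len(front_distances) < 2:
        return 0.0
    # Distance to a static object ahead decreases as the user walks toward it.
    deltas = [front_distances[i] - front_distances[i + 1]
              for i in range(len(front_distances) - 1)]
    avg_delta = sum(deltas) / len(deltas)
    return max(0.0, avg_delta / dt)  # m/s; a receding object clamps to 0

def ground_obstacle_length(speed_mps, deviation_time_s):
    """Approximate size of a ground obstacle from how long the ground-side
    distance deviated while passing it at the estimated walking speed."""
    return speed_mps * deviation_time_s

readings = [5.0, 4.9, 4.8, 4.68, 4.56]  # approaching at roughly 1.1 m/s
v = estimate_walking_speed(readings)
print(f"speed ~ {v:.2f} m/s, obstacle ~ {ground_obstacle_length(v, 0.5):.2f} m")
```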
  • by applying the peripheral information notification system according to the present technology, it is also possible to realize a device compatible with a speech-type UI for visually impaired people.
  • a surrounding information notification system according to the present technology may also be constructed for healthy people. For example, a front-side distance measuring sensor whose detection direction is the front direction and a back-side distance measuring sensor whose detection direction is the back direction (toward the rear) may be arranged. Then, a suspicious person sneaking up from behind may be detected based on the back-side distance information detected by the back-side distance measuring sensor.
  • the surrounding information notification system according to the present technology may be provided to blindfolded healthy people at a theme park or the like where they can experience what blind people experience. Furthermore, when a healthy person uses a smartphone while walking, a surrounding information notification system according to the present technology may be constructed by attaching a distance measurement sensor or the like facing the same direction as the outward-facing camera. Further, a surrounding information notification system according to the present technology may be constructed for a vehicle, a drone, or the like, and the surrounding information may be notified to a pilot or the like.
  • FIG. 29 is a block diagram showing an example of a hardware configuration of a computer (information processing device) 70 that can be used to construct a peripheral information notification system according to the present technology.
  • the computer 70 includes a CPU 71, a ROM 72, a RAM 73, an input/output interface 75, and a bus 74 that connects these to each other.
  • a display section 76, an input section 77, a storage section 78, a communication section 79, a drive section 80, and the like are connected to the input/output interface 75.
  • the display section 76 is a display device using, for example, liquid crystal, EL, or the like.
  • the input unit 77 is, for example, a keyboard, pointing device, touch panel, or other operating device.
  • the input section 77 includes a touch panel
  • the touch panel can be integrated with the display section 76.
  • the storage unit 78 is a nonvolatile storage device, such as an HDD, flash memory, or other solid-state memory.
  • the drive section 80 is a device capable of driving a removable recording medium 81 such as an optical recording medium or a magnetic recording tape.
  • the communication unit 79 is a modem, router, or other communication equipment that can be connected to a LAN, WAN, etc., and is used to communicate with other devices. The communication unit 79 may communicate using either wired or wireless communication. The communication unit 79 is often used separately from the computer 70.
  • Information processing by the computer 70 having the above hardware configuration is realized by cooperation between software stored in the storage unit 78 or the ROM 72, and hardware resources of the computer 70.
  • the information processing method according to the present technology is realized by loading a program constituting software stored in the ROM 72 or the like into the RAM 73 and executing it.
  • the program is installed on the computer 70 via the removable recording medium 81, for example.
  • the program may be installed on the computer 70 via a global network or the like.
  • any computer-readable non-transitory storage medium may be used.
  • the information processing method (peripheral information notification method) and the program according to the present technology may be executed by multiple computers communicably connected via a network or the like, thereby constructing the information processing device according to the present technology. That is, the information processing method and the program according to the present technology can be executed not only in a computer system configured by a single computer but also in a computer system in which multiple computers operate in conjunction with each other. Note that in the present disclosure, a system means a collection of multiple components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing or not.
  • a plurality of devices housed in separate casings and connected via a network and a single device in which a plurality of modules are housed in one casing are both systems.
  • execution of the information processing method and the program according to the present technology by a computer system includes, for example, both the case where acquisition of surrounding information (acquisition of distance information, situation determination), generation of notification information (generation of audio information, generation of vibration information), and notification control are all executed by a single computer, and the case where each process is executed by a different computer.
  • execution of each process by a predetermined computer includes causing another computer to execute part or all of the process and acquiring the results. That is, the information processing method and program according to the present technology can also be applied to a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
  • in the present disclosure, expressions such as "centered", "uniform", "equal", "identical", "orthogonal", "parallel", "symmetrical", "extending", "axial", "cylindrical", and "annular" also include states that fall within a predetermined range (for example, a ±10% range) based on "completely centered", "completely uniform", "completely equal", "completely identical", "completely orthogonal", "completely parallel", "completely symmetrical", "completely extending", "completely axial", "completely cylindrical", "completely annular", and the like. Therefore, even when words such as "approximately" and "substantially" are not added, concepts that can be expressed by adding "approximately" or "substantially" may be included. Conversely, when a state is expressed with words such as "approximately" or "substantially" added, a complete state is not necessarily excluded.
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as a car, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility vehicle, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor).
  • FIG. 30 is a block diagram showing a schematic configuration example of a vehicle control system 7000, which is an example of a mobile object control system to which the technology according to the present disclosure can be applied.
  • Vehicle control system 7000 includes multiple electronic control units connected via communication network 7010.
  • the vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, an outside-vehicle information detection unit 7400, an in-vehicle information detection unit 7500, and an integrated control unit 7600.
  • the communication network 7010 connecting these control units may be, for example, an in-vehicle communication network compliant with any standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
  • each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage section that stores the programs executed by the microcomputer or parameters used in various calculations, and a drive circuit that drives the devices to be controlled. Each control unit also includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for communicating with devices or sensors inside and outside the vehicle by wired or wireless communication. In FIG. 30, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in-vehicle device I/F 7660, an audio/image output section 7670, an in-vehicle network I/F 7680, and a storage section 7690 are illustrated as the functional configuration of the integrated control unit 7600.
  • the other control units similarly include a microcomputer, a communication I/F, a storage section, and the like.
  • the drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • the drive system control unit 7100 functions as a control device for a driving force generation device, such as an internal combustion engine or a drive motor, that generates driving force for the vehicle, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates braking force for the vehicle.
  • the drive system control unit 7100 may have a function as a control device such as ABS (Antilock Brake System) or ESC (Electronic Stability Control).
  • a vehicle state detection section 7110 is connected to the drive system control unit 7100.
  • the vehicle state detection section 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of axial rotational movement of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors that detect the operation amount of the accelerator pedal, the operation amount of the brake pedal, the steering angle of the steering wheel, the engine rotational speed, the wheel rotational speed, and the like.
  • the drive system control unit 7100 performs arithmetic processing using signals input from the vehicle state detection section 7110, and controls the internal combustion engine, the drive motor, the electric power steering device, the brake device, and the like.
  • the body system control unit 7200 controls the operations of various devices installed in the vehicle body according to various programs.
  • the body system control unit 7200 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a turn signal, or a fog lamp.
  • radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, may be input to the body system control unit 7200.
  • the body system control unit 7200 receives input of these radio waves or signals, and controls the door lock device, power window device, lamp, etc. of the vehicle.
  • the battery control unit 7300 controls the secondary battery 7310, which is a power supply source for the drive motor, according to various programs. For example, information such as battery temperature, battery output voltage, or remaining battery capacity is input to the battery control unit 7300 from a battery device including a secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and controls the temperature adjustment of the secondary battery 7310 or the cooling device provided in the battery device.
  • the external information detection unit 7400 detects information external to the vehicle in which the vehicle control system 7000 is mounted. For example, at least one of an imaging section 7410 and an external information detection section 7420 is connected to the vehicle exterior information detection unit 7400.
  • the imaging unit 7410 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
  • the vehicle external information detection section 7420 includes, for example, at least one of an environmental sensor for detecting the current weather or meteorological conditions, and a surrounding information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle equipped with the vehicle control system 7000.
  • the environmental sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunlight sensor that detects the degree of sunlight, and a snow sensor that detects snowfall.
  • the surrounding information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device.
  • the imaging section 7410 and the vehicle external information detection section 7420 may be provided as independent sensors or devices, or may be provided as a device in which a plurality of sensors or devices are integrated.
  • FIG. 31 shows an example of the installation positions of the imaging section 7410 and the vehicle external information detection section 7420.
  • the imaging units 7910, 7912, 7914, 7916, and 7918 are provided, for example, at least one of positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield inside the vehicle 7900.
  • An imaging unit 7910 provided in the front nose and an imaging unit 7918 provided above the windshield inside the vehicle mainly acquire images in front of the vehicle 7900.
  • Imaging units 7912 and 7914 provided in the side mirrors mainly capture images of the sides of the vehicle 7900.
  • An imaging unit 7916 provided in the rear bumper or back door mainly acquires images of the rear of the vehicle 7900.
  • the imaging unit 7918 provided above the windshield inside the vehicle is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
  • FIG. 31 shows an example of the imaging range of each of the imaging units 7910, 7912, 7914, and 7916.
  • imaging range a indicates the imaging range of the imaging unit 7910 provided on the front nose, imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided on the side mirrors, respectively, and imaging range d indicates the imaging range of the imaging unit 7916 provided on the rear bumper or back door. For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, an overhead image of the vehicle 7900 viewed from above can be obtained.
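  • A minimal sketch of this overhead-image synthesis (the homography values below are placeholders; in practice they come from calibrating each of the cameras) warps one camera image onto a common ground-plane canvas; compositing the four warped views then yields the bird's-eye image:

```python
import numpy as np

def warp_to_ground(image, H_inv, out_shape):
    """Inverse-warp 'image' into an overhead canvas (nearest-neighbor)."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones_like(xs)
    pts = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T  # 3 x N
    src = H_inv @ pts
    src = src / src[2]                                        # dehomogenize
    sx, sy = np.round(src[0]).astype(int), np.round(src[1]).astype(int)
    canvas = np.zeros((h_out, w_out), dtype=image.dtype)
    valid = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    canvas[ys.ravel()[valid], xs.ravel()[valid]] = image[sy[valid], sx[valid]]
    return canvas

front = np.random.randint(0, 255, (120, 160), dtype=np.uint8)   # dummy camera frame
H_inv = np.array([[1.0, 0.2, 0.0], [0.0, 1.5, 0.0], [0.0, 0.002, 1.0]])  # placeholder
overhead = warp_to_ground(front, H_inv, (200, 200))
print(overhead.shape)  # one panel of the composite bird's-eye view
```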
  • the external information detection units 7920, 7922, 7924, 7926, 7928, and 7930 provided at the front, rear, sides, corners, and the upper part of the windshield inside the vehicle 7900 may be, for example, ultrasonic sensors or radar devices.
  • External information detection units 7920, 7926, and 7930 provided on the front nose, rear bumper, back door, and upper part of the windshield inside the vehicle 7900 may be, for example, LIDAR devices.
  • These external information detection units 7920 to 7930 are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like.
  • the vehicle exterior information detection unit 7400 causes the imaging unit 7410 to capture an image of the exterior of the vehicle, and receives the captured image data. Further, the vehicle exterior information detection unit 7400 receives detection information from the vehicle exterior information detection section 7420 to which it is connected.
  • when the vehicle external information detection section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 transmits ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves.
  • the vehicle exterior information detection unit 7400 may perform object detection processing for people, cars, obstacles, signs, characters on the road surface, etc., or distance detection processing, based on the received information.
  • the external information detection unit 7400 may perform environment recognition processing to recognize rain, fog, road surface conditions, etc. based on the received information.
  • the vehicle exterior information detection unit 7400 may calculate the distance to the object outside the vehicle based on the received information.
  • the outside-vehicle information detection unit 7400 may perform image recognition processing or distance detection processing for recognizing people, cars, obstacles, signs, characters on the road, etc., based on the received image data.
  • the outside-vehicle information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and may synthesize image data captured by different imaging units 7410 to generate an overhead image or a panoramic image.
  • the outside-vehicle information detection unit 7400 may perform viewpoint conversion processing using image data captured by different imaging units 7410.
  • the in-vehicle information detection unit 7500 detects in-vehicle information.
  • a driver condition detection section 7510 that detects the condition of the driver is connected to the in-vehicle information detection unit 7500.
  • the driver state detection unit 7510 may include a camera that images the driver, a biosensor that detects biometric information of the driver, a microphone that collects audio inside the vehicle, or the like.
  • the biosensor is provided, for example, on a seat surface or a steering wheel, and detects biometric information of a passenger sitting on a seat or a driver holding a steering wheel.
  • the in-vehicle information detection unit 7500 may calculate the degree of fatigue or concentration of the driver based on the detection information input from the driver state detection section 7510, or may determine whether the driver is dozing off.
  • the in-vehicle information detection unit 7500 may perform processing such as noise canceling processing on the collected audio signal.
  • the integrated control unit 7600 controls overall operations within the vehicle control system 7000 according to various programs.
  • An input section 7800 is connected to the integrated control unit 7600.
  • the input unit 7800 is realized by a device that can be inputted by the passenger, such as a touch panel, a button, a microphone, a switch, or a lever.
  • data obtained by voice recognition of voice input through a microphone may be input to the integrated control unit 7600.
  • the input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) compatible with the operation of the vehicle control system 7000.
  • the input unit 7800 may be, for example, a camera, in which case the passenger can input information using gestures. Alternatively, data obtained by detecting the movement of a wearable device worn by a passenger may be input. Furthermore, the input section 7800 may include, for example, an input control circuit that generates an input signal based on information input by a passenger or the like using the input section 7800 described above and outputs it to the integrated control unit 7600. By operating this input unit 7800, a passenger or the like inputs various data to the vehicle control system 7000 and instructs processing operations.
  • the storage unit 7690 may include a ROM (Read Only Memory) that stores various programs executed by the microcomputer, and a RAM (Random Access Memory) that stores various parameters, calculation results, sensor values, etc. Further, the storage unit 7690 may be realized by a magnetic storage device such as a HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
  • the general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in the external environment 7750.
  • the general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark).
  • the general-purpose communication I/F 7620 may connect, for example, to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network) via a base station or an access point.
  • furthermore, the general-purpose communication I/F 7620 may connect to a terminal existing near the vehicle (for example, a terminal of a driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
  • the dedicated communication I/F 7630 is a communication I/F that supports communication protocols developed for use in vehicles.
  • the dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower layer IEEE 802.11p and the upper layer IEEE 1609, DSRC (Dedicated Short Range Communications), or a cellular communication protocol.
  • the dedicated communication I/F 7630 typically carries out V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
  • the positioning section 7640 performs positioning by receiving, for example, a GNSS signal from a GNSS (Global Navigation Satellite System) satellite (for example, a GPS signal from a GPS (Global Positioning System) satellite), and generates position information including the latitude, longitude, and altitude of the vehicle. Note that the positioning section 7640 may specify the current position by exchanging signals with a wireless access point, or may acquire position information from a terminal such as a mobile phone, a PHS, or a smartphone having a positioning function.
  • the beacon receiving unit 7650 receives, for example, radio waves or electromagnetic waves transmitted from a wireless station installed on the road, and obtains information such as the current location, traffic jams, road closures, or required travel time. Note that the function of the beacon receiving unit 7650 may be included in the dedicated communication I/F 7630 described above.
  • the in-vehicle device I/F 7660 is a communication interface that mediates connections between the microcomputer 7610 and various in-vehicle devices 7760 present in the vehicle.
  • the in-vehicle device I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB).
  • further, the in-vehicle device I/F 7660 may establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (and a cable, if necessary), not shown.
  • the in-vehicle device 7760 may include, for example, at least one of a mobile device or wearable device owned by a passenger, or an information device carried into or attached to the vehicle.
  • the in-vehicle device 7760 may include, for example, a navigation device that searches for a route to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
  • the in-vehicle network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010.
  • the in-vehicle network I/F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
  • the microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate a control target value for the driving force generation device, the steering mechanism, or the braking device based on the acquired information inside and outside the vehicle, and output a control command to the drive system control unit 7100.
  • for example, the microcomputer 7610 may perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation of the vehicle, following driving based on the inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane departure warning, and the like.
  • further, the microcomputer 7610 may perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on acquired information about the surroundings of the vehicle.
  • the microcomputer 7610 may generate three-dimensional distance information between the vehicle and surrounding objects such as structures and people based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, and may create local map information including surrounding information of the current position of the vehicle. Furthermore, the microcomputer 7610 may predict dangers such as a vehicle collision, the approach of a pedestrian or the like, or entry into a closed road based on the acquired information, and generate a warning signal.
  • the warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
  • the audio and image output unit 7670 transmits an output signal of at least one of audio and images to an output device that can visually or audibly notify information to the occupants of the vehicle or to the outside of the vehicle.
  • an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as output devices.
  • Display unit 7720 may include, for example, at least one of an on-board display and a head-up display.
  • the display section 7720 may have an AR (Augmented Reality) display function.
  • the output device may be other devices other than these devices, such as headphones, a wearable device such as a glasses-type display worn by the passenger, a projector, or a lamp.
  • when the output device is a display device, the display device visually displays the results obtained by the various processes performed by the microcomputer 7610, or the information received from other control units, in various formats such as text, images, tables, and graphs. When the output device is an audio output device, it converts an audio signal consisting of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
  • control units connected via the communication network 7010 may be integrated as one control unit.
  • each control unit may be composed of a plurality of control units.
  • vehicle control system 7000 may include another control unit not shown.
  • some or all of the functions performed by one of the control units may be provided to another control unit.
  • predetermined arithmetic processing may be performed by any one of the control units.
  • sensors or devices connected to one of the control units may be connected to another control unit, and multiple control units may mutually transmit and receive detection information via the communication network 7010.
  • a computer program for realizing each function of the information processing apparatus according to the present embodiment described using FIG. 2 and the like can be implemented in any control unit or the like. It is also possible to provide a computer-readable recording medium in which such a computer program is stored.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
  • the above computer program may be distributed, for example, via a network, without using a recording medium.
  • the information processing device according to the present embodiment described using FIG. 2 and the like can be applied to the integrated control unit 7600 of the application example shown in FIG. 30.
  • at least some of the components of the information processing device described using FIG. 2 and the like may be realized in a module for the integrated control unit 7600 shown in FIG. 30.
  • the information processing device described using FIG. 2 and the like may be realized by a plurality of control units of vehicle control system 7000 shown in FIG. 30.
  • (1) An information processing device comprising: a peripheral information acquisition unit that acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors; an audio information generation unit that generates first audio information using first musical tone data for notifying the first peripheral information based on the first peripheral information, and generates second audio information using second musical tone data for notifying the second peripheral information based on the second peripheral information; and a notification control unit that outputs both the first audio information and the second audio information.
  • (2) The information processing device according to (1), wherein the one or more object detection sensors include a first object detection sensor and a second object detection sensor, and the peripheral information acquisition unit acquires the first peripheral information based on the detection result of the first object detection sensor and acquires the second peripheral information based on the detection result of the second object detection sensor.
  • (3) The information processing device according to (2), wherein the first object detection sensor is a first ranging sensor that operates according to a first method, and the second object detection sensor is a second ranging sensor that operates according to a second method different from the first method.
  • (4) The information processing device according to (2) or (3), wherein the first object detection sensor is a first ranging sensor arranged with a first direction as its detection direction, and the second object detection sensor is a second ranging sensor arranged with a second direction different from the first direction as its detection direction.
  • (5) The information processing device according to (1), wherein the one or more object detection sensors are sensors that generate image information, and the peripheral information acquisition unit acquires the first peripheral information based on information on a partial pixel region of the image information and acquires the second peripheral information based on information on another pixel region of the image information.
  • (6) The information processing device according to (1) or (5), wherein the one or more object detection sensors are sensors that generate image information, and the peripheral information acquisition unit acquires, as the first peripheral information, information regarding a first type of object detected based on the image information, and acquires, as the second peripheral information, information regarding a second type of object different from the first type detected based on the image information.
  • (7) The information processing device according to any one of (1) to (6), wherein the peripheral information acquisition unit generates integrated peripheral information based on the first peripheral information and the second peripheral information.
  • (8) The information processing device according to (7), wherein the first ranging sensor operates by an optical laser method, the second ranging sensor operates by an ultrasonic method, and the peripheral information acquisition unit generates the integrated peripheral information based on the stability of detection by the first ranging sensor and the stability of detection by the second ranging sensor.
  • (9) The information processing device according to (8), wherein, when the stability of detection by the first ranging sensor is low and the stability of detection by the second ranging sensor is high, the peripheral information acquisition unit generates the integrated peripheral information indicating that a light-transmitting member or a light-absorbing member exists in the periphery.
  • (10) The information processing device according to (9), wherein the peripheral information acquisition unit further generates, as the integrated peripheral information, information regarding at least one of the material and the type of object of the light-transmitting member or the light-absorbing member, based on hardness information acquired as the second peripheral information from the detection result of the second ranging sensor.
  • (11) The information processing device according to any one of (1) to (10), wherein the audio information generation unit determines whether or not to output the first audio information based on the first peripheral information, and restricts the output of the first audio information by the notification control unit when it determines that the first audio information is not to be output.
  • (12) The information processing device according to any one of (1) to (11), wherein the first audio information is first musical tone information constituting a predetermined music piece, and the second audio information is second musical tone information constituting the predetermined music piece.
  • (13) The information processing device according to any one of (1) to (12), wherein the audio information generation unit generates the first audio information by controlling musical tone parameters of the first musical tone data based on the first peripheral information.
  • (14) The information processing device according to (13), wherein the musical tone parameters include at least one of volume, frequency, pitch, speed, BPM, and tempo.
  • (15) The information processing device according to (13) or (14), wherein the first peripheral information includes distance information, and the audio information generation unit generates the first audio information by controlling the musical tone parameters based on the distance information.
  • (16) The information processing device according to (4), wherein the localization of the first audio information is controlled based on the detection direction of the first ranging sensor.
  • (17) The information processing device according to any one of (1) to (16), wherein the one or more object detection sensors are arranged in a device worn by a user or a device held by the user.
  • (18) An information processing system comprising: a peripheral information acquisition unit that acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors; an audio information generation unit that generates first audio information using first musical tone data for notifying the first peripheral information based on the first peripheral information, and generates second audio information using second musical tone data for notifying the second peripheral information based on the second peripheral information; and a notification control unit that outputs both the first audio information and the second audio information.

Abstract

An information processing device according to one aspect of the present technology comprises a peripheral information acquiring unit, a sound information generating unit, and a notification control unit. The peripheral information acquiring unit acquires first peripheral information and second peripheral information relating to a peripheral environment on the basis of a detection result from one or more object detecting sensors. The sound information generating unit generates first sound information using first musical sound data for notifying the first peripheral information, on the basis of the first peripheral information, and generates second sound information using second musical sound data for notifying the second peripheral information, on the basis of the second peripheral information. The notification control unit outputs both the first sound information and the second sound information.

Description

Patent Document 1: Japanese Patent Application Publication No. 2018-189494
There is a need for technology that can notify users of information about the surrounding environment with high accuracy.
In order to achieve the above object, an information processing device according to one embodiment of the present technology includes a peripheral information acquisition unit, an audio information generation unit, and a notification control unit.
The peripheral information acquisition unit acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors.
The audio information generation unit generates first audio information using first musical tone data for notifying the first peripheral information based on the first peripheral information, and generates second audio information using second musical tone data for notifying the second peripheral information based on the second peripheral information.
The notification control unit outputs both the first audio information and the second audio information.
In this information processing device, first peripheral information and second peripheral information are acquired based on the detection results of one or more object detection sensors. First audio information is then generated using the first musical tone data based on the first peripheral information, and second audio information is generated using the second musical tone data based on the second peripheral information. Both the first audio information and the second audio information are output.
This makes it possible to notify both the first peripheral information and the second peripheral information via audio, and to notify the user of information about the surrounding environment with high accuracy.
 前記1以上の物体検出センサは、第1の物体検出センサと、第2の物体検出センサとを含んでもよい。この場合、前記周辺情報取得部は、前記第1の物体検出センサの検出結果に基づいて前記第1の周辺情報を取得し、前記第2の物体検出センサの検出結果に基づいて前記第2の周辺情報を取得してもよい。 The one or more object detection sensors may include a first object detection sensor and a second object detection sensor. In this case, the peripheral information acquisition unit acquires the first peripheral information based on the detection result of the first object detection sensor, and the peripheral information acquisition unit acquires the first peripheral information based on the detection result of the second object detection sensor. Surrounding information may also be acquired.
 前記第1の物体検出センサは、第1の方式により動作する第1の測距センサであってもよい。この場合、前記第2の物体検出センサは、前記第1の方式とは異なる第2の方式により動作する第2の測距センサであってもよい。 The first object detection sensor may be a first ranging sensor that operates according to a first method. In this case, the second object detection sensor may be a second ranging sensor that operates according to a second method different from the first method.
 前記第1の物体検出センサは、第1の方向を検出方向として配置された第1の測距センサであってもよい。この場合、前記第2の物体検出センサは、前記第1の方向とは異なる第2の方向を検出方向として配置された第2の測距センサであってもよい。 The first object detection sensor may be a first distance measurement sensor arranged with the first direction as the detection direction. In this case, the second object detection sensor may be a second distance measurement sensor arranged with a detection direction in a second direction different from the first direction.
The one or more object detection sensors may be sensors that generate image information. In this case, the peripheral information acquisition unit may acquire the first peripheral information based on information from one pixel region of the image information, and acquire the second peripheral information based on information from another pixel region of the image information.
The one or more object detection sensors may be sensors that generate image information. In this case, the peripheral information acquisition unit may acquire, as the first peripheral information, information regarding a first type of object detected based on the image information, and acquire, as the second peripheral information, information regarding a second type of object, different from the first type, detected based on the image information.
The peripheral information acquisition unit may generate integrated peripheral information based on the first peripheral information and the second peripheral information.
The first ranging sensor may operate using an optical laser method. In this case, the second ranging sensor may operate using an ultrasonic method. Further, the peripheral information acquisition unit may generate the integrated peripheral information based on the stability of detection by the first ranging sensor and the stability of detection by the second ranging sensor.
The peripheral information acquisition unit may generate, when the stability of detection by the first ranging sensor is low and the stability of detection by the second ranging sensor is high, integrated peripheral information indicating that a light-transmitting member or a light-absorbing member is present in the surroundings.
The peripheral information acquisition unit may further generate, as the integrated peripheral information, information regarding at least one of the material and the object type of the light-transmitting member or the light-absorbing member, based on hardness information acquired as the second peripheral information from the detection result of the second ranging sensor.
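As a rough sketch of the determination described in the two preceding paragraphs (the stability flags and the 0-to-1 hardness scale are assumptions for illustration; the actual criteria are not specified here):

def integrate(laser_stable: bool, ultrasonic_stable: bool,
              hardness: float | None) -> str:
    """Sketch of generating integrated peripheral information.

    laser_stable / ultrasonic_stable: hypothetical per-sensor stability flags.
    hardness: hypothetical 0..1 value derived from ultrasonic echo amplitude.
    """
    if not laser_stable and ultrasonic_stable:
        # The laser is unreliable while the ultrasonic sensor detects
        # something: a light-transmitting or light-absorbing member is likely.
        if hardness is not None and hardness > 0.7:
            return "hard light-transmitting/light-absorbing member nearby"
        return "light-transmitting or light-absorbing member nearby"
    if laser_stable and ultrasonic_stable:
        return "object detected by both sensors"
    if laser_stable:
        return "object detected by the laser sensor only"
    return "no stable detection"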
The audio information generation unit may determine, based on the first peripheral information, whether or not to output the first audio information, and, when determining not to output the first audio information, may restrict the output of the first audio information by the notification control unit.
The first audio information may be first musical tone information constituting a predetermined musical piece. In this case, the second audio information may be second musical tone information constituting the predetermined musical piece.
The audio information generation unit may generate the first audio information by controlling musical tone parameters of the first musical tone data based on the first peripheral information.
The musical tone parameters may include at least one of volume, frequency, pitch, speed, BPM, and tempo.
The first peripheral information may include distance information. In this case, the audio information generation unit may generate the first audio information by controlling the musical tone parameters based on the distance information.
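For example, a mapping from distance information to musical tone parameters such as volume, BPM, and pitch might be sketched as follows (the ranges and the linear mapping are illustrative assumptions, not values given in the present disclosure):

def tone_parameters(distance_m: float, max_range_m: float = 4.0) -> dict:
    """Map a measured distance to illustrative musical tone parameters."""
    # Normalize: 0.0 = touching the object, 1.0 = at the maximum range.
    x = max(0.0, min(1.0, distance_m / max_range_m))
    return {
        "volume": 1.0 - x,                   # closer -> louder
        "bpm": 60 + (1.0 - x) * 120,         # closer -> faster (60 to 180 BPM)
        "pitch_semitones": (1.0 - x) * 12,   # closer -> up to an octave higher
    }

print(tone_parameters(1.0))  # {'volume': 0.75, 'bpm': 150.0, 'pitch_semitones': 9.0}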
The audio information generation unit may control localization of the first audio information based on a detection direction of the first ranging sensor.
An information processing method according to an embodiment of the present technology is an information processing method executed by a computer system, and includes acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors.
First audio information is generated, based on the first peripheral information, using first musical tone data for notifying the first peripheral information, and second audio information is generated, based on the second peripheral information, using second musical tone data for notifying the second peripheral information.
Both the first audio information and the second audio information are output.
A program according to one embodiment of the present technology causes a computer system to execute the following steps.
A step of acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors.
A step of generating first audio information, based on the first peripheral information, using first musical tone data for notifying the first peripheral information, and generating second audio information, based on the second peripheral information, using second musical tone data for notifying the second peripheral information.
A step of outputting both the first audio information and the second audio information.
An information processing system according to an embodiment of the present technology includes one or more object detection sensors, the peripheral information acquisition section, the audio information generation section, and the notification control section.
The information processing system may further include an information output unit that outputs information to the user and that has an audio output unit outputting the first audio information and the second audio information.
[Brief description of drawings]
FIG. 1 is a schematic diagram for explaining an overview of a surrounding information notification system according to an embodiment of the present technology.
FIG. 2 is a schematic diagram showing an example of the functional configuration of the surrounding information notification system.
FIG. 3 is a flowchart showing an example of the basic operation of the surrounding information notification system.
FIG. 4 is a schematic diagram for explaining a configuration example of a sensor section.
FIG. 5 is a schematic diagram for explaining another configuration example of the sensor section.
FIG. 6 is a schematic diagram for explaining another configuration example of the sensor section.
FIG. 7 is a schematic diagram showing an example of the configuration of one or more object detection sensors.
FIG. 8 is a schematic diagram showing another example of the configuration of one or more object detection sensors.
FIG. 9 is a block diagram for explaining notification of surrounding information according to a first embodiment.
FIG. 10 is a schematic diagram showing an example of first audio information and second audio information.
FIG. 11 is a table showing differences between a laser ranging sensor and an ultrasonic ranging sensor.
FIG. 12 is a schematic diagram for explaining an example in which an image sensor is arranged as an object detection sensor.
FIG. 13 is a block diagram showing a configuration example for realizing notification of surrounding information according to a second embodiment.
FIG. 14 is a flowchart showing an example of notification of surrounding information according to the embodiment.
FIG. 15 is a table showing an example of determination of the situation of the surrounding environment.
FIG. 16 is a schematic diagram showing a case where an obstacle exists at a position in the front direction of the user.
FIG. 17 is a schematic diagram showing a case where an obstacle exists on the ground in front of the user.
FIG. 18 is a schematic diagram showing a case where a fall danger point exists at a position in the front direction of the user.
FIG. 19 is a table showing an example of processing for notifying obstacles and fall danger points.
FIG. 20 is a schematic diagram for explaining another example of processing for notifying obstacles and fall danger points.
FIG. 21 is a schematic diagram showing detection examples of a "ground obstacle (large)" and a "ground obstacle (small)".
FIG. 22 is a block diagram showing a configuration example for realizing notification of surrounding information according to a third embodiment.
FIG. 23 is a flowchart showing an example of notification of surrounding information according to the embodiment.
FIG. 24 is a schematic diagram showing an example of an obstacle space map.
FIG. 25 is a schematic diagram showing another configuration example of the surrounding information notification system according to the embodiment.
FIG. 26 is a schematic diagram showing a configuration example of a surrounding information notification system according to a fourth embodiment.
FIG. 27 is a flowchart showing an example of notification of surrounding information according to the embodiment.
FIG. 28 is a schematic diagram showing another example of a method of outputting audio according to distance.
FIG. 29 is a block diagram showing an example of a hardware configuration of a computer (information processing device) that can be used to construct the surrounding information notification system according to the present technology.
FIG. 30 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 31 is an explanatory diagram showing an example of installation positions of a vehicle exterior information detection section and an imaging section.
Hereinafter, embodiments according to the present technology will be described with reference to the drawings.
[Summary of surrounding information notification system]
FIG. 1 is a schematic diagram for explaining an overview of a surrounding information notification system according to an embodiment of the present technology.
In this embodiment, the surrounding information notification system 1 is constructed as a system that can be used by visually impaired people, such as those who are totally blind or have low vision. That is, the user 2 of the surrounding information notification system 1 is a visually impaired person.
The surrounding information notification system 1 corresponds to an embodiment of an information processing system according to the present technology.
As shown in FIG. 1, the user 2 uses a white cane 4 to grasp the situation of the surrounding environment when moving on the ground 3. For example, the user 2 can grasp the situation on the ground 3 based on the sensation (tactile sensation) obtained through the white cane 4.
For example, it is possible to grasp objects 5 such as cars (vehicles), utility poles, signboards, etc. that are present in the direction in which the user 2 is traveling. Further, it is also possible to grasp stairs going upward (up stairs), escalators going upward (up escalator), and the like.
In addition, it is possible to grasp various objects 5 and various situations, such as stairs going downward (down stairs), escalators going downward (down escalators), the edge of a station platform (the boundary between the platform and the railroad crossing), and Braille blocks installed on the ground 3.
In FIG. 1, a car is illustrated as the object 5. In addition, any object 5 on the ground 3, such as a telephone pole, a signboard, a wall, an upward staircase, an upward escalator, a pedestrian, a bicycle, a motorbike, etc., is included in the concept of "object" according to the present technology.
Furthermore, any shape or area where there is a risk of the user 2 falling, such as downward stairs, downward escalators, the edge of a station platform, or a hole, is included in the concept of a "fall danger point" according to the present technology. In the present disclosure, a "fall danger point" is included in the concept of a "downwardly concave area."
In the environment in which the user 2 walks, there are various hazards that may lead to collisions or falls. Examples of accidents include collisions with broken signs, collisions with truck beds, and falls from platforms where the floor suddenly disappears underfoot. For visually impaired people, it is extremely important to grasp information about the surrounding environment and avoid danger.
The surrounding information notification system 1 is capable of reporting surrounding information regarding the surrounding environment to the user 2 with high precision. For example, the user 2 can avoid various dangers based on the notified surrounding information.
For example, it is possible to avoid collision with the object 5 that becomes an obstacle during movement (walking, etc.). It is also possible to avoid falling at dangerous points, such as falling down stairs or falling from a station platform onto the tracks.
Note that the surrounding information notification system 1 can also be called a danger avoidance system. Further, the reporting of surrounding information can also be referred to as notification of surrounding information.
As schematically shown in FIG. 1, the surrounding information notification system 1 includes a sensor section 6, an information output section 7, and a controller 8.
The sensor unit 6 performs sensing regarding the surrounding environment.
The information output unit 7 outputs information to the user 2.
The controller 8 controls the operation of the sensor section 6 and the information output section 7. The controller 8 acquires surrounding information regarding the surrounding environment and notifies the user 2 of the surrounding information.
[Configuration example of surrounding information notification system]
FIG. 2 is a schematic diagram showing an example of the functional configuration of the surrounding information notification system 1.
As shown in FIG. 2, in this embodiment, the sensor section 6 includes one or more object detection sensors 10.
In the present disclosure, the object detection sensor 10 includes any sensor capable of outputting information with which the object 5 can be detected, a signal containing such information, or the like. For example, any sensor that can output information (a signal) from which the presence or absence of detection of the object 5 (ON/OFF) can be determined is included.
As the object detection sensor 10, any sensor may be used that can detect, in addition to the presence or absence of detection of the object 5 (ON/OFF), various information regarding the object 5, such as the distance to the object 5, the shape of the object 5, the size of the object 5, and the material of the object 5. Note that "detection" can also be referred to as "sensing." Further, a sensor that acquires biological information such as pulse, heartbeat, body temperature, or brain waves may be used as necessary.
As the object detection sensor 10, for example, a distance measurement sensor, an image sensor (digital camera), an infrared sensor, etc. can be used.
As the ranging sensor, it is possible to use various types of sensors, such as an optical laser ranging sensor (hereinafter referred to as a laser ranging sensor), an ultrasonic ranging sensor (hereinafter referred to as an ultrasonic ranging sensor), a stereo camera, a ToF (Time of Flight) sensor, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), and a structured light ranging sensor.
As the image sensor, for example, a CMOS (Complementary Metal-Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor is used. A sensor having both the functions of an image sensor and a ranging sensor may be used. For example, a ToF sensor or the like that can detect distance information for each pixel may be used. An image sensor corresponds to one embodiment of a sensor that generates image information.
The information output unit 7 is configured by an arbitrary device for outputting information to the user 2. As shown in FIG. 2, in this embodiment, a speaker 11 and a vibration device 12 are used as an example of the information output section 7.
The speaker 11 outputs audio. By driving the speaker 11, it becomes possible to notify the user 2 of information via audio. In the example shown in FIG. 1, a headset 13 including the speaker 11 is used as the information output section 7 and is worn on the head of the user 2. Note that the headset 13 is not limited to an overhead type, and may be an in-ear type, a canal type, an open-ear type, or the like, or may be something like a head-mounted device. Further, it may be a device having hearing aid processing, such as a hearing aid or a sound collector.
In this embodiment, the speaker 11 functions as an audio output section. That is, the information output section 7 is configured to include an audio output section.
The vibration device 12 outputs vibration. For example, the vibration device 12 is placed at any position that contacts the user's 2 body. For example, any vibration motor or the like that can generate notification vibrations or the like can be used as the vibration device 12. By driving the vibration device 12, it becomes possible to present a tactile sensation to the user 2, and it becomes possible to notify information.
In the example shown in FIG. 2, the surrounding information notification system 1 further includes a communication section 14 and a storage section 15. The communication unit 14 and the storage unit 15 are connected to the controller 8 via a bus or the like.
The communication unit 14 is a module for performing network communication, short-range wireless communication, etc. with other devices. For example, a wireless LAN module such as WiFi or a communication module such as Bluetooth (registered trademark) is provided.
Note that the sensor unit 6 shown in FIG. 2 and the controller 8 may be communicably connected via wireless communication or the like. In this case, a communication section is also configured in the sensor section 6 (not shown). For example, an object detection sensor 10 including a communication section is used.
Similarly, the information output unit 7 shown in FIG. 2 and the controller 8 may be communicably connected via wireless communication or the like. In this case, the information output section 7 is also configured with a communication section (not shown). For example, the headset 13 shown in FIG. 1 includes a communication section, and is connected to the controller 8 via wireless communication.
The storage unit 15 is a storage device such as a nonvolatile memory, and for example, an HDD (Hard Disk Drive) or SSD (Solid State Drive) is used. In addition, any computer-readable non-transitory storage medium may be used.
For example, the storage unit 15 stores a control program for controlling the overall operation of the surrounding information notification system 1. The storage unit 15 also stores a history of detection results (sensing results) by the sensor unit 6, a history of acquired surrounding information, user information regarding the user 2, information such as the methods and characteristics of the sensor unit 6 and the information output unit 7, and other various information necessary for operating the surrounding information notification system 1. Note that the method of installing the control program and the like is not limited.
The controller 8 controls the operation of each block included in the surrounding information notification system 1. The controller 8 includes hardware necessary for a computer, such as a processor such as a CPU, GPU, or DSP, memory such as a ROM or RAM, and a storage device such as an HDD. The information processing method according to the present technology is executed by the CPU loading the program according to the present technology stored in the storage unit 15 or the memory into the RAM and executing it.
As the controller 8, a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array), or another device such as an ASIC (Application Specific Integrated Circuit), may be used.
In this embodiment, the processor of the controller 8 executes a program (for example, an application program) according to the present technology, whereby the peripheral information acquisition section 17, the notification information generation section 18, and the notification control section 19 are realized as functional blocks.
The information processing method according to this embodiment is executed by these functional blocks. Note that dedicated hardware such as an IC (integrated circuit) may be used as appropriate to realize each functional block.
The surrounding information acquisition unit 17 obtains surrounding information regarding the surrounding environment based on the detection results of one or more object detection sensors 10. For example, the peripheral information acquisition unit 17 can acquire the presence or absence of a peripheral object 5, the distance to the object 5, the shape of the object 5, the size of the object 5, the material of the object 5, etc. as peripheral information.
Further, in cases where a distance measurement sensor is used as the object detection sensor 10, it is possible to obtain information regarding fall danger points such as descending stairs or the edge of a station platform as surrounding information. For example, the peripheral information acquisition unit 17 can also acquire information such as the presence or absence of a fall danger point, the distance to the fall danger point, the shape of the fall danger point, the size of the fall danger point, and the like.
In addition, various peripheral information regarding the surrounding environment may be acquired.
When a distance measurement sensor is used as one or more object detection sensors 10, the distance to the object 5 (measured distance value) is detected by the distance measurement sensor. That is, surrounding information is detected by the ranging sensor.
In this way, peripheral information may be detected by the object detection sensor 10 in some cases. That is, the surrounding information may be generated by the sensor unit 6 in some cases. In this case, the surrounding information acquisition unit 17 obtains surrounding information by receiving the surrounding information from the sensor unit 6.
On the other hand, peripheral information may be generated by performing recognition processing, analysis processing, etc. based on information, signals, etc. detected by one or more object detection sensors 10. In this case, the peripheral information acquisition unit 17 acquires peripheral information by generating peripheral information based on detection results by one or more object detection sensors.
That is, in the present disclosure, acquiring peripheral information based on detection results by the one or more object detection sensors 10 includes both receiving peripheral information detected by the one or more object detection sensors 10 and generating peripheral information based on the detection results of the one or more object detection sensors 10.
In this surrounding information notification system 1, it is also possible to acquire the following detection results as surrounding information.
When the distance measurement value (distance information) of a ranging sensor whose detection direction is set to the front direction (traveling direction), upward, downward, left side, right side, etc. of the user 2 falls to or below a threshold value, it is detected that an obstacle is approaching in that detection direction (see the sketch following this list).
By performing object recognition on image information acquired by an image sensor, specific objects such as people and cars are detected.
When the distance measurement value of the distance measurement sensor directed downward and forward reaches a threshold value or more, it is detected that a step or the like is approaching.
When the surrounding information notification system 1 is configured as a vehicle-mounted sensor, it detects pedestrians in front and obstacles and people behind when entering the garage.
When the present surrounding information notification system 1 is mounted on a drone, buildings, trees, cliffs, etc. are detected using an image sensor or a ranging sensor.
Here, the threshold value may be automatically or dynamically set by the surrounding information notification system 1 side, or may be set by the user 2 as appropriate.
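A minimal sketch of the threshold checks listed above, assuming hypothetical channel names and threshold values:

def detect_events(readings_m: dict, obstacle_threshold_m: float = 1.5,
                  drop_threshold_m: float = 2.0) -> list:
    """readings_m: hypothetical map of detection direction -> distance (m)."""
    events = []
    for direction in ("front", "up", "down", "left", "right"):
        d = readings_m.get(direction)
        if d is not None and d <= obstacle_threshold_m:
            events.append("obstacle approaching: " + direction)
    # For the downward-forward channel, a reading *above* the threshold
    # suggests that a step or drop-off is approaching.
    d = readings_m.get("down_front")
    if d is not None and d >= drop_threshold_m:
        events.append("step or drop-off approaching")
    return events

print(detect_events({"front": 1.2, "down_front": 2.4}))
# ['obstacle approaching: front', 'step or drop-off approaching']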
As object recognition technology for the image information acquired by an image sensor, any image recognition technology may be used, such as image size conversion, character recognition, shape recognition, matching processing using a model image of the object, edge detection, and projective transformation.
Furthermore, any machine learning algorithm using, for example, a DNN (Deep Neural Network), an RNN (Recurrent Neural Network), a CNN (Convolutional Neural Network), or the like may be used. For example, by using AI (artificial intelligence) that performs deep learning, it becomes possible to generate and output peripheral information with high accuracy.
For example, by performing semantic segmentation on image information, it is also possible to determine the type of object for each pixel in the image.
Note that machine learning algorithms may be applied to any of the processing in the present disclosure.
As the material information, it is possible to acquire information related to hardness, such as amplitude information of reflected ultrasonic waves. Of course, other information regarding the material may also be acquired.
The notification information generation unit 18 generates notification information for notifying the user 2 of surrounding information. The notification information includes any information for realizing output of peripheral information by the speaker 11 and the vibration device 12 arranged as the information output section 7.
For example, the notification information includes audio information to be output from the speaker 11 and output control information for specifying how to output the audio information.
As the audio information, various forms of audio may be output, such as a message like "There is an obstacle 〇〇 m ahead," musical tone information constituting a predetermined musical piece (a melody, an accompaniment, etc.), or a notification sound such as a repeated beep.
As the output control information, arbitrary information defining volume, pitch, playback speed, BPM (Beats Per Minute), sound localization (localization direction), etc. may be generated. For example, by controlling the localization of sound, it is also possible to provide information using stereophonic sound.
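For illustration, a pairing of audio information with output control information might be represented as follows (a sketch; the field set is an assumption, not a data format defined in the present disclosure):

from dataclasses import dataclass, field

@dataclass
class OutputControl:
    volume: float = 1.0           # 0..1
    pitch_semitones: float = 0.0  # pitch shift
    playback_speed: float = 1.0
    bpm: float = 120.0
    azimuth_deg: float = 0.0      # sound localization (0 = straight ahead)

@dataclass
class NotificationInfo:
    audio: bytes                  # audio information (e.g. PCM samples)
    control: OutputControl = field(default_factory=OutputControl)

# Example: an alarm localized 30 degrees to the right, at high volume.
info = NotificationInfo(audio=b"...", control=OutputControl(volume=0.9, azimuth_deg=30.0))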
Additionally, the notification information generation unit 18 generates vibration information for vibrating the vibration device 12 as notification information. For example, vibration information for realizing various vibration patterns in which vibration strength (amplitude), frequency, tempo, etc. are specified is generated as notification information.
The notification control section 19 controls the information output section 7 based on the notification information. The speaker 11 is driven by the notification control unit 19, and audio information generated as notification information is output. Further, the vibration device 12 is driven, and a vibration pattern corresponding to the vibration information generated as notification information is output.
In this embodiment, a device including the controller 8 corresponds to an embodiment of an information processing device according to the present technology. For example, when the sensor section 6 and the controller 8 are integrally configured, one embodiment of the information processing apparatus according to the present technology is realized in a form that includes one or more object detection sensors 10.
The information output section 7 and the controller 8 may be integrally configured. For example, the controller 8 may be configured in the headset 13 worn on the user's 2 head. In this case, an embodiment of the information processing apparatus according to the present technology is implemented in a form that includes a device for notification such as the speaker 11.
The sensor section 6, the information output section 7, and the controller 8 may be integrally configured. In this case, one embodiment of the information processing apparatus according to the present technology is realized in a form including one or more object detection sensors 10 and a notification device such as a speaker 11.
In this way, it is possible to adopt various forms as the peripheral information notification system 1.
[Example of basic operation of surrounding information notification system]
FIG. 3 is a flowchart showing an example of the basic operation of the surrounding information notification system 1.
The surrounding information acquisition section 17 obtains surrounding information based on the detection result by the sensor section 6 (step 101).
The notification information generation unit 18 generates notification information for reporting the surrounding information (step 102). In step 102, notification information corresponding to the surrounding information to be reported to the user 2 is generated.
For example, assume that in step 101, surrounding information indicating that the edge of a station platform, which is a fall danger point, exists in the immediate vicinity of the user 2 is acquired. In this case, for example, assuming that the danger level (degree of danger) is very high, audio information corresponding to a high-pitched alarm sound (warning sound), and output control information for outputting the alarm sound from the headset 13 at a relatively high volume, are generated as notification information.
Further, vibration information such that a powerful vibration pattern with a large amplitude and a suppressed frequency is output from the vibration device 12 is generated as notification information. In addition, it is possible to generate various types of notification information depending on surrounding information.
The information output unit 7 is controlled by the notification control unit 19 based on the notification information, and surrounding information is notified to the user 2 (step 103). In this embodiment, the speaker 11 and the vibration device 12 are controlled by the notification control section 19.
The user 2 can grasp the situation of the surrounding environment through sound and vibration (tactile sense), and can move while avoiding danger.
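The steps 101 to 103 described above could be sketched as a simple loop (the acquire, generate, and output functions are hypothetical stand-ins for the processing of each step):

import time

def run(acquire, generate, output, period_s: float = 0.1) -> None:
    """Sketch of the basic operation of FIG. 3, repeated periodically."""
    while True:
        peripheral = acquire()               # step 101: acquire peripheral information
        notification = generate(peripheral)  # step 102: generate notification information
        output(notification)                 # step 103: report to the user
        time.sleep(period_s)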
[Configuration example of sensor section 6]
FIGS. 4 to 6 are schematic diagrams for explaining configuration examples of the sensor section 6.
In the example shown in FIG. 4A, the sensor section 6 is configured at the user's 2 waist position (belt position), and at a position on the front side of the user 2.
In the example shown in FIG. 4B, the sensor unit 6 is located on the head of the user 2, on the front side of the user 2.
For example, the sensor main body 21 in which one or more object detection sensors 10 are arranged is configured as a wearable device that can be worn by the user 2. The sensor section 6 is realized by the wearable device. In this way, one or more object detection sensors 10 may be placed in a wearable device worn by the user 2.
As shown in FIGS. 4A and 4B, the user 2 can realize the sensor unit 6 at various positions by wearing the sensor main body 21 configured as a wearable device. Various forms may be adopted, such as a wristband type worn on the wrist, a bracelet type worn on the upper arm, a headband type (head-mounted type) worn on the head, a neckband type worn around the neck, a torso type worn on the chest, a belt type worn on the waist, and an anklet type worn on the ankle.
In addition, wearable devices in the form of glasses, rings, necklaces, earrings, or piercings, a form that can be attached to the toe of a shoe, a form that can be attached to any position with a clip, etc. may be adopted.
In the example shown in FIG. 5, the sensor section 6 is realized in a form that can be held by the user 2. For example, a sensor main body 21 in which one or more object detection sensors 10 are arranged is configured as a device that can be held by the user 2.
In the example shown in FIG. 5A, the sensor main body 21 is held by the right hand holding the white cane 4. In the example shown in FIG. 5B, the sensor main body 21 is held by the left hand on the opposite side from the right hand holding the white cane 4. In this way, one or more object detection sensors 10 may be placed on a device held by the user 2.
In the example shown in FIG. 6, the sensor main body 21 (sensor section 6) is mounted on another device held by the user 2. In the example shown in FIG. 6A, a sensor body 21 on which one or more object detection sensors 10 are arranged is mounted on a carrier 22 that is pulled and moved by the user 2.
In the example shown in FIG. 6B, the sensor body 21 is mounted on a handcart 23 that the user 2 pushes to move. In this way, the sensor unit 6 may be realized by mounting the sensor main body 21 on another device held by the user 2.
Of course, the sensor body 21 (sensor section 6) may be mounted on the white cane 4 held by the user 2. A configuration in which the sensor main body 21 (sensor unit 6) is mounted on another device held by the user 2 is included in a configuration in which one or more object detection sensors 10 are placed in a device held by the user 2.
[Example of configuration of one or more object detection sensors]
Various variations can also be considered for the configuration of the one or more object detection sensors 10.
For example, various configurations can be realized by arbitrarily selecting and setting the number of object detection sensors 10, their type (method, etc.), their attitude (detection direction, etc.), their sensing parameters (frame rate, gain, laser intensity, etc.), and the like.
FIG. 7 is a schematic diagram showing an example of the configuration of one or more object detection sensors 10.
In the example shown in FIG. 7, one or more object detection sensors 10 are arranged in a sensor main body 21 that can be held by the user 2. By changing the orientation of the sensor body 21, the user 2 can scan and sense the surrounding environment.
In the example shown in FIG. 7, two ranging sensors with different methods are used as the one or more object detection sensors 10. Specifically, an optical laser ranging sensor (laser ranging sensor) 25 and an ultrasonic ranging sensor (ultrasonic ranging sensor) 26 are used. The two ranging sensors 25 and 26 are arranged on the sensor main body 21 so that their detection directions are the same.
Therefore, sensing is performed by the laser ranging sensor 25 and the ultrasonic ranging sensor 26, with the direction in which the sensor body 21 is directed by the user 2 as the detection direction. The detection results by the two types of ranging sensors 25 and 26 make it possible to acquire highly accurate surrounding information and to notify the user 2 of the same. Of course, the method of the ranging sensor employed is not limited and may be set arbitrarily.
FIG. 8 is a schematic diagram showing another example of the configuration of one or more object detection sensors 10.
In the example shown in FIG. 8, one or more object detection sensors 10 are arranged, for example, in a wearable device (not shown) that can be worn on the hand of the user 2. Alternatively, one or more object detection sensors 10 are arranged near the portion of the white cane 4 that the user 2 holds in his/her hand.
In the example shown in FIG. 8, two ranging sensors 27 and 28 having different detection directions are used (the detection directions are denoted by the symbols of the ranging sensors 27 and 28).
The example shown in FIG. 8 is an embodiment of a first ranging sensor arranged with a first direction as its detection direction and a second ranging sensor arranged with a second direction, different from the first direction, as its detection direction. Either of the ranging sensors 27 and 28 may be used as the first ranging sensor.
In this embodiment, the ranging sensor 27 is the first ranging sensor. The ranging sensor 27 is arranged so that the front direction of the user 2 is its detection direction. Therefore, the first direction is the front direction of the user 2. Hereinafter, the ranging sensor 27 will be referred to, using the same reference numeral, as the front-side ranging sensor 27. In the example shown in FIG. 8, the front-side ranging sensor 27 is arranged at a height H from the ground 3 so that the direction parallel to the ground 3 is its detection direction. Note that the front direction of the user 2 can also be said to be the traveling direction of the user 2.
The ranging sensor 28 serves as the second ranging sensor, and is arranged so that the direction toward a measurement point P set on the ground 3 is its detection direction. Therefore, the second direction is the direction from the position of the hand of the user 2 (the position at the height H) toward the measurement point P. Hereinafter, the ranging sensor 28 will be referred to, using the same reference numeral, as the ground-side ranging sensor 28.
The measurement point P is set at a position on the ground 3 that is a predetermined distance D away from the user 2 along the front direction. The size of the distance D is determined, for example, depending on at what distance an object 5 on the ground 3 or a fall danger point is desired to be detected.
For example, if it is desired to quickly detect an object 5 on the ground 3 or a fall danger point at a relatively far position, the distance D is set to be relatively long. If it is desired to detect the object 5 on the ground 3 or the fall danger point when it is in a relatively close position, the distance D is set to be relatively short.
For example, the distance D to the measurement point P may be set taking into consideration the moving speed of the user 2 and the like. Of course, the distance D to the measurement point P may be arbitrarily set by the user 2, for example, based on other viewpoints.
For example, the intersection angle θ between the detection direction of the ground-side ranging sensor 28 and the ground 3 can be calculated from the desired distance D using the following triangulation formula.
Intersection angle θ = arctan(H/D) ... (1)
For example, it is assumed that the height H from the ground 3 of the front-side distance measurement sensor 27 and the ground-side distance measurement sensor 28 is 0.75 m. In this case, if it is desired to set the measurement point P so that the distance D=1.5 m, the intersection angle θ=26.565° according to equation (1).
That is, by setting the detection direction to be tilted downward by 26.565° with respect to the front direction, it is possible to arrange the ground-side ranging sensor 28 toward the measurement point P at the distance D = 1.5 m.
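Equation (1) and the numerical example above can be checked directly, for example:

import math

def intersection_angle_deg(height_m: float, distance_m: float) -> float:
    """Equation (1): angle between the ground-side detection direction and
    the ground, from the sensor height H and the measurement distance D."""
    return math.degrees(math.atan(height_m / distance_m))

def measurement_distance_m(height_m: float, angle_deg: float) -> float:
    """Inverse of equation (1): distance D for a given intersection angle."""
    return height_m / math.tan(math.radians(angle_deg))

print(round(intersection_angle_deg(0.75, 1.5), 3))     # 26.565
print(round(measurement_distance_m(0.75, 26.565), 3))  # approx. 1.5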
By arranging the two ranging sensors 27 and 28 with the front direction and the ground direction (the direction toward the measurement point P on the ground 3) as their detection directions in this way, it becomes possible to obtain detection results for two channels: the front direction and the ground direction. This makes it possible to acquire highly accurate surrounding information and to report it to the user 2.
The detection directions of the two distance measuring sensors 27 and 28, that is, the first direction and the second direction, are not limited and may be set arbitrarily.
For example, any combination of directions may be employed, such as (front direction, back direction), (front direction, upper direction), (front direction, left direction), (front direction, right direction). For example, the distance measuring sensor may be arranged with the direction in which the situation is desired to be known as the detection direction.
For example, when it is desired to know the situation of the front and of a wall on the left side, the two ranging sensors are arranged so that their detection directions are (front direction, left direction). The measurement point P may be set, for example, at a position on the left wall a predetermined distance D away from the user 2 along the front direction.
Further, an arbitrary number of three or more distance measuring sensors may be arranged so that their detection directions are different from each other. For example, three ranging sensors may be arranged so as to provide three channels in the front direction and in the left and right directions perpendicular to the front direction (directions of both left and right walls).
Any variation may be adopted as the number of ranging sensors and the detection direction of each ranging sensor.
Furthermore, for example, the user 2 may be able to set the detection directions of the ranging sensors 27 and 28 in an application. On the other hand, since it is generally assumed to be difficult for visually impaired people to configure various settings on a GUI (Graphical User Interface), the detection directions may, for example, be set and changed as appropriate by recognizing speech from the user 2.
[Variations of notifications to users]
There are also various variations regarding the notification of surrounding information to the user 2 in the surrounding information notification system 1. Hereinafter, variations in notification to the user 2 will be described as first to fourth embodiments.
(First embodiment)
FIG. 9 is a block diagram for explaining notification of surrounding information according to the first embodiment.
In the example shown in FIG. 9, the peripheral information acquisition unit 17 acquires first peripheral information 30 and second peripheral information 31. That is, at least two different types of peripheral information are acquired.
For example, assume that the sensor section 6 having the configuration illustrated in FIG. 7 is adopted. That is, it is assumed that a laser ranging sensor 25 and an ultrasonic ranging sensor 26 are arranged. In this case, the surrounding information obtained based on the detection result of the laser ranging sensor 25 is obtained as the first surrounding information 30. Additionally, surrounding information acquired based on the detection result of the ultrasonic ranging sensor 26 is acquired as second surrounding information 31.
For example, the presence or absence of detection of the object 5 (ON/OFF), the distance to the object 5, and the material (hardness) of the object 5 output from the laser ranging sensor 25 are acquired as the first peripheral information 30. Similarly, the presence or absence of detection of the object 5 (ON/OFF), the distance to the object 5, and the material (hardness) of the object 5 output from the ultrasonic ranging sensor 26 are acquired as the second peripheral information 31.
Note that the laser ranging sensor 25 can detect the distance to the object 5 and the like without being affected by the hardness of the object 5. Conversely, it is often difficult for the laser ranging sensor 25 to detect the material (hardness) of the object 5. In this case, for example, the laser ranging sensor 25 may output information indicating that detection is not possible as the information on the material (hardness) of the object 5.
In the example shown in FIG. 9, the notification information generation unit 18 includes audio signal processing units 32 and 33 and an audio synthesis processing unit 34. The storage unit 15 also stores first musical tone data for notifying the first peripheral information 30 and second musical tone data for notifying the second peripheral information 31.
The musical tone data includes any data that constitutes a musical tone. For example, the data includes data in which a predetermined scale or melody is defined, audio data of a specific musical instrument, and the like.
For example, the musical tone data of the main melody of a predetermined piece, the musical tone data of its sub melody, or the like may be used individually. Alternatively, audio data of a specific instrument playing a given piece (a melody instrument such as a piano, violin, or vocals; a bass instrument such as a bass guitar, contrabass, or bass drum; or a percussion instrument such as a glockenspiel, drums, bells, or chimes) may be used.
Of course, data such as an electronic sound reproduced discontinuously and periodically, like a "beep-beep-beep", may also be used.
Based on the first peripheral information 30, the audio signal processing unit 32 generates first audio information using the first musical tone data. The audio signal processing unit 33 generates second audio information using the second musical tone data based on the second peripheral information 31. The first audio information and the second audio information are generated as notification information.
FIG. 10 is a schematic diagram showing an example of first audio information and second audio information.
Typically, the first musical tone data and the second musical tone data are each selected so that, when output together to the user 2, the combination can be listened to without musical discomfort.
For example, musical tone data of a certain part of a predetermined musical piece is set as the first musical tone data, and musical tone data of another part of the same piece is set as the second musical tone data.
Any combination of the first musical tone data and the second musical tone data may be adopted, such as (main melody, sub melody), (melody, accompaniment), (melody of a high-pitched instrument, melody of a low-pitched instrument), or (melody of a melody instrument, sound of a percussion instrument).
The audio signal processing unit 32 generates first audio information by controlling musical tone parameters based on the first peripheral information 30. Examples of musical sound parameters include volume, frequency, pitch, speed of song reproduction, BPM, and tempo.
For example, when the first peripheral information 30 is distance information, musical tone parameter control such as increasing the volume, raising the pitch, or quickening the tempo as the distance decreases is executed to generate the first audio information. In this way, the first audio information may be generated by controlling the musical tone parameters based on the distance information.
Alternatively, audio data of a specific musical instrument may be generated as the first audio information in response to the detection of the object 5.
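As a non-limiting illustration of this kind of musical tone parameter control, the following Python sketch maps a distance value to a volume and a tempo. The working range and the parameter ranges are assumptions introduced only for this example, not values specified in the present disclosure.

    # Sketch: map a distance measurement to musical tone parameters.
    # All numeric ranges below are illustrative assumptions.
    def tone_parameters(distance_m, d_near=0.3, d_far=5.0,
                        vol_range=(0.2, 1.0), bpm_range=(60, 180)):
        """Return (volume, bpm); both rise as the object gets closer."""
        clamped = min(max(distance_m, d_near), d_far)
        closeness = (d_far - clamped) / (d_far - d_near)  # 0.0 far .. 1.0 near
        volume = vol_range[0] + closeness * (vol_range[1] - vol_range[0])
        bpm = bpm_range[0] + closeness * (bpm_range[1] - bpm_range[0])
        return volume, bpm

    print(tone_parameters(4.5))  # far object  -> quiet, slow
    print(tone_parameters(0.5))  # near object -> loud, fast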
Similarly, the audio signal processing unit 33 generates second audio information by controlling musical tone parameters based on the second peripheral information 31.
In this embodiment, first musical tone information constituting a predetermined music piece is generated as the first audio information. As the second audio information, second musical tone information constituting the same song is generated.
The audio synthesis processing unit 34 synthesizes the first audio information and the second audio information to generate one piece of audio information (synthesized audio information). That is, the audio synthesis processing unit 34 generates the synthesized audio information so that both the first audio information and the second audio information illustrated in FIG. 10 are output. Any mixing technique or the like for synthesizing audio information (audio data) may be used.
In the example shown in FIG. 9, an audio output unit 35 is configured within the notification control unit 19. The audio output unit 35 controls the speaker 11 to output the synthesized audio information in which the first audio information and the second audio information are synthesized. As a result, both the first audio information and the second audio information are output from the speaker 11.
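As noted above, any mixing technique may be used. One minimal sketch, assuming two sample buffers of equal length and a common sampling rate (both assumptions made only for this example), is:

    # Sketch: sum two equal-length sample buffers and rescale the result
    # so the synthesized audio information stays within [-1.0, 1.0].
    def mix(first_audio, second_audio):
        mixed = [a + b for a, b in zip(first_audio, second_audio)]
        peak = max(1.0, max(abs(s) for s in mixed))  # guard against clipping
        return [s / peak for s in mixed]

    melody = [0.0, 0.5, 0.9, 0.5]         # stand-in for first audio information
    accompaniment = [0.3, 0.3, 0.3, 0.3]  # stand-in for second audio information
    print(mix(melody, accompaniment))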
The user 2 is able to grasp the first peripheral information 30 through the first audio information and the second peripheral information 31 through the second audio information. That is, the user 2 can grasp the first peripheral information 30 and the second peripheral information 31 at the same time via audio. As a result, it becomes possible to notify the user 2 of information about the surrounding environment with high accuracy.
For example, a melody of a certain musical instrument is output as the first audio information. The melody of another musical instrument is output as the second audio information. When the first audio information and the second audio information are combined and output together, they are played back to the user 2 as one melody.
Further, a note of a certain pitch may be output as the first audio information and a note of a different pitch as the second audio information. When the first audio information and the second audio information are synthesized and output together, they form a chord that is played back to the user 2.
As a method for outputting both the first audio information and the second audio information, various variations as described above can be considered, and can be adopted as the notification method according to the present embodiment.
For example, it is assumed that the user 2 moves while scanning the surrounding area using the sensor section 6 having the configuration shown in FIG. 7, in which the laser ranging sensor 25 and the ultrasonic ranging sensor 26 are arranged. Of course, the sensor section 6 may be mounted on the white cane 4.
The peripheral information acquisition unit 17 acquires the first peripheral information 30 based on the detection result of the laser ranging sensor 25, and acquires the second peripheral information 31 based on the detection result of the ultrasonic ranging sensor 26. It is assumed that the detection direction of the laser ranging sensor 25 and the detection direction of the ultrasonic ranging sensor 26 are set to the same direction.
FIG. 11 is a table showing differences between the laser distance measurement sensor 25 and the ultrasonic distance measurement sensor 26.
As schematically shown in FIG. 7, ultrasonic waves spread out, so the detection range of the ultrasonic ranging sensor 26 is wide. On the other hand, a laser has high directivity, so the detection range of the laser ranging sensor 25 is narrow. Both sensors return a distance measurement value when an object 5 exists within their detection range, so the laser ranging sensor 25 is easier to aim, making it possible to know with high accuracy whether there is an object 5 at the point where the hand or white cane 4 is pointed.
Conversely, because the ultrasonic ranging sensor 26 has a wide detection range, only the approximate direction of a detected object 5 can be known, but detection over a wide range becomes possible. By analogy with vision, the laser ranging sensor 25 is close to the fovea, with its narrow field of view, and the ultrasonic ranging sensor 26 is close to the peripheral visual field, with its wide field of view.
For example, if the laser of the laser ranging sensor 25 crosses the boundary of a thin object 5 when the hand is moved slightly, the width of the object 5 can be known in conjunction with the hand movement. On the other hand, with the wide detection range of the ultrasonic ranging sensor 26, the object 5 does not leave the detection range unless the hand is moved widely, so it is difficult for the user 2 to know detailed information such as the width and height of the object 5.
In view of these points, the detection result of the laser ranging sensor 25 is assigned, for example, the musical tones (musical tone data) of an instrument suited to a main melody with many scale tones (for example, a piano, violin, or vocals), which can convey fine changes in the detection result.
Since the ultrasonic ranging sensor 26 has a wide detection range, it notifies the detection of some object 5 relatively frequently compared with the laser ranging sensor 25, which has a narrow detection range. For this reason, it is assigned, for example, the musical tones of an instrument used for accompaniment or bass (for example, a bass guitar, contrabass, or bass drum), which is suited to continuous, frequent notification.
If the detection result is not a continuous value such as a distance measurement value but a binary value such as the presence or absence of detection of the object 5 (ON/OFF), the musical tones of a percussion instrument such as a glockenspiel, drums, bells, or chimes may be assigned.
As shown in FIG. 11, another difference between the laser ranging sensor 25 and the ultrasonic ranging sensor 26 is that the laser ranging sensor 25 has difficulty detecting objects with low light reflectance, such as glass or black materials, while the ultrasonic ranging sensor 26 is not suited to detecting soft materials.
As described above, both types of distance measuring sensors have advantages and disadvantages. In the surrounding information notification system 1 according to the present embodiment, it is possible to compensate for the shortcomings of both distance measuring sensors.
For example, even if there is a wall or door made of glass that cannot be detected by a laser, the ultrasonic wave returns a measured distance value, so it is possible to notify the user 2 that there is some object 5 in the vicinity. In this way, it is possible to increase the types of objects 5 that can be detected. It is also possible to improve environmental resistance.
For example, the laser ranging sensor 25 is used as the main ranging sensor and is assigned the musical tone data of a high-pitched instrument, while the secondary ultrasonic ranging sensor 26 is assigned the musical tone data of a bass instrument. In addition, the musical tone data are assigned so that the user 2 can clearly distinguish the sounds corresponding to the two ranging sensors. This makes it possible to notify the user 2 of information about the surrounding environment with very high accuracy.
For example, if there is a glass wall and the main melody is output according to the distance measurement value of the laser ranging sensor 25, the output may become choppy or inaudible. It is difficult for the user 2 to judge whether this is due to the influence of fine objects 5 such as trees or snow, or due to a light-transmitting or light-absorbing material.
In this embodiment, an accompaniment sound corresponding to the distance measurement value of the ultrasonic ranging sensor 26 can be heard at the same time. As a result, even if the main melody becomes choppy or inaudible, the user can easily judge that this is due to the material as long as the accompaniment sound remains stable. In this way, whether a material is difficult for the laser ranging sensor 25 to detect can be judged by simultaneously listening to the auxiliary sound (accompaniment) reflecting the distance measurement value of the ultrasonic ranging sensor 26.
Further, when it can be determined that the object 5 is a light-transmitting or light-absorbing material because the output of the distance measurement value of the laser ranging sensor 25 (the main melody) is choppy while the output of the distance measurement value of the ultrasonic ranging sensor 26 (the accompaniment sound) is stable, the hardness information obtained from the ultrasonic ranging sensor 26 can additionally be utilized. For example, the assumption holds that an object 5 that is light-transmitting or light-absorbing and has high hardness is often a high-risk obstacle such as a glass wall or a black wall. Under such an assumption, an object that is light-transmitting or light-absorbing and has low hardness can be estimated to be not glass but a soft light-absorbing material such as black clothing. Furthermore, if it is moving, it can be estimated to most likely be a person wearing black clothing. By performing such estimation with machine learning or the like, more advanced material estimation combining the laser ranging sensor 25 and the ultrasonic ranging sensor 26 can be realized. It also becomes possible to estimate the type of the object 5, such as whether it is a wall of glass or the like, or a person wearing clothing made of a light-absorbing material.
In this way, the peripheral information acquisition unit 17 can further generate peripheral information based on the first peripheral information 30 and the second peripheral information 31. That is, it is possible to integrate the first peripheral information and the second peripheral information to generate peripheral information regarding the surrounding environment (hereinafter referred to as integrated peripheral information).
It is also possible for the peripheral information acquisition unit 17 to generate the integrated peripheral information based on the detection stability of the laser ranging sensor 25 and the detection stability of the ultrasonic ranging sensor 26. Note that in the present disclosure, the detection stability of a sensor is included in the detection result of the sensor.
As explained above, when the detection stability of the laser ranging sensor 25 is low and the detection stability of the ultrasonic ranging sensor 26 is high, it is possible to generate integrated peripheral information indicating that a light-transmitting member or a light-absorbing member exists in the vicinity.
Furthermore, based on the hardness information acquired as the second peripheral information 31 from the detection result of the ultrasonic ranging sensor 26, it is also possible to generate, as integrated peripheral information, detailed information on at least one of the material and the object type of the light-transmitting member or the light-absorbing member.
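As a rough illustration of this integrated estimation, a rule-based sketch is shown below. The stability test, the hardness scale, and the output labels are assumptions made for the example; as mentioned above, the disclosure also contemplates performing this estimation with machine learning.

    # Sketch: rule-based integration of laser stability, ultrasonic stability,
    # hardness, and motion. Thresholds and labels are illustrative assumptions.
    def estimate_object(laser_stable, ultrasonic_stable, hardness, moving):
        if not laser_stable and ultrasonic_stable:
            # Laser output is choppy while ultrasound is steady:
            # likely a light-transmitting or light-absorbing member.
            if hardness >= 0.7:  # assumed hardness scale 0.0-1.0
                return "glass or black wall (high-risk obstacle)"
            if moving:
                return "person wearing black (light-absorbing) clothing"
            return "soft light-absorbing material"
        return "ordinary object (no integrated estimate needed)"

    print(estimate_object(laser_stable=False, ultrasonic_stable=True,
                          hardness=0.9, moving=False))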
For example, the shape of an object can be grasped in detail by converting the detection result of the laser ranging sensor 25 into minute changes in tone linked to the movement of the body or a movable part. At the same time, by converting the wide-range detection result of the ultrasonic ranging sensor 26 into audio such as an accompaniment, the presence of obstacles likely to block the route can be sensed as early as possible. A division of roles analogous to that between the fovea and the peripheral visual field in vision can thus be realized.
It is also possible to realize a division of roles in which the ranging sensors compensate for each other's shortcomings. For example, the laser ranging sensor 25 is normally used as the main sensor, but in places with glass or black materials of low light reflectance, the information from the ultrasonic ranging sensor 26 can be relied upon instead.
The specific types of the first peripheral information 30 and the second peripheral information 31 shown in FIG. 9 are not limited. For example, as illustrated in FIG. 8, it is assumed that two distance measuring sensors are set so that their detection directions are different from each other.
In this case, surrounding information obtained based on the detection result of a ranging sensor arranged with the first direction as the detection direction may be obtained as the first surrounding information 30. Further, surrounding information obtained based on the detection result of a ranging sensor arranged with the second direction as the detection direction may be obtained as the second surrounding information 31.
In the example shown in FIG. 8, the first surrounding information 30 is acquired, for example, based on the detection result of the front-side ranging sensor 27 arranged with the front direction as its detection direction. The second surrounding information 31 may be acquired based on the detection result of the ground-side ranging sensor 28 arranged with the direction toward the measurement point P as its detection direction. This allows the user 2 to simultaneously grasp information about the environment on the front side and on the ground side via audio. Note that the first surrounding information 30 may instead be acquired based on the detection result of the ground-side ranging sensor 28, and the second surrounding information 31 based on the detection result of the front-side ranging sensor 27.
For example, the distance measurement value of the front side distance measurement sensor 27 is acquired as the first peripheral information 30. Then, musical tone parameters are controlled according to the measured distance value, and a predetermined melody is output as first audio information.
On the other hand, as second surrounding information, the presence or absence of a fall danger point is acquired based on the distance measurement value of the ground-side distance measurement sensor 28. For example, when the distance measurement value of the ground-side distance measurement sensor 28 becomes large, it is determined that there is a fall danger point. In response to detection of a fall danger point, second audio information is generated and output from audio data of a percussion instrument or the like.
As a result, the melody, which is the first audio information, and the audio of a percussion instrument, which is the second audio information, are simultaneously played back to the user 2. The user 2 can simultaneously grasp the proximity of the object 5 in the front direction and the presence or absence of a fall danger point on the ground 3.
Ranging sensors of the same method may also be arranged so that their detection directions differ from each other. For example, a plurality of laser ranging sensors are arranged with different detection directions such as front, back, left, right, up, and down.
In this case, for example, the main melody is assigned to the laser ranging sensor facing the front direction, and an accompaniment or the like is assigned to the laser ranging sensors facing other directions. It is also possible to assign a different localization to each laser ranging sensor using stereophonic sound. That is, the localization of the output audio information may be controlled based on the detection direction of each laser ranging sensor.
For example, the localization of the first audio information is controlled based on the detection direction of the first ranging sensor among the plurality of ranging sensors. The localization of the second audio information is controlled based on the detection direction of the second ranging sensor among the plurality of ranging sensors. Such processing is also possible.
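As one way of realizing such direction-dependent localization, the sketch below converts a detection direction into constant-power stereo gains. The azimuth convention (0 degrees = front, positive = right) is an assumption, and a full stereophonic (3D) renderer would replace this simple pan in practice.

    import math

    # Sketch: derive stereo gains from a sensor's detection direction.
    # Azimuth convention (0 = front, +90 = right, in degrees) is assumed.
    def stereo_gains(azimuth_deg):
        pan = max(-1.0, min(1.0, azimuth_deg / 90.0))  # -1 = left, +1 = right
        angle = (pan + 1.0) * math.pi / 4.0            # 0 .. pi/2
        return math.cos(angle), math.sin(angle)        # (left, right) gains

    print(stereo_gains(0))    # front-facing sensor: centered
    print(stereo_gains(-90))  # left-facing sensor: left channel only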
FIG. 12 is a schematic diagram for explaining an example in which an image sensor is arranged as the object detection sensor 10.
Image information 38 is generated by the image sensor and output to the surrounding information acquisition unit 17 as a detection result. The surrounding information acquisition unit 17 performs object recognition processing on the image information 38. As a result, the presence or absence of detection of the object 5 (ON/OFF), the type of the object 5, the distance to the object 5, the material (hardness) of the object 5, and the like can be acquired as peripheral information for the area included in the angle of view.
As shown in FIG. 12A, the first peripheral information 30 may be acquired based on the information of the upper half pixel region 38a of the image information 38. Further, the second peripheral information 31 may be acquired based on the information of the lower half pixel region 38b of the image information 38.
In this way, the first peripheral information 30 may be acquired based on information on a part of the pixel area of the image information 38. Further, the second peripheral information may be acquired based on information on other pixel areas in the image information 38. This allows the user 2 to simultaneously grasp information on the upper environment and information on the lower environment via audio.
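The region split itself can be expressed compactly. The sketch below assumes the image information is given as a list of pixel rows, which is an assumption made only for this example.

    # Sketch: split image information into upper and lower pixel regions,
    # each feeding one channel of peripheral information.
    def split_regions(image_rows):
        half = len(image_rows) // 2
        upper = image_rows[:half]  # source of the first peripheral information
        lower = image_rows[half:]  # source of the second peripheral information
        return upper, lower

    image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 dummy image
    upper, lower = split_regions(image)
    print(upper)
    print(lower)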
In the example shown in FIG. 12B, information regarding a car 39 detected based on image information 38 is acquired as first surrounding information 30. Further, information regarding the person 40 detected based on the image information 38 is acquired as the second peripheral information 31.
In this way, the first peripheral information 30 and the second peripheral information 31 may be acquired for each type of object detected based on the image information 38. That is, information regarding a first type of object detected based on the image information 38 may be acquired as the first peripheral information, and information regarding a second type of object different from the first type may be acquired as the second peripheral information.
The first type and the second type of object can be set arbitrarily. For example, any combination such as (person, vehicle), (two-wheeled vehicle, automobile), (adult, child), or (pedestrian, bicycle) can be set as the first type and the second type.
The user 2 can simultaneously grasp information regarding two different types of objects via voice.
The notification information generation unit 18 may determine whether to output the first audio information based on the first peripheral information 30. For example, whether to output the first audio information is determined based on whether the object 5 is detected, the distance to the object 5, and the like. For example, when the object 5 is not detected, or when the distance to the object 5 is greater than a predetermined threshold (for example, 5 m), it is determined that the first audio information is not to be output.
If it is determined that the first audio information is not to be output, the output of the first audio information by the audio output unit 35 is restricted. That is, the first audio information is output when the first peripheral information 30 satisfies a predetermined condition, and is not output when the predetermined condition is not satisfied. Such processing is also possible. The condition serving as the criterion for determining whether to output the first audio information may be set arbitrarily.
Similarly, for the second audio information, whether to output it may be determined based on a predetermined condition, and the output of the second audio information may be controlled based on the determination result.
The same determination condition may be set for the output of the first audio information and for the output of the second audio information, or different determination conditions may be set individually.
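One reading of this gating logic, with the 5 m figure taken from the example above and the rest assumed, is the following predicate:

    # Sketch: decide whether a channel's audio information should be output.
    # The condition structure and default threshold are illustrative.
    def should_output(detected, distance_m, max_distance_m=5.0):
        if not detected:
            return False                     # nothing detected: stay silent
        return distance_m <= max_distance_m  # suppress far-away objects

    print(should_output(detected=True, distance_m=2.0))   # True: play audio
    print(should_output(detected=True, distance_m=8.0))   # False: suppress
    print(should_output(detected=False, distance_m=1.0))  # False: suppress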
Three or more types of peripheral information may be acquired, and musical tone data for reporting each peripheral information may be used to generate and output a plurality of pieces of audio information.
For example, first musical tone data, second musical tone data, and third musical tone data are prepared for each of first peripheral information, second peripheral information, and third peripheral information.
First musical tone data is used to generate first audio information based on the first peripheral information. Second audio information is generated using the second musical tone data based on the second peripheral information. Third audio information is generated using third musical tone data based on the third peripheral information.
By outputting the first to third audio information together, the user 2 can grasp the first to third peripheral information at the same time. Note that the type and number of peripheral information to be simultaneously notified via voice and the type and number of musical tone data linked to the peripheral information are not limited.
Even when there is a large amount of peripheral information to be notified to the user 2, each piece of peripheral information can be notified to the user 2 by appropriately converting it into audio information. The user 2 can instantly perceive a plurality of pieces of peripheral information without their vision being occupied.
As the object detection sensor 10 disposed in the sensor unit 6, a sensor capable of outputting multidimensional information may be used. For example, an image sensor capable of outputting the image information 38 shown in FIG. 12 and elsewhere can be regarded as a sensor that outputs multidimensional information, with the information of each pixel being one-dimensional information. A ToF (Time of Flight) sensor capable of acquiring distance information for each pixel can also be cited as an object detection sensor 10 capable of outputting multidimensional information.
For example, VGA resolution corresponds to 640 × 480 = 307,200 pixels, and Full HD to 1920 × 1080 = 2,073,600 pixels. For example, audio information may be generated using musical tone data for the information of each pixel, and output together.
On the other hand, when there are such a large number of dimensions, it is difficult to assign different audio information to each dimension unless the number of dimensions is thinned out or grouped.
For example, if the amount of processing is to be reduced when placing a stereophonic sound source localization for each pixel, the processing can be sped up by reducing the image size. Also, if a problem arises in which left-right localization differences are relatively easy to distinguish but up-down differences are hard to distinguish, notification that is easy for the user 2 to distinguish can be realized by adopting a setting in which upper pixels are mapped to higher pitches and lower pixels to lower pitches.
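A sketch of this pixel-to-sound mapping, including the thinning (image-size reduction) mentioned above, might look as follows. The grid size, pitch range, and pan convention are assumptions introduced for the example.

    # Sketch: thin a per-pixel depth map to a coarse grid, then map grid row
    # to pitch (upper rows higher) and grid column to stereo pan (left/right).
    def pixel_sound_map(depth, grid_rows=4, grid_cols=4,
                        pitch_lo_hz=200.0, pitch_hi_hz=2000.0):
        rows, cols = len(depth), len(depth[0])
        sounds = []
        for gr in range(grid_rows):
            for gc in range(grid_cols):
                r = gr * rows // grid_rows  # one representative pixel
                c = gc * cols // grid_cols  # per grid cell (thinning)
                pitch = pitch_hi_hz - gr * (pitch_hi_hz - pitch_lo_hz) / (grid_rows - 1)
                pan = -1.0 + gc * 2.0 / (grid_cols - 1)  # -1 = left, +1 = right
                sounds.append((pitch, pan, depth[r][c]))
        return sounds

    depth = [[1.0 + r * 0.001 for c in range(640)] for r in range(480)]  # VGA dummy
    for pitch, pan, d in pixel_sound_map(depth)[:4]:
        print(f"pitch={pitch:6.0f} Hz  pan={pan:+.2f}  distance={d:.2f} m")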
As described above, in the peripheral information notification system 1 according to the present embodiment, the controller 8 acquires the first peripheral information 30 and the second peripheral information 31 based on the detection results of the one or more object detection sensors 10. Then, the first audio information is generated using the first musical tone data based on the first peripheral information 30, and the second audio information is generated using the second musical tone data based on the second peripheral information 31. The first audio information and the second audio information are output together.
Thereby, it becomes possible to notify both the first surrounding information and the second surrounding information via voice, and it becomes possible to notify the user 2 of information on the surrounding environment with high accuracy.
Conventionally, vehicles, drones, and the like have been equipped with sensors for grasping information about the surrounding space and avoiding collisions with surrounding objects. In such cases, in order to avoid collisions with obstacles on the path in advance, a technology is needed that measures the distance to an obstacle and notifies the operator beforehand.
Ranging sensors include, for example, optical laser, ultrasonic, and stereo camera types. The optical laser type, however, has problems such as being unable to measure the distance to objects with low light reflectance and being affected by environmental light, while the ultrasonic type has the problem that it is difficult to narrow the ranging area because the sound waves spread out. To solve such problems, configurations combining a plurality of ranging sensors of different methods can be cited.
For example, there is a method in which one of a plurality of ranging sensors of different methods is selectively switched and used. Alternatively, there is a method of fusing the values of a plurality of ranging sensors according to their reliability and outputting the result as a single detection result.
When these methods are executed, the characteristics of the distance information obtained depending on the environment (for example, the detection range, detection accuracy, and fluctuation error range) differ for each ranging method, so outputting a stable detection result becomes difficult. Furthermore, since the output of the detection result is not consistent, there is the problem that it is difficult for the user 2 to intuitively understand the characteristics of the distance measurement values, making the system hard to use.
For example, assume that a laser ranging sensor and an ultrasonic ranging sensor are selectively switched and used as appropriate. In this case, the output may switch suddenly from a detection result for an obstacle in the front direction obtained by the highly directional laser ranging sensor to a detection result for objects present over a wide surrounding area obtained by the ultrasonic ranging sensor with its wide detection range.
It may then be difficult for the user 2 to understand whether the detection result currently being reported via audio or the like is information about the front direction or information about a wide surrounding area, which can make it difficult to avoid danger.
In this embodiment, the first peripheral information 30 and the second peripheral information 31 are converted into audio information, and it becomes possible to notify the user 2 at the same time. This makes it possible to intuitively grasp multiple pieces of peripheral information through hearing without selectively switching distance sensors or merging outputs.
For example, it is possible to simultaneously grasp surrounding information based on the detection result by the laser distance measurement sensor 25 and surrounding information based on the detection result by the ultrasonic distance measurement sensor 26 via audio. As a result, it becomes possible to grasp, for example, the situation in the front direction and the situation in a wide range of surroundings at the same time, and it becomes possible to sufficiently avoid danger.
In the first embodiment described above, the peripheral information acquisition unit 17 corresponds to an embodiment of the peripheral information acquisition unit according to the present technology.
The audio signal processing units 32 and 33 and the audio synthesis processing unit 34 configured in the notification information generation unit 18 correspond to an embodiment of the audio information generation unit according to the present technology.
The audio output unit 35 configured in the notification control unit 19, which outputs both the first audio information and the second audio information, corresponds to an embodiment of the notification control unit according to the present technology. Note that the audio synthesis processing unit 34 can also be regarded as a block that functions as the notification control unit according to the present technology.
Moreover, in the example shown in FIG. 7, the laser ranging sensor 25 corresponds to an embodiment of the first object detection sensor according to the present technology. Further, the laser distance measurement sensor 25 also corresponds to an embodiment of a first distance measurement sensor that operates according to the first method (optical laser method).
The ultrasonic ranging sensor 26 corresponds to an embodiment of the second object detection sensor according to the present technology. Further, the ultrasonic ranging sensor 26 also corresponds to an embodiment of a second ranging sensor that operates according to a second method (ultrasonic method) different from the first method.
In the example shown in FIG. 8, the front-side ranging sensor 27 corresponds to an embodiment of the first object detection sensor according to the present technology. The front-side ranging sensor 27 also corresponds to an embodiment of a first ranging sensor arranged with the first direction (front direction) as its detection direction.
The ground-side ranging sensor 28 corresponds to an embodiment of the second object detection sensor according to the present technology. Further, the ground-side ranging sensor 28 also corresponds to an embodiment of a second ranging sensor arranged with a detection direction in a second direction (ground direction) different from the first direction.
(Second embodiment)
FIG. 13 is a block diagram showing a configuration example for realizing notification of surrounding information according to the second embodiment.
In the surrounding information notification system 41 according to this embodiment, the configuration of the sensor unit 6 shown in FIG. 8 is adopted. That is, a front-side distance measurement sensor 27 whose detection direction is in the front direction, and a ground-side distance measurement sensor 28 whose detection direction is in a direction toward the measurement point P on the ground 3 (ground direction) are used.
Further, in this embodiment, the surrounding information acquisition section 17 includes a distance information acquisition section 42 and a situation determination section 43.
The distance information acquisition unit 42 acquires first distance information detected by a first ranging sensor arranged with a first direction as its detection direction, and second distance information detected by a second ranging sensor arranged with a second direction, different from the first direction, as its detection direction.
In this embodiment, the distance information detected by the front-side ranging sensor 27 (hereinafter referred to as front-side distance information) is acquired as the first distance information, and the distance information detected by the ground-side ranging sensor 28 (hereinafter referred to as ground-side distance information) is acquired as the second distance information.
The situation determination unit 43 determines the situation of the surrounding environment based on at least one of first detection information, which includes the variation and dispersion of the first distance information, and second detection information, which includes the variation and dispersion of the second distance information.
In this embodiment, both the first detection information, including the variation and dispersion of the front-side distance information, and the second detection information, including the variation and dispersion of the ground-side distance information, are used. That is, the situation of the surrounding environment is determined using four pieces of information: the variation of the front-side distance information, the dispersion of the front-side distance information, the variation of the ground-side distance information, and the dispersion of the ground-side distance information.
Note that in the present disclosure, the "variation of distance information" includes any information related to the variation of the distance information, such as the magnitude of the variation, the direction of the variation (increase/decrease), the duration of the variation, and the duration of no variation.
Further, "distance information variations" includes any information regarding variations in a plurality of distance information detected in time series at a predetermined frame rate. For example, information regarding dispersion of a plurality of pieces of distance information detected in the most recent predetermined period or information regarding dispersion of a predetermined number of distance information detected most recently may be used.
For example, values indicating dispersion, such as the variance and the deviation (standard deviation), are calculated. The duration of variation of the deviation, the magnitude of the variation of the deviation, the direction of the variation of the deviation (increase/decrease), the duration over which the deviation does not vary, and the like are then acquired as information regarding the dispersion of the distance information. Note that the variance and deviation (standard deviation) can be calculated with well-known arithmetic expressions.
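For instance, the dispersion of the most recent readings can be tracked as in the sketch below; the window length of 10 frames is an assumption for the example.

    from collections import deque
    from statistics import pstdev

    # Sketch: track the dispersion (standard deviation) of the most recent
    # distance readings over a sliding window.
    class DistanceDispersion:
        def __init__(self, window=10):
            self.readings = deque(maxlen=window)

        def add(self, distance_m):
            self.readings.append(distance_m)

        def deviation(self):
            return pstdev(self.readings) if len(self.readings) >= 2 else 0.0

    tracker = DistanceDispersion()
    for d in [3.0, 3.0, 3.1, 2.9, 1.2]:  # the last reading jumps
        tracker.add(d)
        print(f"distance={d:.1f} m  deviation={tracker.deviation():.3f}")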
As the situation of the surrounding environment, for example, the presence or absence of a surrounding object 5, the distance to the object 5, the shape of the object 5, the type of the object 5, the size of the object 5, the material of the object 5, and the like can be output as a determination result.
Further, for example, the presence or absence of a fall danger point, the distance to the fall danger point, the type of fall danger point, the shape of the fall danger point, the size of the fall danger point, etc. can be output as the determination results. Of course, other situations may also be determined.
Note that the front-side distance information and the ground-side distance information acquired by the distance information acquisition unit 42 are information included in the peripheral information according to the present technology. The determination result of the situation of the surrounding environment output by the situation determination unit 43 is also information included in the peripheral information according to the present technology.
FIG. 14 is a flowchart showing an example of notification of surrounding information according to this embodiment.
When the user 2 turns on the power of the main body, the surrounding information notification system 41 starts up (step 201). For example, a power button or the like is installed on the sensor body 21, and the user 2 presses the power button or the like. Alternatively, if a voice input device such as a microphone is installed, the power may be turned on by voice input by the user 2.
Automatic calibration of the ranging sensors is started (step 202). In this embodiment, automatic calibration is executed for the front-side ranging sensor 27 and the ground-side ranging sensor 28.
It is determined whether the result of the automatic calibration shows no abnormality (step 203). Specifically, the distance information (distance measurement value) of each ranging sensor is acquired, and it is determined whether the value is appropriate. For example, if the sensor body 21 is worn incorrectly, or if a hand of the user 2 or the like is covering a ranging sensor, an appropriate distance measurement value is not obtained and it is determined that there is an abnormality. It is also determined that there is an abnormality when a ranging sensor has failed.
If the result of automatic calibration is determined to be abnormal (No in step 203), a voice guide indicating an error is output from the speaker 11 (step 204). For example, a voice guide such as "Please check whether the device is worn properly" is output.
The user 2 corrects the orientation of each distance measuring sensor as a countermeasure for the error (step 205).
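The plausibility check of steps 202 to 205 could be sketched as follows; the valid measurement range is an assumed sensor specification, not one given in the present disclosure.

    # Sketch of the automatic calibration check (steps 202-205).
    # The valid measurement range is an illustrative assumption.
    def calibration_ok(front_m, ground_m, valid_range=(0.05, 10.0)):
        lo, hi = valid_range
        return all(lo <= v <= hi for v in (front_m, ground_m))

    # A covered or misaligned sensor returns an implausible value such as 0.0.
    if not calibration_ok(front_m=0.0, ground_m=1.4):
        print("Please check whether the device is worn properly")  # voice guide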
If it is determined that the result of the automatic calibration shows no abnormality (Yes in step 203), the user 2 moves while performing a ranging search for obstacles (objects 5) and fall danger points. In the surrounding information notification system 41, the distance information acquisition unit 42 acquires the front-side distance information and the ground-side distance information detected at a predetermined frame rate (step 206).
The situation determination unit 43 determines the situation of the surrounding environment based on the variation and dispersion of the front-side distance information and the ground-side distance information, which are the two channels of distance information for the front side and the ground side. In this embodiment, determination results regarding obstacles and fall danger points are output. Specifically, it is determined whether an obstacle or a fall danger point exists (step 207).
FIG. 15 is a table showing an example of determination of the situation of the surrounding environment.
FIGS. 16 to 18 are schematic diagrams for explaining the determination example shown in FIG. 15.
FIG. 16 is a schematic diagram showing a case where an obstacle exists in a position in the front direction of the user 2.
FIG. 17 is a schematic diagram showing a case where an obstacle exists on the ground 3 in front of the user 2.
FIG. 18 is a schematic diagram showing a case where a fall danger point exists at a position in the front direction of the user 2.
As shown in FIGS. 16A and 16B, when an obstacle 44 (object 5) with a height of H or more exists at a position in the front direction, the front-side distance becomes smaller as the user 2 moves in the front direction, and the deviation of the front-side distance also continues to fluctuate. Therefore, when the front-side distance becomes small and the duration of fluctuation of the deviation of the front-side distance is longer than a predetermined time, the situation determination unit 43 can determine that an obstacle 44 exists at a position in the front direction.
More specifically, it can determine that an obstacle 44 with a height of at least H exists at a position in the front direction.
 なお、障害物44の存在を判定する際に、正面距離情報に関する閾値が設定されてもよい。例えば、正面距離情報が所定の閾値よりも小さくなった場合に、障害物44が存在すると判定されてもよい。
 例えば、5mや10mといった閾値よりも、正面距離が小さくなった場合に、正面方向の位置に障害物44が存在すると判定される。これにより、衝突する可能性のある距離内に存在する障害物44を高精度に検出することが可能となる。
Note that when determining the presence of the obstacle 44, a threshold regarding front distance information may be set. For example, when the front distance information becomes smaller than a predetermined threshold value, it may be determined that the obstacle 44 exists.
For example, when the front distance becomes smaller than a threshold value such as 5 m or 10 m, it is determined that the obstacle 44 exists at a position in the front direction. Thereby, it becomes possible to detect with high precision the obstacle 44 that exists within the distance where there is a possibility of collision.
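Combining the decrease of the front side distance, the duration of the deviation fluctuation, and the distance threshold, the determination can be sketched as follows. The threshold values, the deviation cutoff, and the history format are assumptions chosen for illustration.

# Minimal sketch of the frontal obstacle test; thresholds are assumed values.
FRONT_THRESHOLD_M = 5.0    # "predetermined threshold" (e.g. 5 m or 10 m)
MIN_FLUCTUATION_S = 1.0    # "predetermined time" for deviation fluctuation
DEVIATION_EPS = 0.05       # deviation above this counts as "fluctuating"

def frontal_obstacle(front_history):
    """front_history: list of (time_s, distance_m, deviation) tuples.

    True when the front distance is decreasing, has dropped below the
    threshold, and its deviation has fluctuated longer than the
    predetermined time."""
    decreasing = front_history[-1][1] < front_history[0][1]
    below = front_history[-1][1] < FRONT_THRESHOLD_M
    fluct = [t for t, _, dev in front_history if dev > DEVIATION_EPS]
    long_enough = bool(fluct) and (fluct[-1] - fluct[0]) >= MIN_FLUCTUATION_S
    return decreasing and below and long_enough

history = [(0.0, 8.0, 0.00), (0.5, 6.5, 0.12), (1.0, 5.5, 0.15), (2.0, 4.2, 0.14)]
print(frontal_obstacle(history))   # True: obstacle 44 within collision distance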
Note that the "predetermined time" serving as the criterion for the determination may be appropriately set when constructing the surrounding information notification system 41. For example, the obstacle 44 is placed at a position in the front direction and calibration or the like is executed, whereby an appropriate time over which the obstacle 44 can be detected is calculated. A threshold value is set as the "predetermined time" based on the calculated time. For example, the calculated time may be adopted as the threshold value as it is, or a time close to the calculated time may be adopted as the threshold value. When the fluctuation time of the deviation of the front side distance is longer than this threshold value, it can be determined that the fluctuation time of the deviation of the front side distance is "longer than the predetermined time". Of course, the setting is not limited to this.
In FIG. 16A, a car 45 exists as the obstacle 44. In FIG. 16B, an ascending staircase 46 exists as the obstacle 44.
As shown in FIG. 16A, when the user 2 moves toward the car 45, the car 45 relatively approaches the vicinity of the measurement point P. When a portion of the car 45 (for example, a tire, a bumper, or the like) enters the detection range of the ground side distance measuring sensor 28, the ground side distance information becomes smaller. Conversely, the ground side distance information hardly fluctuates until the car 45 approaches the vicinity of the measurement point P.
As shown in FIG. 16A, when the car 45 approaches the vicinity of the measurement point P, the front side distance information becomes approximately equal to the distance D from the user 2 to the measurement point P.
As shown in FIG. 16B, when the user 2 moves toward the ascending staircase 46, the ascending staircase 46 relatively approaches the vicinity of the measurement point P. When the lowest step (first step) of the ascending staircase 46 enters the detection range of the ground side distance measuring sensor 28, the ground side distance information becomes smaller. Conversely, the ground side distance information hardly fluctuates until the ascending staircase 46 approaches the vicinity of the measurement point P.
The upper steps of the ascending staircase 46 are located farther back in the front direction than the lowest step, away from the user 2. Therefore, when the ascending staircase 46 approaches the vicinity of the measurement point P, the front side distance information takes a value larger than the distance D from the user 2 to the measurement point P.
Thereafter, as the user 2 moves toward the ascending staircase, the ground side distance information becomes even smaller, and the deviation of the ground side distance information continues to fluctuate. That is, until the front side distance information becomes approximately equal to the distance D to the measurement point P, the ground side distance information keeps decreasing and the deviation of the ground side distance information continues to fluctuate.
Focusing on these points, it is possible to determine whether the obstacle 44 existing at a position in the front direction is, for example, the car 45 or the ascending staircase 46. Specifically, by focusing on the front side distance information at the timing when the ground side distance information becomes smaller, it is possible to determine whether the obstacle existing at a position in the front direction is the car 45 or the ascending staircase 46.
In the present embodiment, when the front side distance information becomes smaller, the fluctuation time of the deviation of the front side distance information is longer than the predetermined time, and the ground side distance information remains unchanged until the front side distance information becomes smaller than a predetermined threshold value, it is determined that an object other than an object built obliquely upward in a direction away from the user exists at a position in the front direction.
In addition, when the front side distance information becomes smaller, the fluctuation time of the deviation of the front side distance information is longer than the predetermined time, and, before the front side distance information becomes smaller than the predetermined threshold value, the ground side distance information becomes smaller and the fluctuation time of the deviation of the ground side distance information is longer than a predetermined time, it is determined that an object built obliquely upward in a direction away from the user exists at a position in the front direction.
The "predetermined threshold value" for the front side distance information is set based on the distance D on the ground 3 from the user 2 to the measurement point P. For example, the distance D may be used as the threshold value as it is. Alternatively, a value close to the distance D may be used as the threshold value. For example, the "predetermined threshold value" may be calculated by calibration or the like, or may be set arbitrarily by the user 2.
"A state in which the distance information does not fluctuate" includes not only a state in which the distance information does not fluctuate at all but also a state in which there is almost no fluctuation. For example, a state in which a relatively narrow range is set and the distance information stays within that range can be defined as "a state in which the distance information does not fluctuate". For example, a range of ±10% with respect to reference distance information can be set as the range for determining whether the distance information is in a state of no fluctuation. Of course, the range is not limited to this. It may also be set arbitrarily on the system side or the user side.
In the present embodiment, the presence of an upward staircase-shaped object is determined as the "object built obliquely upward in a direction away from the user". Accordingly, an "object other than an object built obliquely upward in a direction away from the user" is an object other than an upward staircase-shaped object.
In FIG. 16A, the car 45 is one embodiment of an object other than an upward staircase-shaped object. Of course, the object is not limited to the car 45, and any object other than an upward staircase-shaped object is included.
In FIG. 16B, the ascending staircase 46 is one embodiment of an upward staircase-shaped object. The object is not limited to the ascending staircase 46; for example, an ascending escalator is also included among upward staircase-shaped objects, as is any other staircase-shaped object.
In the table shown in FIG. 15, the "frontal obstacle" corresponds to an object other than an upward staircase-shaped object. The "up stairs/up escalator" corresponds to an upward staircase-shaped object.
In the table shown in FIG. 15, as the condition on the ground side distance information for distinguishing between the "frontal obstacle" and the "up stairs/up escalator", the condition covering the period until the front side distance information becomes smaller than the predetermined threshold value is described.
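The distinction in FIG. 15 hinges on whether the ground side channel starts fluctuating before the front side distance crosses its threshold. A minimal sketch, taking precomputed flags as inputs rather than performing real sensor processing:

def classify_frontal(front_below_threshold, ground_fluctuated_before_threshold):
    """Classify the object ahead once the frontal obstacle condition holds.

    ground_fluctuated_before_threshold: True when the ground side distance
    became smaller and its deviation fluctuated longer than the predetermined
    time before the front side distance dropped below the threshold."""
    if not front_below_threshold:
        return "no frontal determination yet"
    if ground_fluctuated_before_threshold:
        return "up stairs / up escalator"    # object built obliquely upward
    return "frontal obstacle"                # e.g. the car 45

print(classify_frontal(True, False))   # -> "frontal obstacle"
print(classify_frontal(True, True))    # -> "up stairs / up escalator"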
In FIG. 17, an obstacle 48 (object 5) whose height is lower than H exists on the ground 3. Hereinafter, an obstacle 48 whose height is lower than H is referred to as a ground obstacle 48 using the same reference numeral. The ground obstacle 48 shown in FIG. 17A is larger in size than the ground obstacle 48 shown in FIG. 17B.
As shown in FIG. 15, in the present embodiment, when the front side distance information does not fluctuate and the ground side distance information becomes smaller with the fluctuation time of the deviation of the ground side distance information being longer than a predetermined time, it is determined that a ground obstacle 48 larger than a predetermined size exists on the ground 3.
In addition, when the front side distance information does not fluctuate and the ground side distance information becomes smaller with the fluctuation time of the deviation of the ground side distance information being shorter than the predetermined time, it is determined that a ground obstacle 48 smaller than the predetermined size exists on the ground 3.
For example, when the ground obstacle 48 shown in FIG. 17A exists on the ground 3, the fluctuation time of the deviation of the ground side distance information is longer than when the ground obstacle 48 shown in FIG. 17B exists on the ground 3. Based on the fluctuation time of the deviation of the ground side distance information, it is possible to determine the relative size of the ground obstacle 48 existing on the ground 3. That is, it is possible to determine whether the ground obstacle 48 is larger than the "predetermined size".
For example, a threshold value (the "predetermined time") is set for the fluctuation time of the deviation of the ground side distance information. When the fluctuation time of the deviation of the ground side distance information is longer than the threshold value, it is determined that a relatively large ground obstacle 48 exists on the ground 3. When the fluctuation time of the deviation of the ground side distance information is shorter than the threshold value, it is determined that a relatively small ground obstacle 48 exists on the ground 3.
By appropriately setting the threshold value, it is possible to set, as appropriate, what size of ground obstacle 48 is treated as a relatively large ground obstacle 48 and what size is treated as a relatively small ground obstacle 48. That is, by appropriately setting the threshold value, the "predetermined size" serving as the criterion for determining the size of the ground obstacle 48 can be set as appropriate.
For example, a "predetermined size" serving as the criterion for determining the size of the ground obstacle 48 may be set first, and the threshold value (the "predetermined time") for the fluctuation time of the deviation of the ground side distance information may then be set so that determination based on the set "predetermined size" becomes possible. The threshold values serving as determination criteria (the "predetermined time" and the "predetermined size") may be set arbitrarily on the system side or the user side.
The moving speed (walking speed) of the user 2 may be acquired, and the threshold value may be set using that information.
The size of the ground obstacle 48 may also be defined arbitrarily, for example based on the height of the ground obstacle 48, based on its area as viewed from above, or based on both of these parameters.
"Ground obstacle (large)" in the table of FIG. 15 corresponds to a ground obstacle 48 existing on the ground 3 that is larger than the predetermined size (relatively large in size). "Ground obstacle (small)" corresponds to a ground obstacle 48 existing on the ground 3 that is smaller than the predetermined size (relatively small in size).
In FIG. 18A, a descending staircase 51 exists as a fall danger point 50 at a position in the front direction of the user 2. In FIG. 18B, a platform edge 52 exists as a fall danger point 50 at a position in the front direction.
As shown in FIG. 15, in the present embodiment, when the ground side distance information becomes larger, it is determined that a fall danger point 50, which is a region recessed downward, exists at a position in the front direction. In the table of FIG. 15, the condition on the front side distance information is described as a state of no fluctuation or a decreasing state. The determination is not limited to this; regardless of the condition of the front side distance information, it may be determined that a fall danger point 50 exists when the ground side distance information becomes larger.
In FIGS. 18A and 18B, the descending staircase 51 and the platform edge 52 are illustrated as fall danger points 50. The determination is not limited to these; the presence of a descending escalator, or of any fall danger point 50 where there is a risk of the user 2 falling, can also be determined based on the ground side distance information. Note that the falling point in FIG. 15 corresponds to the fall danger point 50.
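Detection of a fall danger point reduces to noticing a rise of the ground side distance above its reference value to the ground 3. A minimal sketch, where the reference distance and the tolerance are assumed values:

GROUND_REFERENCE_M = 1.2   # assumed distance to the ground 3 at the crossing angle
RISE_TOLERANCE = 0.15      # assumed relative rise treated as "becomes larger"

def fall_danger_point(ground_m):
    """True when the ground side distance grows clearly beyond the reference,
    i.e. the beam reaches past a downward recess such as the descending
    staircase 51 or the platform edge 52."""
    return ground_m > GROUND_REFERENCE_M * (1.0 + RISE_TOLERANCE)

print(fall_danger_point(1.25))   # False: still reading the ground 3
print(fall_danger_point(2.40))   # True: fall danger point 50 ahead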
The notification information generation unit 18 and the notification control unit 19 execute notification processing for the obstacles 44 (48) and the fall danger points 50 (step 208).
FIG. 19 is a table showing an example of notification processing for obstacles and fall danger points.
In the present embodiment, the notification information generation unit 18 determines the danger level for the movement of the user 2 based on the determination result by the situation determination unit 43. Notification information is then generated and output in accordance with the danger level.
In the present embodiment, it is possible to determine both the presence of an obstacle 44 (48) and the presence of a fall danger point 50. As shown in FIG. 19, the danger level is determined to be higher when a fall danger point 50 exists than when an obstacle 44 (48) exists.
Specifically, for "down stairs, down escalators, and other falling points", the degree of danger to the user 2 is regarded as high, and the danger level is set to "high". For the obstacles 44 (48), the danger level is set to "medium" or "low", which is lower than "high". This makes it possible to notify the user 2 of the danger of falling more strongly.
In the present embodiment, the danger level is also determined according to the type of the obstacle 44 (48). Specifically, for the "frontal obstacle", the degree of injury to the user 2 at the time of a collision is regarded as moderate, and the danger level is set to "medium". For the "ground obstacle (large)", the possibility of tripping and the degree of injury to the user 2 at the time of a collision are regarded as moderate, and the danger level is set to "medium".
For the "ground obstacle (small)", the possibility of tripping and the degree of injury to the user 2 at the time of a collision are regarded as low, and the danger level is set to "low". For the "up stairs/up escalator", the degree of injury to the user 2 is regarded as low, and the danger level is set to "low". By setting the danger levels in this way, highly accurate notification is realized. Note that the setting of the danger levels is not limited to this example. The levels may be dynamically set on the system side based on data on the user state at the time (gender, age, health condition, whether a hearing aid is worn, and the like), or may be set arbitrarily on the user side. For example, even among visually impaired persons, the degree of danger at the time of a fall or the like is considered to differ between a young person and an elderly person. Therefore, processing may be performed such that the danger level of the "ground obstacle (large)" is set to "medium" when the user 2 is young but is changed to "high" when the user 2 is elderly.
Note that the distance D and the musical tone data may also be set as appropriate based on data on the user state (gender, age, health condition, whether a hearing aid is worn, and the like). For example, since a person with hearing loss is generally said to have more difficulty hearing sounds in a high frequency band than a person with normal hearing, when the user 2 has hearing loss, processing may be performed such that low-register instruments rather than high-register instruments are mainly assigned to the musical tone data.
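The assignments of FIG. 19, together with the user-state adjustment described above, can be expressed as a lookup table plus an override rule. The table values follow FIG. 19; the elderly-user override mirrors the example in the text, and the age boundary of 65 is an illustrative assumption.

DANGER_LEVELS = {   # per FIG. 19
    "frontal obstacle": "medium",
    "ground obstacle (large)": "medium",
    "ground obstacle (small)": "low",
    "up stairs / up escalator": "low",
    "down stairs / down escalator / other falling point": "high",
}

def danger_level(situation, user_age=None):
    """Look up the danger level, optionally adjusting for the user state.

    The elderly override follows the example in the text; the age
    boundary of 65 is an illustrative assumption."""
    level = DANGER_LEVELS[situation]
    if situation == "ground obstacle (large)" and user_age is not None and user_age >= 65:
        level = "high"
    return level

print(danger_level("ground obstacle (large)", user_age=30))   # -> medium
print(danger_level("ground obstacle (large)", user_age=72))   # -> high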
As shown in FIG. 19, in the present embodiment, notification by sound and notification by vibration are used selectively. Specifically, notification by sound is executed for the "frontal obstacle" and the "up stairs/up escalator". Notification by vibration is executed for the "ground obstacle (large)", the "ground obstacle (small)", and "down stairs, down escalators, and other falling points".
That is, in the present embodiment, audio information is generated as the notification information in order to report the situation of the surrounding environment corresponding to the front direction. On the other hand, vibration information is generated as the notification information in order to report the situation of the surrounding environment corresponding to the ground direction.
This allows the user 2 to grasp the situation in the front direction through sound and to grasp the situation in the ground direction through vibration. That is, it becomes possible to report information about the surrounding environment with high accuracy.
Of course, the simultaneous notification to the user 2 via sound described in the first embodiment may also be adopted. Alternatively, vibration information may be generated as the notification information for reporting the situation of the surrounding environment corresponding to the front direction, and audio information may be generated as the notification information for reporting the situation of the surrounding environment corresponding to the ground direction.
As shown in FIG. 19, in the present embodiment, a "safe distance range" and a "notification distance range" are set.
The "safe distance range" is a distance range that is determined to be safe because it is sufficiently far from the obstacle 44 (48) or the fall danger point 50. When the distance from the obstacle 44 (48) or the fall danger point 50 falls within the "safe distance range", notification is regarded as unnecessary and reproduction of the notification information is stopped. That is, the output of audio information from the speaker 11 and the output of the vibration pattern corresponding to the vibration information by the vibration device 12 are stopped.
The "notification distance range" is a distance range in which the obstacle 44 (48) or the fall danger point 50 is approaching and it is determined that notification is necessary. When the distance from the obstacle 44 (48) or the fall danger point 50 falls within the "notification distance range", notification corresponding to the danger level is executed, for example, as follows.
"Frontal obstacle" (danger level "medium"): a discontinuous mid-range sound is output.
"Up stairs/up escalator" (danger level "low"): a discontinuous low-range sound is output.
"Ground obstacle (large)" (danger level "medium"): a discontinuous mid-range vibration is output.
"Ground obstacle (small)" (danger level "low"): a discontinuous high-range vibration is output.
"Down stairs/down escalator/other falling points": a discontinuous low-range vibration is output.
In this way, by executing notification corresponding to the danger level, highly accurate notification is realized. This allows the user 2 to intuitively grasp the danger level.
The "safe distance range" and the "notification distance range" can be set using, for example, a threshold value for the distance. When the obstacle 44 (48) or the fall danger point 50 is farther away than the threshold value, it is determined to be within the "safe distance range". When the obstacle 44 (48) or the fall danger point 50 is closer than the threshold value, it is determined to be within the "notification distance range".
The threshold value is set, for example, based on the distance D on the ground 3 from the user 2 to the measurement point P. For example, the distance D may be used as the threshold value as it is. Alternatively, a value close to the distance D may be used as the threshold value.
For example, when the distance D from the user 2 to the measurement point P is relatively small, the distance D is used as the threshold value as it is. When the distance D from the user 2 to the measurement point P is relatively large, a value shorter than the distance D is used as the threshold value. Any other setting method may also be adopted.
Note that the method of determining the danger level, the method of outputting the notification information, the method of setting the "safe distance range" and the "notification distance range", and the like are not limited and may be set arbitrarily. For example, personalization and customization may be performed freely based on the environment in which the user 2 moves (such as what objects are present on the route walked every day), information about the walking of the user 2 (such as walking speed), and the like.
FIG. 20 is a schematic diagram for explaining another example of the notification processing for obstacles and fall danger points.
In the example shown in FIG. 20, the "notification distance range" is further divided into a "soft notification distance range" and a "danger notification distance range".
The "soft notification distance range" is a distance range for notifying that the obstacle 44 (48) or the fall danger point 50 is approaching. That is, it is a distance range for notifying that vigilance is required.
The "danger notification distance range" is a distance range in which the obstacle 44 (48) or the fall danger point 50 is immediately ahead and the danger level is high. That is, it is a distance range for notifying that a collision with the obstacle 44 (48), or a fall at the fall danger point 50, is imminent.
In the example shown in FIG. 20, 4 m is set as the distance indicating the boundary between the "safe distance range" and the "notification distance range". Then, 2 m is set as the distance indicating the boundary between the "soft notification distance range" and the "danger notification distance range".
That is, the range from 0 m to 2 m from the obstacle or fall danger point is the "danger notification distance range". The range from 2 m to 4 m from the obstacle or fall danger point is the "soft notification distance range". The range of 4 m or more from the obstacle or fall danger point is the "safe distance range". Here, the distances indicating these boundaries may be set arbitrarily.
In the "danger notification distance range", in order to notify the user 2 that the danger is greater than in the "soft notification distance range", a notification method capable of sufficiently calling attention is adopted.
In the example shown in FIG. 20, the tempo of the discontinuously output sound and vibration is switched. Specifically, in the "soft notification distance range", discontinuous sound and vibration are output at a relatively low tempo. In the "danger notification distance range", discontinuous sound and vibration are output at a relatively high tempo. Of course, the control is not limited to this; the intensity of the sound, the playback speed of a tune, the BPM, the intensity of the vibration, the frequency, and the like may also be controlled.
By changing the notification method between the "soft notification distance range" and the "danger notification distance range", the user 2 can intuitively grasp how close the obstacle 44 (48) or the fall danger point 50 is, and can grasp the surrounding situation with high accuracy.
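The range boundaries of FIG. 20 and the tempo switching can be sketched as follows. The 4 m and 2 m boundaries come from the example above, while the tempo values in BPM are illustrative assumptions; the embodiment only requires a relatively low tempo in the soft range and a relatively high tempo in the danger range.

SAFE_BOUNDARY_M = 4.0    # boundary between safe and notification ranges (FIG. 20)
DANGER_BOUNDARY_M = 2.0  # boundary between soft and danger notification ranges

def notification_for(distance_m):
    """Map the distance to an avoidance target onto a range and a tempo.

    Returned BPM values are illustrative assumptions; None means
    reproduction of the notification information is stopped."""
    if distance_m >= SAFE_BOUNDARY_M:
        return ("safe distance range", None)
    if distance_m >= DANGER_BOUNDARY_M:
        return ("soft notification distance range", 60)    # low tempo (assumed)
    return ("danger notification distance range", 140)     # high tempo (assumed)

for d in (5.0, 3.0, 1.0):
    print(d, notification_for(d))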
The notification of surrounding information continues until the user 2 turns off the main body power. When the user 2 turns off the main body power, the operation of the present surrounding information notification system 41 ends (step 209).
FIG. 21 is a schematic diagram showing an example of detection of the "ground obstacle (large)" and the "ground obstacle (small)".
As shown in FIG. 21, until the detection range of the ground side distance measuring sensor 28 reaches the ground obstacle 48, the ground side distance information remains unchanged with the distance to the ground 3 as a reference, and the deviation of the ground side distance information is stable. That is, as shown in FIG. 21, this is a "deviation stable period".
When the ground obstacle 48 approaches the position of the measurement point P, detection of the ground obstacle 48 starts and the ground side distance information becomes smaller. While the ground obstacle 48 is being detected, the deviation of the ground side distance information fluctuates, resulting in a "deviation fluctuation period" as shown in FIG. 21.
When the ground side distance information returns to the value before the obstacle detection, the detection of the ground obstacle 48 ends. Thereafter, the ground side distance information again remains unchanged with the distance to the ground 3 as a reference, and a "deviation stable period" resumes as shown in FIG. 21.
Here, as shown in FIG. 21, when detecting the ground obstacle 48, the deviation of the ground side distance information may take an out-of-specification value at the detection start timing and the detection end timing. Therefore, processing may be performed so that the deviation of the ground side distance information at the detection start timing of the ground obstacle 48 and the deviation at the detection end timing are not included in the obstacle determination.
In this case, as shown in FIG. 21, the deviation data excluding the deviations of the ground side distance information at the detection start timing and the detection end timing are used as the data usable for obstacle determination. This makes it possible to detect the ground obstacle 48 with high accuracy.
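Excluding the out-of-specification deviations at the detection start and end timings can be done by trimming the first and last samples of the fluctuation period. A minimal sketch, with illustrative sample values:

def usable_deviation_data(fluctuation_period):
    """Trim the deviation samples at the detection start and end timings,
    which may take out-of-specification values, and keep the rest for
    obstacle determination (FIG. 21)."""
    return fluctuation_period[1:-1] if len(fluctuation_period) > 2 else []

period = [0.9, 0.14, 0.15, 0.13, 0.8]   # first/last samples are edge artifacts
print(usable_deviation_data(period))    # -> [0.14, 0.15, 0.13]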
As described above, in the surrounding information notification system 41 according to the present embodiment, the controller 8 generates the notification information based on the first detection information, which includes the fluctuation and dispersion of the front side distance information (first distance information) detected by the front side distance measuring sensor 27 (first distance measuring sensor), and the second detection information, which includes the fluctuation and dispersion of the ground side distance information (second distance information) detected by the ground side distance measuring sensor 28 (second distance measuring sensor). This makes it possible to detect information about the surrounding environment with high accuracy and report it to the user 2.
Dangerous situations that can occur while a visually impaired person, such as a person with low vision or total blindness, is walking include "collision with an obstacle" and "tripping or falling due to an abnormality of the floor surface". It is important to realize an accessibility device that can detect both of these and achieve danger avoidance for visually impaired persons with an inexpensive device and an inexpensive system configuration.
In the surrounding information notification system 41 according to the present embodiment, such an accessibility device can be realized using lightweight and inexpensive distance measuring sensors, without using camera recognition technology, which is unsuitable for low cost, light weight, and miniaturization.
As shown in FIG. 15 and elsewhere, by focusing on the fluctuation and dispersion of the two channels of distance information, for the "front direction" and the "ground direction", the surrounding environment and situation during walking can be grasped in detail and instantly. That is, "collision with an obstacle" and "tripping or falling due to an abnormality of the floor surface" can be detected at the same time. This has the effect of improving the safety and sense of security of the user 2, who is visually impaired.
In the present embodiment, it is also possible to estimate obstacle characteristics (degree of danger). In addition, the hurdle to introduction as a safe and inexpensive accessibility device is lowered, which can contribute to society.
As also described above, the detection directions of the plurality of distance measuring sensors may be set arbitrarily. Based on the fluctuation and dispersion of the distance information of a plurality of distance measuring sensors whose detection directions are set in various directions, information about various environments can be reported to the user 2 with high accuracy.
By appropriately setting the number of distance measuring sensors and the detection direction of each distance measuring sensor, the type of fall danger point may be made determinable. For example, it may be possible to determine whether a fall danger point has a downward staircase shape or some other shape. It may also be possible to determine that the point is the edge of a station platform. The danger level may then be determined according to the type of fall danger point.
Based on the routes and paths that the user 2 takes every day, the threshold value for determining whether an obstacle is detected in the front direction, and the distance D from the user 2 to the measurement point P (the crossing angle θ of the direction toward the ground 3), may be adjusted.
For example, route data frequently used by the user 2 is acquired using GPS or the like. Based on the route data, settings such as the maximum distance for obstacle detection in the front direction and the irradiation angle toward the ground may be automatically adjusted (personalized).
For example, in the case of a route with many ascending staircases, settings suitable for detecting ascending staircases are adopted. In the case of a route with many fall danger points, settings suitable for detecting fall danger points are adopted. The settings may also be automatically adjusted partway along the route.
When a voice input device such as a microphone is mounted, settings such as the maximum distance for obstacle detection in the front direction and the irradiation angle toward the ground may be adjusted by voice input from the user 2.
Note that, as a mechanism for automatically adjusting settings such as the maximum distance for obstacle detection in the front direction and the irradiation angle toward the ground, a motor mechanism, an actuator mechanism, or the like can be configured as appropriate.
The configuration capable of controlling the detection direction of each distance measuring sensor can also be described as a direction control unit that varies at least one of the first direction and the second direction.
It may also be possible to selectively switch among a mode in which only the front side distance measuring sensor 27 is driven, a mode in which only the ground side distance measuring sensor 28 is driven, and a mode in which both of the two distance measuring sensors 27 and 28 are driven. For example, a mode in which only an important channel is driven may be set automatically depending on the remaining battery level, or the user 2 may be able to set the mode as appropriate.
When an image sensor or the like is mounted and a person can be detected based on image information, the detected person may be excluded from obstacle detection.
As the configuration of the sensor unit 6, a configuration integrated with a smartphone may also be adopted, utilizing a distance measuring sensor or the like mounted on the smartphone.
In the second embodiment described above, the distance information acquisition unit 42 corresponds to one embodiment of the distance information acquisition unit according to the present technology.
The situation determination unit 43 corresponds to one embodiment of the situation determination unit according to the present technology.
The notification information generation unit 18 and the notification control unit 19 function as one embodiment of a notification unit that generates and outputs notification information for reporting the determination result by the situation determination unit.
(Third embodiment)
FIG. 22 is a block diagram showing a configuration example for realizing notification of surrounding information according to the third embodiment.
In the surrounding information notification system 54 according to the present embodiment, the sensor unit 6 is further equipped with a 9-axis sensor 55 and a GPS 56.
The 9-axis sensor 55 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor. The 9-axis sensor 55 can detect the acceleration, angular velocity, and orientation of the sensor unit 6 (sensor body 21) about three axes. Alternatively, an IMU (Inertial Measurement Unit) sensor having any other configuration may be used. The GPS 56 acquires information on the current position of the sensor unit 6 (sensor body 21). A sensor that acquires biological information such as pulse, heartbeat, body temperature, and brain waves may also be used as necessary.
In the present embodiment, the controller 8 further includes a self-position estimation unit 57 and a map information generation unit 58. These blocks are realized by the processor of the controller 8 executing a program according to the present technology. Dedicated hardware such as an IC (integrated circuit) may be used as appropriate to realize each functional block.
The self-position estimation unit 57 estimates the self-position of the sensor unit 6 (sensor body 21). In the present disclosure, the self-position includes the position and attitude of the sensor body 21. The self-position estimation unit 57 can calculate position information indicating where the sensor body 21 is located and attitude information such as which direction the sensor body 21 is facing.
Furthermore, based on the attitude information of the sensor body 21, it is possible to detect which direction the current front direction of the user 2 is. That is, it is possible to detect which direction the detection direction of the front side distance measuring sensor 27 is facing.
The attitude, position, and movement (motion) of the sensor body 21 can also be regarded as the attitude, position, and movement (motion) of the user 2.
The self-position of the sensor body 21 is calculated from the detection results from the sensor unit 6. In order to estimate the self-position, an image sensor or the like for acquiring surrounding image information may also be mounted.
For example, in order to calculate the self-position of the sensor body 21, a three-dimensional coordinate system is set for the surrounding space. For example, coordinate values (for example, XYZ coordinate values) defined by an absolute coordinate system (world coordinate system) may be used. Alternatively, coordinate values (for example, xyz coordinate values or uvd coordinate values) defined by a relative coordinate system with a predetermined point as a reference (origin) may be used. When a relative coordinate system is used, the origin serving as the reference may be set arbitrarily.
The self-position estimation unit 57 calculates position coordinates in the set three-dimensional coordinate system. In addition, with the X axis as the pitch axis, the Y axis as the roll axis, and the Z axis as the yaw axis, the pitch angle, roll angle, and yaw angle are calculated with the front direction of the user 2 (sensor body 21) as a reference. Of course, the specific format and the like of the position information and attitude information of the user 2 (sensor body 21) are not limited.
The algorithm for estimating the self-position of the sensor body 21 is not limited, and any algorithm such as SLAM (Simultaneous Localization and Mapping) may be used. Any machine learning algorithm or the like may also be used.
The map information generation unit 58 generates an obstacle space map corresponding to the surrounding environment based on the history of determination results by the situation determination unit 43. The obstacle space map corresponds to one embodiment of surrounding map information according to the present technology.
As described in the second embodiment, the situation determination unit 43 can detect the obstacles 44 (48) and the fall danger points 50. In the present embodiment, based on the front direction information detected by the self-position estimation unit 57, the position information of the detected obstacles 44 (48) and fall danger points 50 in the three-dimensional coordinate system can be calculated.
That is, based on the history of determination results for the obstacles 44 (48) and fall danger points 50 detected up to the present, an obstacle space map containing the position information of the obstacles 44 (48) and fall danger points 50 detected up to the present is generated.
FIG. 23 is a flowchart showing an example of notification of surrounding information according to the present embodiment.
Steps 301 to 307 in FIG. 23 are similar to steps 201 to 207 shown in FIG. 14.
In the present embodiment, in step 308, the map information generation unit 58 generates the obstacle space map.
FIG. 24 is a schematic diagram showing an example of the obstacle space map. In FIG. 24, a three-dimensional coordinate system is set with the XY directions as the horizontal directions and the Z direction as the height direction.
In step 308, based on the self-position of the user 2 and the yaw angle (rotation angle about the Z axis) of the distance measuring sensor (front side distance measuring sensor 27), the XY coordinate values of the detected obstacles 44 (48) and fall danger points 50 are calculated as position information. Then, an obstacle space map 60 containing the position information of the detected obstacles 44 (48) and fall danger points 50 is generated.
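The XY position of a detected object can be obtained by projecting the measured distance along the sensor's yaw angle from the user's estimated position. A minimal sketch, assuming yaw is measured counterclockwise from the +X axis (an illustrative convention):

import math

def target_xy(user_x, user_y, yaw_rad, distance_m):
    """Project the measured distance along the front side sensor's yaw
    angle (rotation about the Z axis) from the user's self-position to
    obtain the target's XY coordinates for the obstacle space map 60.

    The yaw convention (counterclockwise from the +X axis) is an
    illustrative assumption."""
    return (user_x + distance_m * math.cos(yaw_rad),
            user_y + distance_m * math.sin(yaw_rad))

# User at (2.0, 3.0) facing 90 degrees; obstacle detected 4 m ahead.
print(target_xy(2.0, 3.0, math.radians(90.0), 4.0))   # approximately (2.0, 7.0)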
In the example shown in FIG. 24, the obstacles 44 (48) and the fall danger points 50 are all schematically illustrated in the same manner. Of course, the obstacle space map 60 can be generated so as to also include information such as the type of the obstacle 44 (48) and the type of the fall danger point 50.
That is, it is possible to generate an obstacle space map 60 containing spatial position information on the positions, types, and attributes of the obstacles 44 (48) and fall danger points 50.
Hereinafter, the obstacles 44 (48) and the fall danger points 50 may be collectively referred to as avoidance targets. An avoidance target can also be called a dangerous object.
In step 309, the notification information generation unit 18 and the notification control unit 19 execute notification via sound. In the present embodiment, notification by stereophonic sound is executed for the avoidance target closest to the user 2 on the obstacle space map 60. That is, the localization of the audio information is set so that the sound is heard from the position of the closest avoidance target, with the current orientation (front direction) of the user 2 as a reference. In addition, the audio information is output so that the volume attenuates according to the distance to the avoidance target. Note that, in the present embodiment, notification by stereophonic sound is executed for the avoidance target closest to the user 2 as an example; which avoidance targets are subject to notification by stereophonic sound may be set arbitrarily. For example, notification by stereophonic sound may be executed for the avoidance target closest to the user 2 and the avoidance target second closest. When a plurality of avoidance targets are at the same distance from the user 2, notification by stereophonic sound may be executed for all of those objects.
This allows the user 2 to intuitively grasp in which direction, relative to the current front direction, the closest avoidance target exists. As a result, danger avoidance can be performed with high accuracy.
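The localization and distance attenuation of step 309 can be approximated, for example, by simple stereo amplitude panning based on the bearing of the nearest avoidance target relative to the user's front direction. This is an illustrative simplification with assumed conventions (constant-power panning, 1/d attenuation, pan of +1 meaning fully right); the embodiment itself only specifies stereophonic localization with distance-dependent volume.

import math

def stereo_gains(user_xy, user_yaw_rad, target_xy, ref_distance_m=1.0):
    """Return (left_gain, right_gain) for the nearest avoidance target.

    Bearing is the target direction relative to the user's front direction;
    constant-power panning and 1/d attenuation are illustrative choices."""
    dx, dy = target_xy[0] - user_xy[0], target_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - user_yaw_rad    # 0 = straight ahead
    pan = -math.sin(bearing)                       # +1 = fully right (assumed)
    attenuation = min(1.0, ref_distance_m / max(distance, 1e-6))
    angle = (pan + 1.0) * math.pi / 4.0            # map pan to 0..pi/2
    return attenuation * math.cos(angle), attenuation * math.sin(angle)

left, right = stereo_gains((0.0, 0.0), math.radians(90.0), (2.0, 2.0))
print(round(left, 2), round(right, 2))   # louder on the right-hand channel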
For example, suppose that, instead of notification by stereophonic sound, notification is executed in response to the detection of an avoidance target existing at a position in the front direction. In this case, for example, in the state shown in the obstacle space map 60 of FIG. 24A, an avoidance target existing at a position in the front direction of the user 2 can be detected and reported.
As shown in FIG. 24B, suppose that the user 2 changes the direction of travel to the right in order to avoid the reported avoidance target. In this case, when the previously detected avoidance target leaves the front direction of the user 2, the detection and notification end. After that, the user 2 may walk right beside the avoidance target while it is no longer being detected.
As a result, the user may collide with an edge of the obstacle 44 (48) or fall from an edge of the fall danger point 50.
As another example, when the sensor unit 6 (sensor body 21) is worn on the head or arm of the user 2 to search for avoidance targets, suppose that, from a state in which an avoidance target is detected in the front direction and notification is being performed, the user shakes the head or swings an arm, for example. In that case, the detection of the avoidance target may be lost and the notification may end. In that state, the user may move toward the avoidance target that was detected immediately before, and a collision with, or fall at, that avoidance target may occur. That is, it may become impossible to avoid an avoidance target once it has been detected.
 本実施形態では、過去に検出された回避対象物の位置情報を含む障害物空間マップ60が生成される。そして、ユーザ2から近距離にある回避対象物が、ユーザ2に対する方向が反映されるように立体音響により報知される。
 これにより、もしも回避対象物がユーザ2の正面方向から外れてしまった場合でも、当該回避対象物をユーザ2に報知することが可能となる。これにより、上記したような問題を解決することが可能となり、検出済みの回避対象物の回避成功率を向上させることが可能となる。
In this embodiment, an obstacle space map 60 is generated that includes position information of objects to be avoided that have been detected in the past. Then, an object to be avoided that is close to the user 2 is notified by stereophonic sound so that the direction toward the user 2 is reflected.
Thereby, even if the object to be avoided deviates from the front direction of the user 2, it is possible to notify the user 2 of the object to be avoided. This makes it possible to solve the above-mentioned problems and improve the success rate of avoiding the detected object to be avoided.
 第2の実施形態で説明した報知方法と、第3の実施形態で説明した報知方法とが併用されてもよい。すなわち、正面方向及び地面方向に対するリアルタイムの報知と、障害物空間マップ60を用いた立体音響による報知とがともに実行されてもよい。もちろん、第1の実施形態で説明した報知方法が併用されてもよい。 The notification method described in the second embodiment and the notification method described in the third embodiment may be used together. That is, both real-time notification for the front direction and ground direction and notification using stereophonic sound using the obstacle space map 60 may be performed. Of course, the notification method described in the first embodiment may be used in combination.
 Note that if the user 2 is also hearing-impaired (for example, hard of hearing, or using a hearing aid or sound collector), it is also effective to notify by vibration instead of stereophonic sound. That is, vibration is presented to the body part corresponding to the position of the closest object to be avoided, with the current orientation (front direction) of the user 2 as the reference. The strength of the vibration may also be attenuated according to the distance to the object. The user 2 may be allowed to set, as appropriate, whether notification is given by stereophonic sound or by vibration. Further, if the user 2 uses a hearing aid, a sound collector, or the like, processing such as hearing-aid processing may be applied to the stereophonic sound that is output.
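 As one possible realization of this vibration notification, the following sketch selects the body-mounted vibrator closest to the target bearing and attenuates the intensity with distance. The four-vibrator layout and the linear attenuation law are assumptions for the example; neither is prescribed above.

```python
# Assumed vibrator directions relative to the user's front, in degrees.
VIBRATORS = {"front": 0.0, "right": 90.0, "back": 180.0, "left": -90.0}

def select_vibrator(bearing_deg):
    """Pick the vibrator whose direction is angularly closest to the target."""
    return min(VIBRATORS,
               key=lambda k: abs(((bearing_deg - VIBRATORS[k]) + 180) % 360 - 180))

def intensity(distance_m, max_range_m=4.0):
    """Full strength at contact, fading linearly to zero at max_range_m."""
    return max(0.0, min(1.0, 1.0 - distance_m / max_range_m))

print(select_vibrator(-120.0), intensity(1.0))  # -> left 0.75
```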
 FIG. 25 is a schematic diagram showing another configuration example of the surrounding information notification system according to the present embodiment.
 In the example shown in FIG. 25, a surrounding information notification system 64 is realized by the controller 8 shown in FIG. 1 and a server device 63 arranged on a network 62 working together.
 The network 62 is constructed by, for example, the Internet or a wide-area communication network. Any other WAN (Wide Area Network), LAN (Local Area Network), or the like may be used, and the protocol for constructing the network 62 is not limited.
 The server device 63 has the hardware necessary for configuring a computer, such as a CPU, a ROM, a RAM, and an HDD. The server device 63 can be realized by any computer, such as a PC (Personal Computer).
 As shown in FIG. 25, in this embodiment, the map information generation unit 58 is implemented by the server device 63.
 The determination result by the situation determination unit 43 ("obstacle/fall danger point information" in the figure) is transmitted from the controller 8 to the server device 63 via the network 62. Although not shown, information on the self-position estimated by the self-position estimation unit 57 is also transmitted to the server device 63 via the network 62.
 In the server device 63, the history of determination results by the situation determination unit 43 is stored in an obstacle information DB. The map information generation unit 58 then generates an obstacle space map 60 as illustrated in FIG. 24. The obstacle space map 60 generated by the server device 63 is transmitted to the controller 8 via the network 62.
 Note that the DB may be constructed in a storage device within the server device 63, or in an external storage device that the server device 63 can access.
 Based on the received obstacle space map 60, the notification information generation unit 18 and the notification control unit 19 of the controller 8 execute notification using stereophonic sound.
 In this way, the surrounding information notification system 64 can be realized as a cloud system using cloud computing. By using a cloud system, even if the edge terminal on the user 2 side does not have high processing capacity, it is possible to realize notification using a wide-range obstacle space map 60.
 As a result, it is possible to realize an accessibility device that is lightweight and inexpensive, yet capable of reporting information about the surrounding environment with high accuracy.
 The server device 63 on the network 62 may integrate the "obstacle/fall danger point information" transmitted from a plurality of users 2 who use the surrounding information notification system 64. An obstacle space map 60 for each user 2 may then be generated based on the integrated "obstacle/fall danger point information".
 For example, the sensor main body 21 worn by a user 2 walking in a certain area performs a search in the front direction and the ground direction. "Obstacle/fall danger point information" relating to the detected objects to be avoided is transmitted from the controller 8 to the server device 63 via the network 62.
 Suppose that another user 2 is walking in the same area. The sensor main body 21 worn by the other user 2 likewise performs a search in the front direction and the ground direction, and "obstacle/fall danger point information" relating to the detected objects is transmitted from that user's controller 8 to the server device 63 via the network 62.
 The server device 63 integrates the information on the objects detected by the search of the user 2 with the information on the objects detected by the search of the other user 2, and stores it in the obstacle information DB. It then generates an obstacle space map 60 that includes the position information of the objects detected by both searches. The generated obstacle space map 60 is transmitted to both the user 2 and the other user 2 via the network 62.
 This makes it possible to send to the user 2 an obstacle space map that includes the position information of objects that were not detected by the user 2's own search but were detected by the searches of other users 2. As a result, the user 2 can be notified of the objects to be avoided in the vicinity with higher accuracy.
 Of course, the other users 2 can likewise be notified of nearby objects to be avoided with higher accuracy. For example, a case is assumed in which, by sharing the information on objects detected by the user 2 and by the user 2's family, information on objects to be avoided in the vicinity can be reported with higher accuracy.
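 A minimal sketch of the server-side integration described above is shown below. Grid snapping is used here as an assumed de-duplication rule for near-identical reports from different users; the actual DB schema and merging policy are not specified in the embodiment.

```python
from collections import defaultdict

CELL = 0.5  # metres; assumed grid resolution for merging duplicate reports

def cell_of(x, y):
    return (round(x / CELL), round(y / CELL))

obstacle_db = defaultdict(set)  # cell -> set of reporting user ids

def ingest(user_id, reports):
    """reports: iterable of (x, y) world positions of detected avoidance targets."""
    for x, y in reports:
        obstacle_db[cell_of(x, y)].add(user_id)

def map_for(user_id):
    """Every known cell is shared with every user, including cells
    detected only by other users' searches."""
    return sorted(obstacle_db.keys())

ingest("user_a", [(1.0, 2.0)])
ingest("user_b", [(1.1, 2.1), (5.0, 0.0)])  # near-duplicate merges into one cell
print(map_for("user_a"))                    # -> [(2, 4), (10, 0)]
```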
 (Fourth embodiment)
 FIG. 26 is a schematic diagram showing a configuration example of a surrounding information notification system according to the fourth embodiment.
 The surrounding information notification system 66 according to this embodiment is configured as a cloud system. That is, the surrounding information notification system 66 is realized by the controller 8 shown in FIG. 1 and the server device 63 arranged on the network 62 working together.
 The surrounding information notification system 66 also includes a guidance device 67 that is communicably connected to the controller 8 and the server device 63 via the network 62. The guidance device 67 has a display and is used as a remote terminal by an operator (a sender of route guidance or the like) 68 who provides route guidance or the like to the user 2. The route guidance can also be called a walking guidance notification.
 As shown in FIG. 26, the server device 63 stores the information on objects to be avoided ("obstacle/fall danger point information") transmitted from the user 2 in the obstacle information DB. A real-world map information DB storing map information of the real world is also constructed in the server device 63. For example, map information of various regions is acquired from a map server or the like that provides a map service on the network 62, and is stored in the real-world map information DB.
 FIG. 27 is a flowchart showing an example of notification of surrounding information according to this embodiment.
 Steps 401 to 408 in FIG. 27 are similar to steps 301 to 308 shown in FIG. 23.
 In step 408, the determination result by the situation determination unit 43 ("obstacle/fall danger point information") and the self-position information estimated by the self-position estimation unit 57 are transmitted from the controller 8 to the server device 63 via the network 62. The map information generation unit 58 in the server device 63 then generates the obstacle space map 60.
 In this embodiment, based on the position information of the user 2 in the real world, the map information generation unit 58 generates a real-space dangerous object map 69 in which information about the surrounding real world is added to the obstacle space map 60. As shown in FIG. 27, the obstacle space map 60 is linked with real-world map information, and a real-space dangerous object map 69 to which landmark information and the like are added is generated (step 409).
 The real-space dangerous object map 69 includes the real-world position information and attribute information of the obstacles 44 (48) and fall danger points 50 that are objects to be avoided (dangerous objects), the real-world position information of the user 2, and real-world information such as landmark information. The real-world information may include arbitrary geographic information such as place names and topography.
 The real-space dangerous object map 69 is an embodiment of the real surrounding map information according to the present technology.
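 As a rough sketch of how landmark information might be attached to map entries in step 409, the fragment below annotates each obstacle position with its closest known landmark. The landmark list, the distance threshold, and the output format are purely illustrative assumptions.

```python
import math

# Assumed landmark table, as might be drawn from the real-world map information DB.
landmarks = [("XX station entrance", (10.0, 3.0)),
             ("XX parking lot gate", (2.0, 8.0))]

def annotate(obstacle_xy, max_dist_m=15.0):
    """Return the obstacle position plus the closest landmark label,
    if one lies within max_dist_m; otherwise no landmark is attached."""
    name, pos = min(landmarks, key=lambda lm: math.dist(lm[1], obstacle_xy))
    if math.dist(pos, obstacle_xy) <= max_dist_m:
        return {"position": obstacle_xy, "near": name}
    return {"position": obstacle_xy, "near": None}

print(annotate((9.0, 2.0)))  # -> {'position': (9.0, 2.0), 'near': 'XX station entrance'}
```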
 The real-space dangerous object map 69 is transmitted to the guidance device 67 via the network 62 and displayed on the display of the guidance device 67. The display mode of the real-space dangerous object map 69 is not limited and may be set arbitrarily.
 For example, icons of objects to be avoided may be superimposed on real-world map information, and selecting an icon may display detailed information on that object. Alternatively, the display area of the display may be divided and information on the objects may be displayed in a list.
 In this embodiment, the operator 68 provides route guidance using the real-space dangerous object map 69. Route guidance that fuses information on dangerous objects to be avoided with real-world information becomes possible, for example: "There is a downward staircase at the entrance of XX station 5 m ahead; please proceed with caution", "A car is parked at the entrance of the XX parking lot; please stop for a moment", or "A large obstacle has fallen on the ground; please slow your walking pace". This allows the user 2 to obtain information that an able-bodied person would normally obtain, further improving safety in avoiding danger.
 On the user 2 side, the notification information generation unit 18 in the controller 8 receives guidance information including the contents of the route guidance from the operator 68 (step 410). Based on the received guidance information, the notification control unit 19 outputs the contents of the route guidance from the speaker 11.
 The configuration and method for transmitting the guidance information to the controller 8 on the user 2 side via the network 62 are not limited. For example, transmission of the guidance information can be realized by well-known techniques using a voice input device such as a microphone of the guidance device 67, a communication device, or the like.
 The route guidance by the operator 68 can also be said to be notification using real-world information based on the real-space dangerous object map 69. Therefore, the mechanism of the guidance device 67 for transmitting guidance information including the contents of the operator 68's route guidance can be said to function as a "notification unit" in the surrounding information notification system 66.
 Note that, instead of route guidance by the operator 68, automatic voice route guidance based on the real-space dangerous object map 69 may be performed. Such automatic voice route guidance is also included in notification using real-world information based on the real-space dangerous object map 69, and the mechanism that executes it functions as a "notification unit" in the surrounding information notification system 66.
 For example, the real-space dangerous object map 69 may be transmitted to the controller 8 on the user 2 side via the network 62, and the notification information generation unit 18 and the notification control unit 19 may perform route guidance or the like based on it.
 By applying the surrounding information notification system according to each of the above embodiments, the image of a company that creates smart accessibility products is enhanced, and effects such as raising the brand image and social value of a company aiming to contribute to society can also be expected.
 <Other embodiments>
 The present technology is not limited to the embodiments described above, and various other embodiments can be realized. Techniques that can be applied as appropriate to the surrounding information notification systems of the embodiments described above are listed below.
 FIG. 28 is a schematic diagram showing another example of a method of outputting sound according to distance.
 In the example shown in FIG. 28, in the "safe distance range", musical tone information such as a predetermined piece of music is played, for example a piece that the user 2 likes.
 When entering the "soft notification distance range", a detection notification sound indicating that an object to be avoided has been detected is faded in so as to be superimposed on the musical tone information. As the detection notification sound, for example, a discontinuous mid-range sound is output. In the example shown in FIG. 28, the mixing amount of the detection notification sound increases linearly, but this is not limiting, and various other fade-in controls may be adopted.
 Within the "soft notification distance range", as the distance to the object becomes shorter, the musical tone information and the detection notification sound are both output at the maximum reference level.
 When the distance to the object approaches 2 m, which marks the boundary between the "soft notification distance range" and the "danger notification distance range", both the musical tone information and the detection notification sound are faded out. In the example shown in FIG. 28, the mixing amount of the musical tone information is decreased along a curve (rapidly), while the mixing amount of the detection notification sound is decreased linearly (at a constant rate). Various such fade-out controls may be adopted.
 Simultaneously with the fade-out of the musical tone information and the detection notification sound, a danger notification sound indicating that danger is approaching is faded in. As the danger notification sound, for example, a discontinuous high-range sound is output.
 When the distance to the object enters the "danger notification distance range", the danger notification sound rises to the maximum reference level, strongly alerting the user 2.
 For example, when a buzzer sound is output whenever an object to be avoided is detected, the buzzer may keep sounding in crowded places such as a busy station concourse, inside an elevator, or while using an escalator, which can be unpleasant for the user 2.
 In the example shown in FIG. 28, the music information of the user's "favorite music" is used as the main sound source, and a detection notification sound such as a sonar sound is mixed in, with the mixing amount increased in a fading manner according to the approach distance to the object. When the object is extremely close, the main music and the detection notification sound are cross-faded with the danger notification sound. Such notification becomes possible, eliminating notifications that are unpleasant for the user 2.
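 As a rough illustration of this distance-dependent mixing, the sketch below computes gains for the three sources from the distance to the object. The 5 m upper boundary, the 2 m danger boundary, and the exact ramp shapes are assumptions for the example; as noted above, FIG. 28 permits various fade curves.

```python
def clamp(v):
    return max(0.0, min(1.0, v))

def mix_gains(d):
    """Return (music, detection_tone, danger_tone) gains in 0..1 for a
    distance d in metres. Soft range assumed 2-5 m, danger range below 2 m."""
    if d >= 5.0:                       # safe range: music only
        return 1.0, 0.0, 0.0
    if d >= 2.5:                       # soft range: detection tone fades in linearly
        return 1.0, clamp((5.0 - d) / 2.5), 0.0
    if d >= 2.0:                       # approaching 2 m: crossfade to danger tone
        t = (2.5 - d) / 0.5            # 0 -> 1 over the last 0.5 m
        return clamp((1.0 - t) ** 2), clamp(1.0 - t), clamp(t)  # music drops on a curve
    return 0.0, 0.0, 1.0               # danger range: danger tone at full level

for d in (6.0, 4.0, 2.2, 1.0):
    print(d, mix_gains(d))
```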
 By adding a determination of whether the detected object is a person (rather than a target of danger avoidance), unnecessary notifications in cases of extremely close proximity can be filtered out. For example, when an object in extremely close proximity is determined to be a person, based on the amplitude of the ultrasonic waves detected by the ultrasonic ranging sensor or on person recognition applied to the image information from the image sensor, a setting in which the surrounding information is not reported may be adopted.
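 A minimal sketch of this filtering rule follows. Here is_person stands in for person recognition on image data or on ultrasonic echo amplitude; it is an assumed hook, and the 0.8 m close-range threshold is likewise an assumption.

```python
def should_notify(distance_m, is_person, close_range_m=0.8):
    """Suppress notification only for persons at very close range,
    e.g. in a crowd or an elevator; everything else is still reported."""
    if distance_m <= close_range_m and is_person:
        return False
    return True

print(should_notify(0.5, True))   # person in a crowd -> no notification
print(should_notify(0.5, False))  # close non-person obstacle -> notify
```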
 When a plurality of ranging sensors of different types is arranged, or when a plurality of ranging sensors with different detection directions is arranged, the ranging channel that is needed may be switched automatically based on information such as environmental sounds acquired as surrounding information (train running sounds, crowd noise, etc.) or real-world map information. For example, in the configuration shown in FIG. 8, sensing in the front direction by the front-side ranging sensor 27 and sensing in the ground direction by the ground-side ranging sensor 28 may be switched automatically based on the surrounding information.
 For example, when the user 2 is moving on a station platform, processing that prioritizes sensing toward the ground is also possible.
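 One possible shape for such automatic channel switching is sketched below. The context labels, priorities, and sensing budget are illustrative assumptions; in practice the context could come from recognized environmental sounds or real-world map information, as described above.

```python
# Assumed priority table: which ranging channels matter most in each context.
PRIORITY = {
    "station_platform": ["ground", "front"],   # platform edge first
    "crowd":            ["front", "ground"],
    "default":          ["front", "ground"],
}

def active_channels(context, budget=1):
    """Return the ranging channels to keep active, highest priority first,
    under an assumed sensing budget."""
    return PRIORITY.get(context, PRIORITY["default"])[:budget]

print(active_channels("station_platform"))  # -> ['ground']
```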
 As a result of self-position estimation by SLAM or the like, the walking speed of the user 2 may be estimated from changes in the front-side distance information, and may be used, together with the variation time of the deviation of the ground-side distance information, to determine the size of a ground obstacle 48.
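 A minimal sketch of the size estimate implied above: while the ground-side distance deviates from its baseline, the user keeps walking over the obstacle, so its length along the walking direction is roughly the walking speed multiplied by the deviation duration. This is purely illustrative.

```python
def obstacle_length_m(walking_speed_mps, deviation_duration_s):
    """Length of a ground obstacle along the walking direction,
    estimated as speed x duration of the ground-side deviation."""
    return walking_speed_mps * deviation_duration_s

# e.g. 1.2 m/s walking speed, ground deviation lasting 0.5 s -> ~0.6 m
print(obstacle_length_m(1.2, 0.5))
```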
 By applying the surrounding information notification system according to the present technology, it is also possible to realize a device that supports a speech-based UI for visually impaired people.
 The surrounding information notification system according to the present technology can also be constructed for able-bodied people. For example, a front-side ranging sensor whose detection direction is the front direction and a back-side ranging sensor whose detection direction is the rear direction may be arranged. Detection of a suspicious person sneaking up from behind may then be performed based on the back-side distance information detected by the back-side ranging sensor.
 For example, when three ranging sensors are arranged so as to form three channels covering the front direction and the left and right directions, it is also possible to determine whether the user is walking straight based on the distance information of the left and right ranging channels.
 Alternatively, based on the distance information of each ranging channel, it may be determined whether the user has deviated from a predetermined route, has strayed from tactile paving blocks, and so on.
 Of course, machine learning algorithms may be used for these determinations; a simple rule-based check is sketched below.
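 A minimal rule-based sketch of the left/right straightness check: if one side keeps getting closer while the other recedes, the walking direction is veering. The thresholds are assumptions, and a machine learning model could replace this logic entirely.

```python
def drift(left_series, right_series, threshold_m=0.3):
    """left_series / right_series: recent distance samples (m) from the
    left and right ranging channels. Returns 'left', 'right', or 'straight'."""
    d_left = left_series[-1] - left_series[0]
    d_right = right_series[-1] - right_series[0]
    if d_left < -threshold_m and d_right > threshold_m:
        return "left"       # drifting toward the left-side wall
    if d_right < -threshold_m and d_left > threshold_m:
        return "right"
    return "straight"

print(drift([2.0, 1.8, 1.5], [2.0, 2.3, 2.6]))  # -> 'left'
```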
 For example, suppose that, with no variation in the deviation of the ground-side distance information, an object is detected at a constant distance on the front side based on the front-side distance information. In this case, the object is regarded as stationary, and the output of the alarm sound or the like is stopped. A setting like this, which prevents the alarm sound from sounding continuously, can also be adopted.
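 A minimal sketch of this suppression rule, with assumed tolerances: the alarm is muted only while both the front distance and the ground-side deviation are effectively constant.

```python
def alarm_enabled(front_series, ground_dev_series, tol_m=0.05):
    """front_series: recent front-side distances (m);
    ground_dev_series: recent ground-side deviations (m).
    Mute the alarm when both are static within tol_m."""
    front_static = max(front_series) - min(front_series) <= tol_m
    ground_static = max(ground_dev_series) - min(ground_dev_series) <= tol_m
    return not (front_static and ground_static)

print(alarm_enabled([1.50, 1.51, 1.50], [0.00, 0.00, 0.01]))  # -> False (muted)
```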
 By collecting the sounds of the surrounding environment while the user is walking, it is also possible to guide the user 2 toward the direction of a supporting person (for example, the user's family or a caregiver) using stereophonic sound or the like.
 When approaching an obstacle, the effects of NR (noise reduction) and NC (noise canceling) may be suppressed to make the surrounding environmental sounds easier to hear. For example, when a train approaches, NR and NC may be turned off so that the user 2 can fully hear the sound of the train and be alerted.
 The surrounding information notification system according to the present technology may also be provided to blindfolded able-bodied people at a theme park or the like where they can experience what it is like to be blind.
 When an able-bodied person uses a smartphone while walking, a surrounding information notification system according to the present technology may be constructed by attaching a ranging sensor or the like oriented in the same direction as the outward-facing camera.
 A surrounding information system according to the present technology may also be constructed for a vehicle, a drone, or the like, and the surrounding information may be reported to the operator or the like.
 FIG. 29 is a block diagram showing an example of a hardware configuration of a computer (information processing device) 70 that can be used to construct a surrounding information notification system according to the present technology.
 The computer 70 includes a CPU 71, a ROM 72, a RAM 73, an input/output interface 75, and a bus 74 that connects these to each other. A display unit 76, an input unit 77, a storage unit 78, a communication unit 79, a drive unit 80, and the like are connected to the input/output interface 75.
 The display unit 76 is a display device using, for example, liquid crystal, EL, or the like. The input unit 77 is, for example, a keyboard, a pointing device, a touch panel, or another operating device. When the input unit 77 includes a touch panel, the touch panel can be integrated with the display unit 76.
 The storage unit 78 is a nonvolatile storage device, such as an HDD, a flash memory, or another solid-state memory. The drive unit 80 is a device capable of driving a removable recording medium 81, such as an optical recording medium or a magnetic recording tape.
 The communication unit 79 is a modem, a router, or other communication equipment connectable to a LAN, a WAN, or the like, for communicating with other devices. The communication unit 79 may communicate using either wired or wireless communication, and is often used separately from the computer 70.
 Information processing by the computer 70 having the above hardware configuration is realized by cooperation between software stored in the storage unit 78, the ROM 72, or the like, and the hardware resources of the computer 70. Specifically, the information processing method according to the present technology is realized by loading a program constituting the software, stored in the ROM 72 or the like, into the RAM 73 and executing it.
 The program is installed on the computer 70 via, for example, the recording medium 81. Alternatively, the program may be installed on the computer 70 via a global network or the like. Any other computer-readable non-transitory storage medium may be used.
 The information processing method (surrounding information notification method) and the program according to the present technology may be executed, and the information processing device according to the present technology may be constructed, by a plurality of computers communicably connected via a network or the like working together.
 That is, the information processing method and the program according to the present technology can be executed not only in a computer system configured by a single computer, but also in a computer system in which a plurality of computers operates in conjunction.
 Note that, in the present disclosure, a system means a collection of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules is housed in one housing, are both systems.
 Execution of the information processing method and the program according to the present technology by a computer system includes both the case where, for example, the acquisition of surrounding information (acquisition of distance information, situation determination), the generation of notification information (generation of audio information, generation of vibration information), the notification control, and the like are executed by a single computer, and the case where each process is executed by a different computer. Execution of each process by a given computer includes causing another computer to execute part or all of the process and acquiring the result.
 That is, the information processing method and the program according to the present technology can also be applied to a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
 The configurations of the surrounding information notification system, the sensor unit, the information output unit, the controller, and the like described with reference to the drawings, as well as the acquisition of surrounding information (acquisition of distance information, situation determination), the generation of notification information (generation of audio information, generation of vibration information), the processing flow of notification control, and the like, are merely embodiments and can be modified arbitrarily without departing from the spirit of the present technology. That is, any other configurations, algorithms, and the like for implementing the present technology may be adopted.
 In the present disclosure, words such as "substantially", "almost", and "approximately" are used as appropriate to facilitate understanding of the description. However, no clear difference is defined between cases where these words are used and cases where they are not.
 That is, in the present disclosure, concepts that define shape, size, positional relationship, state, and the like, such as "central", "middle", "uniform", "equal", "same", "orthogonal", "parallel", "symmetrical", "extending", "axial", "columnar", "cylindrical", "ring-shaped", and "annular", are concepts that include "substantially central", "substantially middle", "substantially uniform", "substantially equal", "substantially the same", "substantially orthogonal", "substantially parallel", "substantially symmetrical", "substantially extending", "substantially axial", "substantially columnar", "substantially cylindrical", "substantially ring-shaped", "substantially annular", and so on.
 For example, states included within a predetermined range (for example, a range of ±10%) based on "perfectly central", "perfectly middle", "perfectly uniform", "perfectly equal", "perfectly the same", "perfectly orthogonal", "perfectly parallel", "perfectly symmetrical", "perfectly extending", "perfectly axial", "perfectly columnar", "perfectly cylindrical", "perfectly ring-shaped", "perfectly annular", and the like are also included.
 Therefore, even when words such as "substantially", "almost", and "approximately" are not added, concepts that could be expressed by adding such words may be included. Conversely, a state expressed with "substantially", "almost", "approximately", or the like does not necessarily exclude the perfect state.
 In the present disclosure, expressions using "than", such as "greater than A" and "smaller than A", comprehensively include both the concept that includes the case of being equal to A and the concept that does not. For example, "greater than A" is not limited to excluding "equal to A" and also includes "A or more". Similarly, "smaller than A" is not limited to "less than A" and also includes "A or less".
 When implementing the present technology, specific settings and the like may be adopted as appropriate from the concepts included in "greater than A" and "smaller than A" so that the effects described above are exhibited.
 It is also possible to combine at least two of the characteristic features according to the present technology described above. That is, the various characteristic features described in each embodiment may be combined arbitrarily without distinction between the embodiments. The various effects described above are merely examples and are not limiting, and other effects may also be exhibited.
 <Application examples>
 The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility vehicle, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor).
 FIG. 30 is a block diagram showing a schematic configuration example of a vehicle control system 7000, which is an example of a moving body control system to which the technology according to the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected via a communication network 7010. In the example shown in FIG. 30, the vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, a vehicle exterior information detection unit 7400, a vehicle interior information detection unit 7500, and an integrated control unit 7600. The communication network 7010 connecting these control units may be an in-vehicle communication network conforming to an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
 Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer or parameters used in various calculations, and a drive circuit that drives the devices to be controlled. Each control unit includes a network I/F for communicating with the other control units via the communication network 7010, as well as a communication I/F for communicating with devices or sensors inside and outside the vehicle by wired or wireless communication. In FIG. 30, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning unit 7640, a beacon receiving unit 7650, an in-vehicle device I/F 7660, an audio/image output unit 7670, an in-vehicle network I/F 7680, and a storage unit 7690 are illustrated as the functional configuration of the integrated control unit 7600. The other control units similarly include a microcomputer, a communication I/F, a storage unit, and the like.
 The drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 7100 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like. The drive system control unit 7100 may also function as a control device such as an ABS (Antilock Brake System) or ESC (Electronic Stability Control).
 A vehicle state detection unit 7110 is connected to the drive system control unit 7100. The vehicle state detection unit 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the amount of operation of the accelerator pedal, the amount of operation of the brake pedal, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like. The drive system control unit 7100 performs arithmetic processing using signals input from the vehicle state detection unit 7110, and controls the internal combustion engine, the drive motor, the electric power steering device, the brake device, and the like.
 The body system control unit 7200 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, may be input to the body system control unit 7200. The body system control unit 7200 accepts the input of these radio waves or signals, and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
 The battery control unit 7300 controls a secondary battery 7310, which is the power supply source for the drive motor, according to various programs. For example, information such as the battery temperature, the battery output voltage, or the remaining battery capacity is input to the battery control unit 7300 from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and performs temperature adjustment control of the secondary battery 7310 or control of a cooling device or the like provided in the battery device.
 The vehicle exterior information detection unit 7400 detects information about the outside of the vehicle in which the vehicle control system 7000 is mounted. For example, at least one of an imaging unit 7410 and a vehicle exterior information detection section 7420 is connected to the vehicle exterior information detection unit 7400. The imaging unit 7410 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The vehicle exterior information detection section 7420 includes, for example, at least one of an environment sensor for detecting the current weather or meteorological conditions, and a surrounding information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle equipped with the vehicle control system 7000.
 The environment sensor may be, for example, at least one of a raindrop sensor that detects rain, a fog sensor that detects fog, a sunshine sensor that detects the degree of sunshine, and a snow sensor that detects snowfall. The surrounding information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device. The imaging unit 7410 and the vehicle exterior information detection section 7420 may each be provided as independent sensors or devices, or may be provided as a device in which a plurality of sensors or devices is integrated.
 Here, FIG. 31 shows an example of the installation positions of the imaging units 7410 and the vehicle exterior information detection sections 7420. Imaging units 7910, 7912, 7914, 7916, and 7918 are provided, for example, at at least one of the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield inside the vehicle 7900. The imaging unit 7910 provided on the front nose and the imaging unit 7918 provided at the upper part of the windshield inside the vehicle mainly acquire images in front of the vehicle 7900. The imaging units 7912 and 7914 provided on the side mirrors mainly acquire images of the sides of the vehicle 7900. The imaging unit 7916 provided on the rear bumper or the back door mainly acquires images behind the vehicle 7900. The imaging unit 7918 provided at the upper part of the windshield inside the vehicle is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
 Note that FIG. 31 shows an example of the imaging ranges of the respective imaging units 7910, 7912, 7914, and 7916. The imaging range a indicates the imaging range of the imaging unit 7910 provided on the front nose, the imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided on the side mirrors, respectively, and the imaging range d indicates the imaging range of the imaging unit 7916 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, a bird's-eye view image of the vehicle 7900 viewed from above can be obtained.
 The vehicle exterior information detection sections 7920, 7922, 7924, 7926, 7928, and 7930 provided at the front, rear, sides, and corners of the vehicle 7900 and at the upper part of the windshield inside the vehicle may be, for example, ultrasonic sensors or radar devices. The vehicle exterior information detection sections 7920, 7926, and 7930 provided on the front nose, the rear bumper, and the back door of the vehicle 7900 and at the upper part of the windshield inside the vehicle may be, for example, LIDAR devices. These vehicle exterior information detection sections 7920 to 7930 are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like.
 Returning to FIG. 30, the description continues. The vehicle exterior information detection unit 7400 causes the imaging unit 7410 to capture an image of the outside of the vehicle and receives the captured image data. The vehicle exterior information detection unit 7400 also receives detection information from the connected vehicle exterior information detection section 7420. When the vehicle exterior information detection section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 causes it to transmit ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves. Based on the received information, the vehicle exterior information detection unit 7400 may perform object detection processing or distance detection processing for people, vehicles, obstacles, signs, characters on the road surface, and the like. Based on the received information, the vehicle exterior information detection unit 7400 may also perform environment recognition processing for recognizing rainfall, fog, road surface conditions, and the like, and may calculate the distance to an object outside the vehicle.
 Based on the received image data, the vehicle exterior information detection unit 7400 may also perform image recognition processing or distance detection processing for recognizing people, vehicles, obstacles, signs, characters on the road surface, and the like. The vehicle exterior information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and may synthesize image data captured by different imaging units 7410 to generate a bird's-eye view image or a panoramic image. The vehicle exterior information detection unit 7400 may also perform viewpoint conversion processing using image data captured by different imaging units 7410.
 The vehicle interior information detection unit 7500 detects information about the inside of the vehicle. For example, a driver state detection section 7510 that detects the state of the driver is connected to the vehicle interior information detection unit 7500. The driver state detection section 7510 may include a camera that images the driver, a biological sensor that detects biological information of the driver, a microphone that collects sound inside the vehicle, and the like. The biological sensor is provided, for example, on the seat surface, the steering wheel, or the like, and detects biological information of a passenger sitting on a seat or of the driver holding the steering wheel. Based on the detection information input from the driver state detection section 7510, the vehicle interior information detection unit 7500 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off. The vehicle interior information detection unit 7500 may also perform processing such as noise canceling on the collected audio signal.
 The integrated control unit 7600 controls the overall operation within the vehicle control system 7000 according to various programs. An input unit 7800 is connected to the integrated control unit 7600. The input unit 7800 is realized by a device that can be operated by a passenger, such as a touch panel, buttons, a microphone, a switch, or a lever. Data obtained by voice recognition of sound input through the microphone may be input to the integrated control unit 7600. The input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) that supports the operation of the vehicle control system 7000. The input unit 7800 may also be, for example, a camera, in which case the passenger can input information by gesture. Alternatively, data obtained by detecting the movement of a wearable device worn by the passenger may be input. Furthermore, the input unit 7800 may include, for example, an input control circuit that generates an input signal based on the information input by the passenger or the like using the input unit 7800 and outputs it to the integrated control unit 7600. By operating the input unit 7800, the passenger or the like inputs various data to the vehicle control system 7000 and instructs processing operations.
 The storage unit 7690 may include a ROM (Read Only Memory) that stores various programs executed by the microcomputer, and a RAM (Random Access Memory) that stores various parameters, calculation results, sensor values, and the like. The storage unit 7690 may also be realized by a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
 The general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in the external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark). The general-purpose communication I/F 7620 may connect, for example via a base station or an access point, to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network). The general-purpose communication I/F 7620 may also connect to a terminal existing near the vehicle (for example, a terminal of the driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
 The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol developed for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of IEEE 802.11p for the lower layer and IEEE 1609 for the upper layer, DSRC (Dedicated Short Range Communications), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
 The positioning section 7640 performs positioning by receiving, for example, GNSS signals from GNSS (Global Navigation Satellite System) satellites (for example, GPS signals from GPS (Global Positioning System) satellites), and generates position information including the latitude, longitude, and altitude of the vehicle. Note that the positioning section 7640 may identify the current position by exchanging signals with a wireless access point, or may acquire position information from a terminal with a positioning function, such as a mobile phone, PHS, or smartphone.
 The beacon receiving section 7650 receives, for example, radio waves or electromagnetic waves transmitted from wireless stations installed on the road, and acquires information such as the current position, traffic congestion, road closures, or required travel time. Note that the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above.
 The in-vehicle device I/F 7660 is a communication interface that mediates connections between the microcomputer 7610 and various in-vehicle devices 7760 present in the vehicle. The in-vehicle device I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB). The in-vehicle device I/F 7660 may also establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (and, if necessary, a cable) not shown in the figure. The in-vehicle devices 7760 may include, for example, at least one of a mobile device or wearable device owned by a passenger, and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a route to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
 The in-vehicle network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
 The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs, based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the acquired information about the inside and outside of the vehicle, and may output control commands to the drive system control unit 7100. For example, the microcomputer 7610 may perform cooperative control aimed at realizing ADAS (Advanced Driver Assistance System) functions, including vehicle collision avoidance or impact mitigation, following travel based on inter-vehicle distance, vehicle speed maintenance, vehicle collision warning, and vehicle lane departure warning. The microcomputer 7610 may also perform cooperative control aimed at automated driving and the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the acquired information about the surroundings of the vehicle.
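 As one concrete illustration of such a control-target computation (a minimal sketch, not taken from this publication: the time-to-collision heuristic, the thresholds, and the function name are all assumptions), a following-distance controller might map range and closing speed to a braking target:

```python
# Hedged sketch of an ADAS-style control-target computation using a
# time-to-collision (TTC) heuristic; thresholds and names are illustrative only.

def deceleration_target(distance_m: float, closing_speed_mps: float,
                        max_decel_mps2: float = 6.0) -> float:
    """Return a braking deceleration target [m/s^2] from range and closing speed."""
    if closing_speed_mps <= 0.0:        # gap is opening or steady: no braking
        return 0.0
    ttc_s = distance_m / closing_speed_mps
    if ttc_s > 4.0:                     # comfortable margin
        return 0.0
    if ttc_s > 2.0:                     # precautionary, gentle braking
        return 1.5
    # urgent range: scale toward the actuator limit as TTC shrinks
    return min(max_decel_mps2, 1.5 + max_decel_mps2 * (2.0 - ttc_s) / 2.0)

print(deceleration_target(20.0, 10.0))  # TTC = 2.0 s -> 1.5 m/s^2
```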
 Based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, the microcomputer 7610 may generate three-dimensional distance information between the vehicle and surrounding objects such as structures and people, and may create local map information including information about the surroundings of the current position of the vehicle. The microcomputer 7610 may also predict dangers such as a vehicle collision, the approach of a pedestrian or the like, or entry into a closed road based on the acquired information, and generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
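 A minimal sketch of how predicted dangers could be turned into such a warning signal follows; the hazard labels mirror the examples in the text, while the flag structure and names are hypothetical:

```python
# Hedged sketch: mapping predicted hazards to warning outputs (sound and lamp).
# The hazard categories follow the text above; the WarningSignal structure is
# an assumption made for illustration.

from dataclasses import dataclass

@dataclass
class WarningSignal:
    sound: bool  # drive the warning buzzer
    lamp: bool   # light the warning lamp

def make_warning(hazard: str) -> WarningSignal:
    if hazard == "vehicle_collision":
        return WarningSignal(sound=True, lamp=True)   # most urgent: both outputs
    if hazard in ("pedestrian_proximity", "closed_road_entry"):
        return WarningSignal(sound=False, lamp=True)  # advisory: lamp only
    return WarningSignal(sound=False, lamp=False)     # no predicted danger

print(make_warning("vehicle_collision"))  # WarningSignal(sound=True, lamp=True)
```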
 The audio/image output section 7670 transmits an output signal of at least one of audio and images to an output device capable of visually or audibly notifying information to passengers of the vehicle or to the outside of the vehicle. In the example of FIG. 30, an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as output devices. The display section 7720 may include, for example, at least one of an on-board display and a head-up display. The display section 7720 may have an AR (Augmented Reality) display function. The output device may be a device other than these, such as headphones, a wearable device such as a glasses-type display worn by a passenger, a projector, or a lamp. When the output device is a display device, the display device visually displays results obtained from various processes performed by the microcomputer 7610 or information received from other control units in various formats such as text, images, tables, and graphs. When the output device is an audio output device, the audio output device converts an audio signal consisting of reproduced audio data, acoustic data, or the like into an analog signal and audibly outputs it.
 Note that in the example shown in FIG. 30, at least two control units connected via the communication network 7010 may be integrated into one control unit. Alternatively, an individual control unit may be composed of a plurality of control units. Furthermore, the vehicle control system 7000 may include another control unit not shown in the figure. In the above description, some or all of the functions performed by any one of the control units may be given to another control unit. That is, as long as information is transmitted and received via the communication network 7010, predetermined arithmetic processing may be performed by any of the control units. Similarly, a sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
 Note that a computer program for realizing each function of the information processing device according to the present embodiment described with reference to FIG. 2 and the like can be implemented in any of the control units or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. The above computer program may also be distributed, for example, via a network without using a recording medium.
 In the vehicle control system 7000 described above, the information processing device according to the present embodiment described with reference to FIG. 2 and the like can be applied to the integrated control unit 7600 of the application example shown in FIG. 30.
 Furthermore, at least some of the components of the information processing device described with reference to FIG. 2 and the like may be realized in a module (for example, an integrated circuit module configured with one die) for the integrated control unit 7600 shown in FIG. 30. Alternatively, the information processing device described with reference to FIG. 2 and the like may be realized by a plurality of control units of the vehicle control system 7000 shown in FIG. 30.
Note that the present technology can also adopt the following configurations.
(1)
An information processing device, comprising:
a peripheral information acquisition unit that acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors;
an audio information generation unit that generates, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generates, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
a notification control unit that causes the first audio information and the second audio information to be output together.
(2) The information processing device according to (1), wherein
the one or more object detection sensors include a first object detection sensor and a second object detection sensor, and
the peripheral information acquisition unit acquires the first peripheral information based on the detection result of the first object detection sensor, and acquires the second peripheral information based on the detection result of the second object detection sensor.
(3) The information processing device according to (2), wherein
the first object detection sensor is a first distance measurement sensor that operates by a first method, and
the second object detection sensor is a second distance measurement sensor that operates by a second method different from the first method.
(4) The information processing device according to (2) or (3), wherein
the first object detection sensor is a first distance measurement sensor arranged with a first direction as its detection direction, and
the second object detection sensor is a second distance measurement sensor arranged with a second direction different from the first direction as its detection direction.
(5) The information processing device according to (1), wherein
the one or more object detection sensors are sensors that generate image information, and
the peripheral information acquisition unit acquires the first peripheral information based on information of one pixel region of the image information, and acquires the second peripheral information based on information of another pixel region of the image information.
(6) The information processing device according to (1), wherein
the one or more object detection sensors are sensors that generate image information, and
the peripheral information acquisition unit acquires, as the first peripheral information, information regarding a first type of object detected based on the image information, and acquires, as the second peripheral information, information regarding a second type of object, different from the first type, detected based on the image information.
(7) The information processing device according to (2), wherein
the peripheral information acquisition unit generates integrated peripheral information based on the first peripheral information and the second peripheral information.
(8) The information processing device according to (7), wherein
the first distance measurement sensor operates by an optical laser method,
the second distance measurement sensor operates by an ultrasonic method, and
the peripheral information acquisition unit generates the integrated peripheral information based on the stability of detection by the first distance measurement sensor and the stability of detection by the second distance measurement sensor.
(9) The information processing device according to (8), wherein
the peripheral information acquisition unit generates the integrated peripheral information indicating that a light-transmitting member or a light-absorbing member exists in the surroundings when the stability of detection by the first distance measurement sensor is low and the stability of detection by the second distance measurement sensor is high.
(10) The information processing device according to (9), wherein
the peripheral information acquisition unit further generates, as the integrated peripheral information, information regarding at least one of the material and the object type of the light-transmitting member or the light-absorbing member, based on hardness information acquired as the second peripheral information based on the detection result of the second distance measurement sensor.
(11) The information processing device according to any one of (1) to (10), wherein
the audio information generation unit determines, based on the first peripheral information, whether or not to output the first audio information, and restricts output of the first audio information by the notification control unit when it determines that the first audio information is not to be output.
(12) The information processing device according to any one of (1) to (11), wherein
the first audio information is first musical tone information constituting a predetermined musical piece, and
the second audio information is second musical tone information constituting the predetermined musical piece.
(13) The information processing device according to any one of (1) to (12), wherein
the audio information generation unit generates the first audio information by controlling a musical tone parameter of the first musical tone data based on the first peripheral information.
(14) The information processing device according to (13), wherein
the musical tone parameter includes at least one of volume, frequency, pitch, speed, BPM, and tempo.
(15) The information processing device according to (13) or (14), wherein
the first peripheral information includes distance information, and
the audio information generation unit generates the first audio information by controlling the musical tone parameter based on the distance information.
(16) The information processing device according to (13) or (14), wherein
the audio information generation unit controls the localization of the first audio information based on the detection direction of the first distance measurement sensor.
(17)
An information processing method executed by a computer system, the method comprising:
acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors;
generating, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generating, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
causing the first audio information and the second audio information to be output together.
(18)
A program that causes a computer system to execute the steps of:
acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors;
generating, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generating, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
causing the first audio information and the second audio information to be output together.
(19)
An information processing system, comprising:
one or more object detection sensors;
a peripheral information acquisition unit that acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of the one or more object detection sensors;
an audio information generation unit that generates, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generates, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
a notification control unit that causes the first audio information and the second audio information to be output together.
(20) The information processing system according to (19), further comprising
an information output unit that includes an audio output unit that outputs the first audio information and the second audio information, and that outputs information to a user.
(21) The information processing device according to any one of (1) to (16), wherein
the one or more object detection sensors are arranged on a device worn by a user or a device held by a user.
(22) The information processing device according to any one of (1) to (16) and (21), further comprising
the one or more object detection sensors.
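 To make the configurations above more tangible, a few sketches follow; none of them are prescribed by this publication, and every name in them is an illustrative assumption. First, the two-channel pipeline of configuration (1), in which two peripheral readings drive two musical-tone channels that are output together:

```python
# Hedged sketch of the configuration (1) pipeline. The tone-data layout, the
# distance-to-volume mapping, and the "mixer" print are assumptions.

def generate_audio(tone_name: str, distance_m: float, max_range_m: float = 5.0) -> dict:
    # Nearer obstacle -> louder tone; volume clamped to [0, 1].
    volume = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return {"tone": tone_name, "volume": round(volume, 2)}

def notify(first: dict, second: dict) -> None:
    # Stand-in for an audio mixer: both channels are output together.
    print("playing together:", first, second)

# First/second peripheral information, e.g. from a front-facing and a
# ground-facing detector (the values are made up for the demo).
front_distance_m, ground_distance_m = 1.5, 0.4
notify(generate_audio("melody", front_distance_m),
       generate_audio("bass", ground_distance_m))
```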
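 For configurations (5) and (6), a single image sensor yields two streams of peripheral information. Below is a sketch of the pixel-region variant, assuming a depth image and an arbitrary left/right split:

```python
# Hedged sketch for configurations (5)-(6): nearest obstacle distance per
# pixel region of one depth image. The left/right split is an arbitrary choice.

import numpy as np

def split_pixel_regions(depth_image: np.ndarray) -> tuple[float, float]:
    """Return the nearest distance in each half of the frame."""
    h, w = depth_image.shape
    left, right = depth_image[:, : w // 2], depth_image[:, w // 2 :]
    return float(left.min()), float(right.min())

frame = np.full((4, 6), 5.0)  # 4x6 depth map, 5 m everywhere
frame[1, 1] = 1.2             # obstacle in the left half
frame[2, 5] = 2.5             # obstacle in the right half
print(split_pixel_regions(frame))  # (1.2, 2.5)
```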
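 For configurations (8) to (10), the key step is comparing detection stability between the two ranging methods: a laser that ranges unstably while the ultrasonic reading stays steady suggests a light-transmitting member such as glass, or a light-absorbing surface. The variance-based stability test and its threshold are assumptions:

```python
# Hedged sketch of the stability comparison in configurations (8)-(10).
# "Stable detection" is approximated as a low-variance window of readings.

from statistics import pvariance

def is_stable(samples: list[float], max_var: float = 0.01) -> bool:
    return len(samples) >= 2 and pvariance(samples) <= max_var

def integrate(laser: list[float], ultrasonic: list[float]) -> str:
    if not is_stable(laser) and is_stable(ultrasonic):
        # Ultrasound sees a surface the laser cannot range reliably.
        return "light-transmitting or light-absorbing member nearby"
    if is_stable(laser):
        return f"object at {sum(laser) / len(laser):.2f} m"
    return "no stable detection"

print(integrate([1.2, 3.8, 0.4], [0.98, 1.00, 1.01]))
# -> light-transmitting or light-absorbing member nearby
```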
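 For configurations (13) to (16), distance can drive musical-tone parameters such as volume and tempo, and the detection direction of the sensor can drive the localization of the tone. The mapping curves and the constant-power panning model are assumptions:

```python
# Hedged sketch for configurations (13)-(16): distance -> volume/BPM, and
# detection direction (azimuth) -> stereo localization.

import math

def tone_parameters(distance_m: float, max_range_m: float = 5.0) -> tuple[float, float]:
    closeness = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    volume = closeness              # nearer -> louder
    bpm = 60 + 120 * closeness      # nearer -> faster pulses
    return volume, bpm

def stereo_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power pan: -90 deg = full left, +90 deg = full right."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

print(tone_parameters(1.0))  # (0.8, 156.0)
print(stereo_gains(0.0))     # centered: (~0.707, ~0.707)
```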
 D … distance from the user to the measurement point
 P … measurement point
 1, 41, 54, 64, 66 … peripheral information notification system
 2 … user
 3 … ground
 5 … object
 8 … controller
 10 … object detection sensor
 17 … peripheral information acquisition unit
 18 … notification information generation unit
 19 … notification control unit
 25 … laser distance measurement sensor
 26 … ultrasonic distance measurement sensor
 27 … front-side distance measurement sensor
 28 … ground-side distance measurement sensor
 30 … first peripheral information
 31 … second peripheral information
 32, 33 … audio signal processing unit
 34 … speech synthesis processing unit
 35 … audio output unit
 38a, 38b … pixel region
 42 … distance information acquisition unit
 43 … situation determination unit
 44 … obstacle
 48 … ground obstacle
 50 … fall danger point
 58 … map information generation unit
 60 … obstacle space map
 63 … server device
 68 … operator
 69 … real-space dangerous object map
 70 … computer

Claims (20)

  1. An information processing device, comprising:
     a peripheral information acquisition unit that acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors;
     an audio information generation unit that generates, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generates, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
     a notification control unit that causes the first audio information and the second audio information to be output together.
  2. The information processing device according to claim 1, wherein
     the one or more object detection sensors include a first object detection sensor and a second object detection sensor, and
     the peripheral information acquisition unit acquires the first peripheral information based on the detection result of the first object detection sensor, and acquires the second peripheral information based on the detection result of the second object detection sensor.
  3. The information processing device according to claim 2, wherein
     the first object detection sensor is a first distance measurement sensor that operates by a first method, and
     the second object detection sensor is a second distance measurement sensor that operates by a second method different from the first method.
  4. The information processing device according to claim 2, wherein
     the first object detection sensor is a first distance measurement sensor arranged with a first direction as its detection direction, and
     the second object detection sensor is a second distance measurement sensor arranged with a second direction different from the first direction as its detection direction.
  5. The information processing device according to claim 1, wherein
     the one or more object detection sensors are sensors that generate image information, and
     the peripheral information acquisition unit acquires the first peripheral information based on information of one pixel region of the image information, and acquires the second peripheral information based on information of another pixel region of the image information.
  6. The information processing device according to claim 1, wherein
     the one or more object detection sensors are sensors that generate image information, and
     the peripheral information acquisition unit acquires, as the first peripheral information, information regarding a first type of object detected based on the image information, and acquires, as the second peripheral information, information regarding a second type of object, different from the first type, detected based on the image information.
  7. The information processing device according to claim 2, wherein
     the peripheral information acquisition unit generates integrated peripheral information based on the first peripheral information and the second peripheral information.
  8. The information processing device according to claim 7, wherein
     the first distance measurement sensor operates by an optical laser method,
     the second distance measurement sensor operates by an ultrasonic method, and
     the peripheral information acquisition unit generates the integrated peripheral information based on the stability of detection by the first distance measurement sensor and the stability of detection by the second distance measurement sensor.
  9. The information processing device according to claim 8, wherein
     the peripheral information acquisition unit generates the integrated peripheral information indicating that a light-transmitting member or a light-absorbing member exists in the surroundings when the stability of detection by the first distance measurement sensor is low and the stability of detection by the second distance measurement sensor is high.
  10. The information processing device according to claim 9, wherein
     the peripheral information acquisition unit further generates, as the integrated peripheral information, information regarding at least one of the material and the object type of the light-transmitting member or the light-absorbing member, based on hardness information acquired as the second peripheral information based on the detection result of the second distance measurement sensor.
  11. The information processing device according to claim 1, wherein
     the audio information generation unit determines, based on the first peripheral information, whether or not to output the first audio information, and restricts output of the first audio information by the notification control unit when it determines that the first audio information is not to be output.
  12. The information processing device according to claim 1, wherein
     the first audio information is first musical tone information constituting a predetermined musical piece, and
     the second audio information is second musical tone information constituting the predetermined musical piece.
  13. The information processing device according to claim 1, wherein
     the audio information generation unit generates the first audio information by controlling a musical tone parameter of the first musical tone data based on the first peripheral information.
  14. The information processing device according to claim 13, wherein
     the musical tone parameter includes at least one of volume, frequency, pitch, speed, BPM, and tempo.
  15. The information processing device according to claim 13, wherein
     the first peripheral information includes distance information, and
     the audio information generation unit generates the first audio information by controlling the musical tone parameter based on the distance information.
  16. The information processing device according to claim 13, wherein
     the audio information generation unit controls the localization of the first audio information based on the detection direction of the first distance measurement sensor.
  17. An information processing method executed by a computer system, the method comprising:
     acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors;
     generating, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generating, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
     causing the first audio information and the second audio information to be output together.
  18. A program that causes a computer system to execute the steps of:
     acquiring first peripheral information and second peripheral information regarding the surrounding environment based on detection results of one or more object detection sensors;
     generating, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generating, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
     causing the first audio information and the second audio information to be output together.
  19. An information processing system, comprising:
     one or more object detection sensors;
     a peripheral information acquisition unit that acquires first peripheral information and second peripheral information regarding the surrounding environment based on detection results of the one or more object detection sensors;
     an audio information generation unit that generates, based on the first peripheral information, first audio information using first musical tone data for notifying the first peripheral information, and generates, based on the second peripheral information, second audio information using second musical tone data for notifying the second peripheral information; and
     a notification control unit that causes the first audio information and the second audio information to be output together.
  20. The information processing system according to claim 19, further comprising
     an information output unit that includes an audio output unit that outputs the first audio information and the second audio information, and that outputs information to a user.
PCT/JP2023/019250 2022-06-15 2023-05-24 Information processing device, information processing method, program, and information processing system WO2023243338A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022096704 2022-06-15
JP2022-096704 2022-06-15

Publications (1)

Publication Number Publication Date
WO2023243338A1 true WO2023243338A1 (en) 2023-12-21

Family

ID=89191174

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/019250 WO2023243338A1 (en) 2022-06-15 2023-05-24 Information processing device, information processing method, program, and information processing system

Country Status (1)

Country Link
WO (1) WO2023243338A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014113410A (en) * 2012-12-12 2014-06-26 Yamaguchi Univ Road surface state determination and notification device
EP3157233A1 (en) * 2015-10-13 2017-04-19 Thomson Licensing Handheld device, method for operating the handheld device and computer program
US20190282433A1 (en) * 2016-10-14 2019-09-19 United States Government As Represented By The Department Of Veterans Affairs Sensor based clear path robot guide
WO2019225192A1 (en) * 2018-05-24 2019-11-28 ソニー株式会社 Information processing device and information processing method
JP2020508440A (en) * 2017-02-21 2020-03-19 ブラスウェイト ヘイリーBRATHWAITE,Haley Personal navigation system

Similar Documents

Publication Publication Date Title
US7650001B2 (en) Dummy sound generating apparatus and dummy sound generating method and computer product
US9159236B2 (en) Presentation of shared threat information in a transportation-related context
US11237241B2 (en) Microphone array for sound source detection and location
WO2020203657A1 (en) Information processing device, information processing method, and information processing program
US11590985B2 (en) Information processing device, moving body, information processing method, and program
JPWO2020100585A1 (en) Information processing equipment, information processing methods, and programs
US20200385025A1 (en) Information processing apparatus, mobile apparatus, information processing method, and program
US20220017093A1 (en) Vehicle control device, vehicle control method, program, and vehicle
WO2019039281A1 (en) Information processing device, information processing method, program and mobile body
JP2011162055A (en) False running noise generator and false running noise generation system
US20220018932A1 (en) Calibration apparatus, calibration method, program, and calibration system and calibration target
JP2019045364A (en) Information processing apparatus, self-position estimation method, and program
WO2020189156A1 (en) Information processing device, information processing method, movement control device, and movement control method
WO2021187039A1 (en) Information processing device, information processing method, and computer program
US20210279477A1 (en) Image processing apparatus, image processing method, and image processing system
US11904893B2 (en) Operating a vehicle
WO2021070768A1 (en) Information processing device, information processing system, and information processing method
US20210055116A1 (en) Get-off point guidance method and vehicular electronic device for the guidance
WO2023243338A1 (en) Information processing device, information processing method, program, and information processing system
WO2023243339A1 (en) Information processing device, information processing method, program, and information processing system
WO2019117104A1 (en) Information processing device and information processing method
WO2019012017A1 (en) Apparatus, system, method and computer program
JP7302477B2 (en) Information processing device, information processing method and information processing program
JP7469358B2 (en) Traffic Safety Support System
EP4171021A1 (en) Control device, projection system, control method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23823638

Country of ref document: EP

Kind code of ref document: A1