WO2017056774A1 - Information processing device, information processing method and computer program - Google Patents

Information processing device, information processing method and computer program

Info

Publication number
WO2017056774A1
WO2017056774A1 (PCT/JP2016/074157)
Authority
WO
WIPO (PCT)
Prior art keywords
user
unit
moving body
sensor
terminal device
Prior art date
Application number
PCT/JP2016/074157
Other languages
French (fr)
Japanese (ja)
Inventor
呂尚 高岡
倉田 雅友
由幸 小林
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2017056774A1 publication Critical patent/WO2017056774A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, specially adapted for navigation in a road network
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/005: Traffic control systems for road vehicles including pedestrian guidance indicator

Definitions

  • the present disclosure relates to an information processing apparatus, an information processing method, and a computer program.
  • According to the present disclosure, an information processing apparatus is provided comprising: a correction unit that corrects the first sensing data provided by one or more sensors carried or worn by the user, using the second sensing data provided by one or more sensors provided in the moving body on which the user is riding; and a processing unit that executes a process for obtaining the position of the user in the moving body using the correction result of the correction unit.
  • According to the present disclosure, an information processing method is provided that includes: correcting the first sensing data provided by one or more sensors carried or worn by the user, using the second sensing data provided by one or more sensors provided on the moving body on which the user is riding; and executing a process for obtaining the position of the user in the moving body using the correction result.
  • According to the present disclosure, a computer program is provided that causes a computer to correct the first sensing data provided by one or more sensors carried or worn by the user, using the second sensing data provided by one or more sensors provided in the moving body on which the user is riding, and to execute a process for obtaining the position of the user in the moving body using the result of the correction.
  • As described above, according to the present disclosure, an information processing apparatus, an information processing method, and a computer program can be provided.
  • FIG. 2 is an explanatory diagram illustrating a configuration example of a terminal device 200 according to the first embodiment of the present disclosure.
  • FIG. 3 is an explanatory diagram illustrating a configuration example of a server device 100 according to the first embodiment of the present disclosure.
  • FIG. 4 is an explanatory diagram illustrating another configuration example of the server device 100 according to the first embodiment of the present disclosure.
  • FIG. 5 is an explanatory diagram showing examples of the moving body context map 120 and the moving body POI map 145.
  • FIG. 6 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure.
  • FIG. 1 is an explanatory diagram illustrating a configuration example of a positioning system according to the first embodiment of the present disclosure.
  • the positioning system includes a server device 100 and a terminal device 200.
  • the terminal device 200 is a device carried or worn by a user riding on a moving body inside which the user can move freely, such as a ship (particularly a large passenger ship) or a train.
  • the terminal device 200 can perform wireless communication with the server device 100.
  • the terminal device 200 transmits the sensor data acquired by its internal sensors to the server device 100. Further, the terminal device 200 receives from the server device 100 a positioning result obtained using that sensor data.
  • the sensors included in the terminal device 200 include, for example, an acceleration sensor, a gyro sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, and an atmospheric pressure sensor, which detect acceleration, angular velocity, azimuth, illuminance, temperature, atmospheric pressure, and so on.
  • the various sensors described above can detect various information as information related to the user, for example, information indicating the user's movement or orientation.
  • the sensor may include a sensor that detects user's biological information such as pulse, sweat, brain wave, touch, smell, and taste.
  • the terminal device 200 may include a processing circuit that acquires information indicating the user's emotions by analyzing the information detected by these sensors and/or the image or sound data captured by a camera or microphone, which will be described later.
  • the sensor may acquire an image or sound near the user or the device as data, using a camera, a microphone, the various sensors described above, or the like.
  • the sensor may include a position detection function for detecting an indoor or outdoor position.
  • the position detection function may include a GNSS (Global Navigation Satellite System) receiver and / or a communication device.
  • the GNSS can include, for example, GPS (Global Positioning System), GLONASS (Global Navigation Satellite System), BDS (BeiDou Navigation Satellite System), QZSS (Quasi-Zenith Satellite Systems), or Galileo.
  • the communication device detects the position using a technology such as Wi-Fi, MIMO (Multi-Input Multi-Output), cellular communication (for example, position detection using a mobile base station or a femtocell), or short-range wireless communication (for example, BLE (Bluetooth Low Energy) or Bluetooth (registered trademark)).
  • When a sensor as described above detects the user's position and situation (including biological information), the device including the sensor is, for example, carried or worn by the user. Alternatively, even when the device including the sensor is installed in the user's living environment, it may be possible to detect the user's position and situation (including biological information). For example, the user's pulse can be detected by analyzing an image including the user's face acquired by a camera fixed in a room or the like.
  • the server apparatus 100 is a server apparatus provided inside a moving body such as an automobile, a ship, or a railway, or outside the moving body.
  • the server device 100 is a device that measures the current position of the terminal device 200 located inside the mobile body and distributes the positioning result to the terminal device 200.
  • fingerprinting positioning is performed using sensor data output from multiple sensors in a moving body such as a ship or train.
  • Inside the moving body, acceleration or deceleration of the moving body itself, changes in its direction of travel, its orientation, its altitude, and so on cause the sensors of the terminal device 200 to output components different from the movement or action of the user carrying it. These components become noise, and unless the noise is removed, the correct position and behavior of the user carrying the terminal device 200 within the moving body cannot be grasped.
  • For example, when a person carries a gyro sensor inside a moving body and the gyro sensor detects rotation, it cannot be distinguished whether the moving body is turning around a curve or the person is turning while walking. Likewise, when an acceleration sensor detects a change in acceleration, it cannot be distinguished whether the moving body accelerated or decelerated or the person did. Also, when an atmospheric pressure sensor detects a change in pressure, it cannot be distinguished whether the altitude of the moving body changed or the person's altitude changed within the moving body.
  • the server device 100 corrects the sensor data transmitted from the terminal device 200 using the sensor data measured by the moving body. The server device 100 then uses the corrected sensor data to estimate the behavior of the user carrying the terminal device 200 and to measure that user's position.
  • By correcting the sensor data transmitted from the terminal device 200 using the sensor data measured by the moving body, the server device 100 can estimate the behavior and position of the user carrying the terminal device 200 more accurately. Furthermore, the server device 100 can estimate the behavior of the user even inside the moving body, and can generate a map (context map) in which the behavior estimation result is linked to the estimated current position.
  • FIG. 2 is an explanatory diagram illustrating a configuration example of the terminal device 200 according to the first embodiment of the present disclosure.
  • a configuration example of the terminal device 200 according to the first embodiment of the present disclosure will be described with reference to FIG.
  • the terminal device 200 includes a sensor unit 210, an input unit 220, a control unit 230, and an output unit 240.
  • the sensor unit 210 is a device that senses the state of the terminal device 200.
  • the sensor unit 210 outputs sensor data to the input unit 220.
  • the sensor unit 210 includes a geomagnetic sensor 211, an acceleration sensor 212, a gyro sensor 213, an atmospheric pressure sensor 214, a communication device 215, a microphone 216, and a camera 217.
  • Although the communication device 215 is originally a communication device, in the present embodiment it is also used as a sensor that detects the radio wave reception state.
  • the microphone 216 and the camera 217 are also used as sensors for detecting ambient sounds and environments.
  • the geomagnetic sensor 211 is a sensor that outputs the magnitude and direction of the magnetic field as sensor data.
  • the acceleration sensor 212 is a sensor that outputs acceleration information as sensor data.
  • the gyro sensor 213 is a sensor that outputs angular velocity information as sensor data.
  • the atmospheric pressure sensor 214 is a sensor that outputs atmospheric pressure information as sensor data.
  • the sensors constituting the sensor unit 210 are not limited to those shown in FIG.
  • the input unit 220 receives sensor data output from the sensor unit 210 and data transmitted from another device such as the server device 100.
  • the input unit 220 passes the sensor data output from the sensor unit 210 and data transmitted from another device such as the server device 100 to the control unit 230.
  • the control unit 230 executes various processes for controlling the operation of the terminal device 200.
  • the control unit 230 includes a processor or processing circuit such as a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
  • the control unit 230 may include a memory or a storage device that temporarily or permanently stores a program executed in the processor or the processing circuit and data read / written in the processing.
  • the control unit 230 executes processing for causing the output unit 240 to output the current location of the terminal device 200, or performs processing for outputting the sensor data output by the sensor unit 210 to the server device 100 through the output unit 240.
  • the output unit 240 outputs the information provided from the control unit 230 to a user (who may be the same as or different from the user of the terminal device 200), to an external device, or to another service.
  • the output unit 240 may include an output device, a control device, or software that provides information to an external service.
  • the output device outputs the information provided from the control unit 230 in a form that can be perceived by the senses of the user (who may be the same as or different from the user of the terminal device 200), such as sight, hearing, touch, smell, or taste.
  • For example, the output device may be a display that outputs information as an image. The display is not limited to a reflective or self-luminous display such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display; it also includes a combination of a light source and a light guide member that guides image display light to the user's eyes, as used in wearable devices.
  • the output device may include a speaker and output information by voice.
  • the output device may include a projector, a vibrator, and the like.
  • the control device controls the device based on the information provided from the control unit 230.
  • the controlled device may be included in a device that realizes the output unit 240 or may be an external device. More specifically, for example, the control device includes a processor or a processing circuit that generates a control command.
  • the output unit 240 may further include a communication device that transmits a control command to the external device.
  • the control device controls a printer that outputs information provided from the control unit 230 as a printed matter.
  • the control device may include a driver that controls writing of information provided from the control unit 230 to the storage device or the removable recording medium.
  • the control device may control a device other than the device that outputs or records the information provided from the control unit 230.
  • For example, the control device may control a lighting device to turn on the illumination, control a television to turn off an image, control an audio device to adjust the volume, or control the movement of a robot.
  • the software that provides information to the external service provides the information provided from the control unit 230 to the external service by using an API of the external service, for example.
  • the software may provide information to a server of an external service, or may provide information to application software of a service executed on the client device.
  • the provided information does not necessarily have to be immediately reflected in the external service, and may be provided as a candidate for a user to post or transmit to the external service, for example.
  • the software may provide text used as a candidate for a search keyword or URL (Uniform Resource Locator) input by the user in browser software executed on the client device.
  • the software may post text, images, videos, sounds, and the like on an external service such as social media on behalf of the user.
  • the configuration example of the terminal device 200 according to the first embodiment of the present disclosure has been described above with reference to FIG. Subsequently, a configuration example of the server device 100 according to the first embodiment of the present disclosure will be described.
  • FIG. 3 is an explanatory diagram illustrating a configuration example of the server apparatus 100 according to the first embodiment of the present disclosure.
  • FIG. 3 shows a configuration example of the server device 100 when generating a context map.
  • the server device 100 includes a processing unit 110.
  • When generating a context map, the processing unit 110 includes a sensor correction unit 111 and a context map generation unit 112.
  • the sensor correction unit 111 corrects the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture movement measurement unit 130.
  • the sensor data sent from the terminal device 200 can include acceleration data, angular velocity data, geomagnetic data, atmospheric pressure data, and the like.
  • the sensor correction unit 111 outputs the corrected sensor data to the context map generation unit 112.
  • the moving body posture movement measuring unit 130 measures the posture and movement state of the moving body itself, and includes, for example, various sensors such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, and an atmospheric pressure sensor.
  • the sensor data sent from the moving body posture movement measuring unit 130 indicates the posture and the amount of movement of the moving body.
  • The sensor data X output from the terminal device 200 can include both the posture/motion change amount Y of the terminal device 200 itself and the component Z indicated by the sensor data sent from the moving body posture movement measuring unit 130. Therefore, as an example of correcting the sensor data sent from the terminal device 200, the sensor correction unit 111 obtains the difference between the sensor data X sent from the terminal device 200 and the sensor data Z sent from the moving body posture movement measuring unit 130, thereby obtaining the change amount Y of the posture and movement of the terminal device 200 itself. When obtaining Y, the sensor correction unit 111 computes this difference separately for each type of sensor data.
  • For example, to obtain the change in angular velocity of the terminal device 200 itself, the sensor correction unit 111 obtains the difference between the sensor data of the gyro sensor 213 sent from the terminal device 200 and the gyro sensor data sent from the moving body posture movement measuring unit 130.
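As a sketch of this per-sensor difference correction, the following illustrative Python snippet subtracts the moving-body measurement Z from the terminal measurement X for each sensor type to recover the terminal's own motion component Y. The class and field names, and the restriction to two sensor types, are assumptions made for the sketch, not definitions from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SensorSample:
    accel: Vec3  # acceleration in m/s^2 (x, y, z)
    gyro: Vec3   # angular velocity in rad/s (x, y, z)

def _sub(a: Vec3, b: Vec3) -> Vec3:
    # component-wise difference between two 3-axis readings
    return tuple(x - z for x, z in zip(a, b))

def correct_terminal_data(x_terminal: SensorSample,
                          z_moving_body: SensorSample) -> SensorSample:
    """Recover the terminal's own posture/motion change Y = X - Z by taking
    the per-sensor difference between terminal data X and moving-body data Z."""
    return SensorSample(
        accel=_sub(x_terminal.accel, z_moving_body.accel),
        gyro=_sub(x_terminal.gyro, z_moving_body.gyro),
    )
```

If, say, the moving body is accelerating, that common component appears in both X and Z and cancels in the difference, leaving only the user's own movement.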
  • the moving body posture movement measuring unit 130 is configured by sensors that measure the posture change and movement change of the moving body on which the user carrying or wearing the terminal device 200 is riding. If the server device 100 is provided in the moving body, the moving body posture movement measuring unit 130 may be provided in the server device 100, or may be connected to the server device 100 by wire or wirelessly. If the server device 100 is not provided in the moving body, the moving body posture movement measuring unit 130 is provided in the moving body and transmits sensor data to the server device 100 wirelessly.
  • the context map generation unit 112 generates or updates the moving body context map 120 using the sensor data corrected by the sensor correction unit 111.
  • “generation or update” is sometimes simply described as “generation”.
  • the context map generator 112 uses the positioning data in the moving body relative positioning unit 140 when generating the moving body context map 120.
  • the in-mobile relative positioning unit 140 is configured by a device that measures a relative position of the terminal device 200 in the mobile body.
  • For example, the in-mobile relative positioning unit 140 transmits a predetermined radio wave, and measures the position of the terminal device 200 (its relative position with respect to the in-mobile relative positioning unit 140) from the radio wave intensity when the terminal device 200 receives the radio wave.
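The patent does not specify how radio wave intensity is converted into a relative position; one common approach is the log-distance path-loss model. The sketch below is an illustrative assumption, with hypothetical parameter values that would need calibration inside the actual moving body.

```python
def distance_from_rssi(rssi_dbm: float,
                       tx_power_dbm: float = -40.0,   # assumed RSSI at 1 m
                       path_loss_exponent: float = 2.0) -> float:
    """Estimate the distance in metres between the in-mobile relative
    positioning unit 140 and the terminal device 200 from received signal
    strength, using the log-distance path-loss model (an assumption; the
    patent only says intensity is used)."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

With these assumed parameters, an RSSI of -40 dBm maps to about 1 m and -60 dBm to about 10 m; combining such distances from several transmitters would give a relative position.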
  • With the configuration shown in FIG. 3, the server device 100 can correct the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture movement measuring unit 130.
  • By correcting the sensor data sent from the terminal device 200 in this way, the server device 100 can generate a highly accurate moving body context map 120.
  • FIG. 4 is an explanatory diagram illustrating a configuration example of the server apparatus 100 according to the first embodiment of the present disclosure.
  • FIG. 4 shows a configuration example of the server device 100 when the position of the terminal device 200 in the moving body is measured using the context map.
  • the server device 100 includes a sensor correction unit 111, a context estimation unit 113, a moving body positioning unit 114, a POI extraction unit 115, and a positioning result distribution unit 116.
  • the sensor correction unit 111 corrects the sensor data sent from the terminal device 200 using the sensor data sent from the mobile body posture motion measurement unit 130, as in the sensor correction unit 111 shown in FIG.
  • the sensor data sent from the terminal device 200 or the moving body posture motion measuring unit 130 can include acceleration data, angular velocity data, geomagnetic data, atmospheric pressure data, and the like.
  • the sensor correction unit 111 outputs the corrected sensor data to the context estimation unit 113.
  • the context estimation unit 113 uses the sensor data corrected by the sensor correction unit 111 to estimate the context of the user who carries or wears the terminal device 200.
  • When the context estimation unit 113 has estimated the user's context, it outputs the estimation result to the moving body positioning unit 114.
  • The context estimation unit 113 can recognize, by action recognition, an action label such as stay, walk, run, sit, meal, sleep, jump, stairs, elevator, escalator, bicycle, bus, train, car, ship, or airplane. Since the action recognition method is described in many documents, such as Japanese Patent Application Laid-Open No. 2012-8771, a detailed description is omitted here.
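The patent defers the actual recognition method to the prior literature. Purely as an illustration of the idea, a minimal recognizer might threshold the variation of the acceleration magnitude over a short window; the thresholds and the three labels below are invented for this sketch and are not taken from the patent.

```python
import statistics

def classify_action(accel_magnitudes: list) -> str:
    """Toy action-label recognizer: thresholds on the standard deviation of
    the acceleration magnitude (m/s^2) over a short window. Threshold values
    are illustrative assumptions only."""
    sd = statistics.pstdev(accel_magnitudes)
    if sd < 0.3:
        return "stay"       # almost no variation: user at rest
    if sd < 2.0:
        return "walk"       # moderate variation: walking-like motion
    return "run"            # large variation: running-like motion
```

Real recognizers use far richer features and trained models, but the input/output shape is the same: a window of sensor data in, an action label out.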
  • the moving body positioning unit 114 measures the position of the terminal device 200 in the moving body using the user context estimated by the context estimation unit 113 and the moving body context map 120.
  • When the moving body positioning unit 114 has measured the position of the terminal device 200 in the moving body, it outputs the positioning result to the POI extraction unit 115.
  • the POI extracting unit 115 extracts a POI (Point Of Interest) in the moving body as facility information in the moving body from the moving body POI map 145 using the positioning result of the terminal device 200 in the moving body positioning unit 114.
  • the POI extraction unit 115 outputs the positioning result of the terminal device 200 in the moving body and the extracted POI to the positioning result distribution unit 116.
  • the positioning result distribution unit 116 distributes the positioning result output from the POI extraction unit 115 in the moving body of the terminal device 200 and the extracted POI to the terminal device 200.
  • By outputting the positioning result in the moving body and the extracted POI distributed from the positioning result distribution unit 116, the terminal device 200 can present the current position in the moving body to its user.
  • FIG. 5 is an explanatory diagram showing examples of the in-mobile context map 120 and the mobile POI map 145.
  • the server apparatus 100 can generate the mobile in-context context map 120 as shown in FIG. 5, for example, by having the configuration as shown in FIG.
  • Each ellipse in the moving body context map 120 shown in FIG. 5 indicates a place where the terminal device 200 is likely to be located.
  • Each ellipse is associated with the action of the user who is carrying or wearing the terminal device 200.
  • the user's action to be linked is not limited to one. For example, information such as the probability of sitting, the probability of standing, and the probability of walking may be linked to each ellipse.
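A minimal in-memory sketch of such a map entry could look like the following; the patent does not define a data format, so the field names, ellipse representation, and probability values here are assumptions for illustration.

```python
# Each ellipse in the moving-body context map 120 might be recorded as a
# centre, an extent, and a probability for each linked user action.
context_map_120 = [
    {"center": (2.0, 5.0), "radii": (1.0, 0.5),
     "action_probs": {"sit": 0.7, "stand": 0.2, "walk": 0.1}},
    {"center": (6.0, 5.0), "radii": (1.5, 0.8),
     "action_probs": {"walk": 0.8, "stand": 0.2}},
]

def most_likely_action(entry: dict) -> str:
    """Return the action with the highest linked probability for a location."""
    return max(entry["action_probs"], key=entry["action_probs"].get)
```

Because each ellipse carries a full distribution rather than a single label, a location can be "mostly sitting, sometimes standing", as the text describes.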
  • By referring to the moving body context map 120, the server device 100 can grasp the behavior of the user carrying or wearing the terminal device 200 at each location. However, the server device 100 cannot grasp what each location actually is from the moving body context map 120 alone.
  • the server device 100 can grasp what each location is by comparing with the moving body POI map 145 shown in FIG. 5.
  • For example, if the moving body POI map 145 represents the seating chart of a Shinkansen train, the POI extraction unit 115 can extract, from the positioning result of the moving body positioning unit 114, the information that the terminal device 200 is at seat 1E.
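As an illustrative sketch of this lookup, an in-carriage position could be mapped to a Shinkansen-style seat label as below; the seat pitch, seat width, and coordinate convention are assumptions made for the sketch, not values from the patent.

```python
def seat_from_position(x_m: float, y_m: float) -> str:
    """Map a position inside the carriage (metres from an assumed front-left
    origin) to a seat label such as '1E'. Rows run along the carriage and
    columns A-E run across it; the 1.0 m pitch and 0.6 m seat width are
    illustrative assumptions."""
    row = int(x_m // 1.0) + 1
    columns = "ABCDE"
    col = columns[min(int(y_m // 0.6), len(columns) - 1)]
    return f"{row}{col}"
```

With these assumptions, a positioning result of (0.5, 2.5) falls in seat 1E, matching the seat mentioned in the example.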
  • <2. Second Embodiment> [2.1. Server device configuration example 1] Next, a second embodiment of the present disclosure will be described.
  • In the first embodiment, the server device 100 corrects the sensor data transmitted from the terminal device 200 using the sensor data measured by the moving body. In the second embodiment, the server device 100 estimates, from the sensor data transmitted from the terminal device 200, what kind of moving state the user carrying or wearing the terminal device 200 is in. According to the estimated moving state, the server device 100 can then select which of the sensor data transmitted from the terminal device 200 to use for generating context maps and for positioning within the moving body.
  • FIG. 6 is an explanatory diagram illustrating a configuration example of the server apparatus 100 according to the second embodiment of the present disclosure.
  • FIG. 6 shows a configuration example of the server apparatus 100 when generating a context map.
  • When generating a context map, the server device 100 includes a movement mode estimation unit 117, a use sensor selection unit 118, and a context map generation unit 112.
  • the movement mode estimation unit 117 performs behavior recognition using the sensor data transmitted from the terminal device 200, and estimates the movement state of the user who carries or wears the terminal device 200.
  • the sensor data sent from the terminal device 200 can include acceleration data, angular velocity data, geomagnetic data, atmospheric pressure data, and the like.
  • the movement state of the user is also referred to as a movement mode.
  • When the movement mode estimation unit 117 has estimated the movement mode of the user, it outputs the estimation result to the use sensor selection unit 118.
  • The movement mode estimation unit 117 can recognize, for example, stay, walk, run, sit, meal, sleep, jump, stairs, elevator, escalator, bicycle, bus, train, car, ship, or airplane by action recognition using the sensor data.
  • the action recognition method using the sensor data is described in many documents such as Japanese Patent Application Laid-Open No. 2012-8771, and detailed description thereof is omitted.
  • Based on the movement mode estimation result for the user carrying or wearing the terminal device 200 produced by the movement mode estimation unit 117, the use sensor selection unit 118 selects, from the sensor data sent from the terminal device 200, the sensor data to be used for generating the moving body context map 120.
  • Depending on the estimated movement mode, the use sensor selection unit 118 may determine that certain sensor data is not to be used. For example, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is in a train, where magnetic disturbance occurs, the use sensor selection unit 118 may decide not to use the sensor data output by the geomagnetic sensor 211. Also, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is in a ship sailing on a sea with high waves, the use sensor selection unit 118 may decide not to use the sensor data output from the acceleration sensor 212.
  • Alternatively, instead of deciding use or non-use of the sensor data, the use sensor selection unit 118 may weight the sensor data when it is used. For example, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is in a ship sailing on a sea with high waves, the use sensor selection unit 118 may decide to lighten the weight of the sensor data output from the acceleration sensor 212.
  • The use sensor selection unit 118 may use whether or not the estimated movement mode causes magnetic disturbance as a criterion for deciding whether to perform weighting. For example, when the estimated movement mode is one that causes magnetic disturbance, such as being in a train, the use sensor selection unit 118 may decide use or non-use of the sensor data output by the geomagnetic sensor 211 without weighting it. Conversely, even on the same railway, if the vehicle is not affected by a motor, such as a trolley train, the use sensor selection unit 118 may weight the sensor data output from the geomagnetic sensor 211.
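The selection and weighting rules above could be sketched as a lookup table keyed by movement mode. The mode names, sensor names, and weight values below are assumptions chosen to mirror the examples in the text (zero weight for the geomagnetic sensor in a motor-driven train, a lightened accelerometer weight on a ship in high waves):

```python
# Weight 1.0 = use as-is, 0.0 = do not use; intermediate values lighten the
# sensor's influence, as described for the use sensor selection unit 118.
SENSOR_WEIGHTS = {
    "train":           {"geomagnetic": 0.0, "acceleration": 1.0, "gyro": 1.0},
    "trolley_train":   {"geomagnetic": 1.0, "acceleration": 1.0, "gyro": 1.0},
    "ship_high_waves": {"geomagnetic": 1.0, "acceleration": 0.3, "gyro": 0.5},
}

def select_sensor_data(mode: str, sensor_data: dict) -> dict:
    """Return (value, weight) pairs for the estimated movement mode;
    sensors with weight 0.0 are dropped entirely. Unknown sensors or
    modes default to full weight."""
    weights = SENSOR_WEIGHTS.get(mode, {})
    return {name: (value, weights.get(name, 1.0))
            for name, value in sensor_data.items()
            if weights.get(name, 1.0) > 0.0}
```

The retained weights can then be carried through to positioning, where they scale each sensor's contribution when matching against the moving body context map 120.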
  • the context map generation unit 112 generates or updates the moving body context map 120 using the sensor data selected or weighted by the use sensor selection unit 118. As in the first embodiment, the context map generation unit 112 uses the positioning data in the moving body relative positioning unit 140 when generating or updating the moving body context map 120.
  • the in-mobile relative positioning unit 140 is configured by a device that measures a relative position of the terminal device 200 in the mobile body.
  • For example, the in-mobile relative positioning unit 140 transmits a predetermined radio wave, and measures the position of the terminal device 200 (its relative position with respect to the in-mobile relative positioning unit 140) from the radio wave intensity when the terminal device 200 receives the radio wave.
  • the weight can be used for determination at the time of positioning with reference to a moving body context map 120 described later.
  • the server apparatus 100 can select the sensor data sent from the terminal apparatus 200 based on the movement mode by having the configuration as shown in FIG.
  • By selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can pick out highly accurate sensor data and thus generate a highly accurate moving body context map 120.
  • FIG. 7 is an explanatory diagram illustrating a configuration example of the server apparatus 100 according to the second embodiment of the present disclosure.
  • FIG. 7 shows a configuration example of the server device 100 when the position of the terminal device 200 in the moving body is measured using the context map.
  • the server device 100 includes a movement mode estimation unit 117, a use sensor selection unit 118, and a moving body positioning unit 114.
  • the functions of the movement mode estimation unit 117 and the used sensor selection unit 118 are the same as those described with reference to FIG.
  • the use sensor selection unit 118 outputs the sensor data it has selected or weighted to the in-mobile positioning unit 114.
  • the moving body positioning unit 114 measures the position of the terminal device 200 in the moving body using the sensor data selected or weighted by the use sensor selection unit 118 and the moving body context map 120.
  • the mobile positioning unit 114 measures the position of the terminal device 200 in the mobile body and outputs the positioning result to the terminal device 200.
  • the in-mobile positioning unit 114 may extract POIs in the moving body as facility information. As described in the first embodiment, the in-mobile positioning unit 114 may extract the POIs using its positioning result for the terminal device 200.
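Positioning against a context map can be viewed as fingerprint matching: compare the current (selected or weighted) sensor observation with stored reference observations per location and return the best match. The map entries, sensor names, and weights below are made up for illustration:

```python
def locate(observation, context_map, weights):
    """Return the location whose stored fingerprint minimizes the
    weighted squared error against the current observation."""
    def error(fingerprint):
        return sum(weights[k] * (observation[k] - fingerprint[k]) ** 2
                   for k in observation)
    return min(context_map, key=lambda pos: error(context_map[pos]))

# Hypothetical context map: per-location reference sensor readings.
context_map = {
    "car_3_seat_12A": {"geomagnetic": 48.0, "vibration": 0.8},
    "dining_car":     {"geomagnetic": 52.0, "vibration": 0.3},
}
weights = {"geomagnetic": 1.0, "vibration": 1.0}
print(locate({"geomagnetic": 51.5, "vibration": 0.35}, context_map, weights))
# -> dining_car
```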
  • the server apparatus 100 can select the sensor data sent from the terminal apparatus 200 based on the movement mode by having the configuration as shown in FIG.
  • the server device 100 can measure the position of the terminal device 200 in the moving body based on the moving body context map 120 by selecting the sensor data sent from the terminal device 200 based on the moving mode.
  • Server device configuration example 2: Configuration example 1 above showed a server device 100 that estimates the movement mode of the user who carries or wears the terminal device 200 from the sensor data, and selects or weights the sensor data to be used based on the result of the movement mode estimation. Configuration example 2 shows a server device 100 that not only selects or weights the sensor data to be used, but also selects a context map based on the estimation result of the movement mode.
  • FIG. 8 is an explanatory diagram illustrating a configuration example of the server apparatus 100 according to the second embodiment of the present disclosure.
  • FIG. 8 shows a configuration example of the server device 100 when generating a context map.
  • the server device 100 includes a movement mode estimation unit 117, a use sensor selection unit 118, a context map selection unit 119, and a context map generation unit 112.
  • the functions of the movement mode estimation unit 117 and the used sensor selection unit 118 are the same as those described with reference to FIG.
  • the context map selection unit 119 selects a context map to be generated based on the estimation result of the movement mode of the user who carries or wears the terminal device 200 by the movement mode estimation unit 117.
  • when the context map selection unit 119 selects a context map to be generated, it outputs the selection result to the context map generation unit 112.
  • FIG. 8 shows mobile context maps 120a and 120b and a global context map 121 as context maps to be selected.
  • the moving body context map 120a is a context map that is selected when a user who carries or wears the terminal device 200 is in a train, and the moving body context map 120b is a context map that is selected when, for example, a user who carries or wears the terminal device 200 is in a ship.
  • the global context map 121 is a context map that is selected when a user who carries or wears the terminal device 200 is not in a moving body.
  • the context map selection unit 119 may use position information measured by the absolute positioning unit 150 or information provided from the moving body identification unit 160.
  • the absolute positioning unit 150 measures the absolute position of the moving body and can be configured by a GNSS receiver or the like.
  • the moving body identification unit 160 provides information for identifying which part of the moving body (a floor, a car, or the like) the user is in, and can provide this information by, for example, sound waves or electromagnetic waves.
  • using these, the context map selection unit 119 can determine whether the user is in a moving body (such as a train or a passenger ship) inside which the user can move around.
  • when the context map selection unit 119 determines from the estimation result of the user's movement mode that the user is in a moving body, it can select non-use of the global context map 121 and use of the moving body context maps 120a and 120b. Conversely, when the context map selection unit 119 determines from the estimation result of the user's movement mode that the user is not in a moving body, it can select use of the global context map 121 and non-use of the moving body context maps 120a and 120b.
  • when the context map selection unit 119 can identify from the position information measured by the absolute positioning unit 150 that the user is on a railway track, and determines from the estimation result of the user's movement mode that the user is on a train, it can select the moving body context map 120a.
  • similarly, when the context map selection unit 119 can identify from the position information measured by the absolute positioning unit 150 that the user is at sea, and can identify from the information provided from the moving body identification unit 160 that the user is inside a passenger ship, it can select the moving body context map 120b.
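The selection rules in the preceding paragraphs can be summarized as a small decision function. The inputs and map identifiers are illustrative assumptions, not an API defined by the patent:

```python
def select_context_map(in_moving_body, movement_mode=None,
                       at_sea=False, inside_passenger_ship=False):
    """Mirror the context map selection unit 119's decision logic:
    global map when not in a moving body, train map when the movement
    mode indicates a train, ship map when at sea / inside a passenger ship."""
    if not in_moving_body:
        return "global_context_map_121"
    if movement_mode == "train":
        return "moving_body_context_map_120a"
    if at_sea or inside_passenger_ship:
        return "moving_body_context_map_120b"
    return None  # unknown moving body: no map selected

print(select_context_map(False))                        # global_context_map_121
print(select_context_map(True, movement_mode="train"))  # moving_body_context_map_120a
print(select_context_map(True, at_sea=True))            # moving_body_context_map_120b
```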
  • the context map generation unit 112 generates or updates the context map selected by the context map selection unit 119, using the sensor data selected or weighted by the use sensor selection unit 118.
  • the server apparatus 100 can select the sensor data sent from the terminal apparatus 200 based on the movement mode by having the configuration shown in FIG. 8. Further, with the configuration shown in FIG. 8, the server apparatus 100 can select the context map to be generated based on the movement mode, the positioning information, and the information identifying the location inside the moving body.
  • the server device 100 selects the sensor data sent from the terminal device 200 based on the movement mode, thereby picking out highly accurate sensor data, so that a highly accurate context map can be generated. Further, since the server device 100 selects the sensor data based on the movement mode, it can choose an appropriate context map when a plurality of context maps are prepared according to the type of the moving body.
  • FIG. 9 is an explanatory diagram illustrating a configuration example of the server apparatus 100 according to the second embodiment of the present disclosure.
  • FIG. 9 shows a configuration example of the server device 100 when the position of the terminal device 200 in the moving body is measured using the context map.
  • the server device 100 includes a movement mode estimation unit 117, a use sensor selection unit 118, a context map selection unit 119, and a moving body positioning unit 114.
  • the functions of the movement mode estimation unit 117, the used sensor selection unit 118, and the context map selection unit 119 are the same as those described with reference to FIG.
  • the use sensor selection unit 118 outputs the sensor data it has selected or weighted to the in-mobile positioning unit 114.
  • when the context map selection unit 119 selects a context map to be used at the time of positioning, it outputs the selection result to the in-mobile positioning unit 114.
  • the in-moving-body positioning unit 114 measures the position of the terminal device 200 in the moving body using the sensor data selected or weighted by the use sensor selection unit 118 and the context map selected by the context map selection unit 119.
  • the mobile positioning unit 114 measures the position of the terminal device 200 in the mobile body and outputs the positioning result to the terminal device 200.
  • the server apparatus 100 can select the sensor data transmitted from the terminal apparatus 200 based on the movement mode by having the configuration shown in FIG. 9. Further, with the configuration shown in FIG. 9, the server apparatus 100 can select the context map used for positioning based on the movement mode, the positioning information, and the information identifying the location inside the moving body.
  • the server device 100 can measure the position of the terminal device 200 in the moving body based on the moving body context map 120 by selecting the sensor data sent from the terminal device 200 based on the movement mode. Further, since the server device 100 selects the sensor data based on the movement mode, it can choose an appropriate context map when a plurality of context maps are prepared according to the type of the moving body.
  • for example, while the user who carries or wears the terminal device 200 is not in a moving body, the server device 100 can select the global context map 121. Thereafter, if the absolute position measurement indicates that the user is at sea, the server device 100 can select the moving body context map 120b.
  • a server apparatus 100 combining the first embodiment and the second embodiment described above can also be realized. That is, it is also possible to realize a server device 100 that corrects the sensor data transmitted from the terminal device 200 using the sensor data measured by the moving body, and then estimates, from the corrected sensor data, what kind of moving state the user who carries or wears the terminal device 200 is in.
  • the server apparatus 100 can measure the position of the terminal apparatus 200 in the moving body and provide the terminal apparatus 200 with information corresponding to that position. For example, when the user who carries the terminal device 200 is on a bullet train, the server device 100 can detect the car number and seat position on the bullet train by measuring the position of the terminal device 200 in the bullet train.
  • similarly, by measuring the position of the terminal device 200 in a passenger ship, the server device 100 can detect the user's detailed position in the passenger ship (for example, a restaurant, a pool, a casino, and so on).
  • the server device 100 may change the information provided to the terminal device 200 according to the position of the user carrying the terminal device 200. For example, when the user carrying the terminal device 200 is on a sightseeing trolley train, the server device 100 may change the information to be provided, for example, information on the scenery seen from the car window, depending on whether the user is sitting on the right side or the left side of the train.
  • the position information in the moving body detected by the server apparatus 100 may be shared with another terminal apparatus. In that case, the server apparatus 100 can provide the in-moving-body position information of the user carrying the terminal apparatus 200 to the other terminal apparatus.
  • FIG. 10 is a block diagram illustrating a hardware configuration example of the information processing apparatus according to the embodiment of the present disclosure.
  • the information processing apparatus 900 includes a CPU (Central Processing unit) 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905.
  • the information processing apparatus 900 may include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
  • the information processing apparatus 900 may include an imaging device 933 and a sensor 935 as necessary.
  • the information processing apparatus 900 may include a processing circuit such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array) instead of or in addition to the CPU 901.
  • the CPU 901 functions as an arithmetic processing device and a control device, and controls all or a part of the operation in the information processing device 900 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or the removable recording medium 927.
  • the ROM 903 stores programs and calculation parameters used by the CPU 901.
  • the RAM 905 primarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
  • the CPU 901, the ROM 903, and the RAM 905 are connected to each other by a host bus 907 configured by an internal bus such as a CPU bus. Further, the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
  • the input device 915 is a device operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, and a lever.
  • the input device 915 may be, for example, a remote control device that uses infrared rays or other radio waves, or may be an external connection device 929 such as a mobile phone that supports the operation of the information processing device 900.
  • the input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs the input signal to the CPU 901. The user operates the input device 915 to input various data and instruct processing operations to the information processing device 900.
  • the output device 917 is configured by a device capable of notifying the acquired information to the user using a sense such as vision, hearing, or touch.
  • the output device 917 can be, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display, an audio output device such as a speaker or headphones, or a vibrator.
  • the output device 917 outputs the result obtained by the processing of the information processing device 900 as video such as text or an image, as audio such as voice or other sound, or as vibration.
  • the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 900.
  • the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
  • the storage device 919 stores, for example, programs executed by the CPU 901 and various data, and various data acquired from the outside.
  • the drive 921 is a reader / writer for a removable recording medium 927 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and is built in or externally attached to the information processing apparatus 900.
  • the drive 921 reads information recorded on the attached removable recording medium 927 and outputs the information to the RAM 905.
  • the drive 921 writes a record in the attached removable recording medium 927.
  • the connection port 923 is a port for connecting a device to the information processing apparatus 900.
  • the connection port 923 can be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, or the like.
  • the connection port 923 may be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like.
  • the communication device 925 is a communication interface configured with, for example, a communication device for connecting to the communication network 931.
  • the communication device 925 can be, for example, a communication card for LAN (Local Area Network), Bluetooth (registered trademark), Wi-Fi, or WUSB (Wireless USB).
  • the communication device 925 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various communication.
  • the communication device 925 transmits and receives signals and the like using a predetermined protocol such as TCP / IP with the Internet and other communication devices, for example.
  • the communication network 931 connected to the communication device 925 is a network connected by wire or wireless, and may include, for example, the Internet, a home LAN, infrared communication, radio wave communication, satellite communication, or the like.
  • the imaging device 933 is an apparatus that images real space and generates a captured image, using various members such as an imaging element, for example a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor, and a lens for controlling the formation of a subject image on the imaging element.
  • the imaging device 933 may capture a still image or may capture a moving image.
  • the sensor 935 is various sensors such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, an atmospheric pressure sensor, or a sound sensor (microphone).
  • the sensor 935 acquires information about the state of the information processing apparatus 900 itself, such as the attitude of the information processing apparatus 900, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around it.
  • the sensor 935 may include a GPS receiver that receives a GPS (Global Positioning System) signal and measures the latitude, longitude, and altitude of the device.
  • Each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed according to the technical level at the time of implementation.
  • as described above, a server device 100 is provided that can correct the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture motion measuring unit 130.
  • by correcting the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture motion measuring unit 130, the server device 100 can generate a highly accurate moving body context map 120, or perform highly accurate positioning in the moving body by referring to the moving body context map 120.
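The correction itself, per the claimed difference calculation, amounts to subtracting the moving body's own motion (second sensing data) from the user's sensor reading (first sensing data). A minimal sketch, assuming both readings are already time-synchronized and expressed in a common coordinate frame (a real system would also have to handle synchronization and frame alignment):

```python
def correct(user_reading, body_reading):
    """Subtract the moving body's sensor reading (second sensing data)
    from the user's sensor reading (first sensing data), per axis,
    leaving the user's own motion relative to the moving body."""
    return tuple(u - b for u, b in zip(user_reading, body_reading))

# A user accelerometer sees the ship's sway plus the user's own walking;
# subtracting the ship's IMU reading isolates the walking component.
residual = correct((0.30, 0.10, 9.90), (0.25, 0.00, 9.80))
```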
  • the server device 100 that can select sensor data transmitted from the terminal device 200 based on the movement mode is provided.
  • the server device 100 selects the sensor data sent from the terminal device 200 based on the movement mode, thereby picking out highly accurate sensor data, so that it is possible to generate a highly accurate moving body context map 120 and to perform highly accurate positioning in the moving body by referring to the moving body context map 120.
  • Embodiments of the present disclosure include, for example, an information processing apparatus, a system, an information processing method executed by the information processing apparatus or system, a computer program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the computer program is recorded.
  • the software that implements the user interface and application shown in the above embodiment may be realized as a web application used via a network such as the Internet.
  • the web application may be realized by, for example, a markup language such as HTML (HyperText Markup Language), SGML (Standard Generalized Markup Language), or XML (Extensible Markup Language).
  • (1) An information processing apparatus comprising: a correction unit that corrects first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided on a moving body on which the user is riding; and a processing unit that executes a process for obtaining the position of the user in the moving body using a correction result of the correction unit.
  • (2) The information processing apparatus according to (1), wherein the correction unit calculates a difference between the first sensing data and the second sensing data.
  • (3) The information processing apparatus according to (1) or (2), wherein the processing unit executes a positioning process, in the moving body, for the user who provides the first sensing data, using map information generated using a correction result of the correction unit.
  • (4) The information processing apparatus according to any one of (1) to (3), wherein the first sensing data includes any of acceleration data, angular velocity data, and geomagnetic data.
  • (5) The information processing apparatus according to any one of (1) to (4), wherein the moving body is any one of a train, an automobile, and a ship.

Abstract

[Problem] To provide an information processing device which, when a position measurement technology that employs sensors is applied to a large mobile body, can improve accuracy in detecting a position in the mobile body. [Solution] An information processing device provided with: a correction unit which uses second sensing data, provided from one or more sensors disposed in the mobile body in which a user is riding, to correct first sensing data provided from one or more sensors carried or worn by the user; and a processing unit which uses the correction result of the correction unit to perform processing for calculating the position of the user in the mobile body.

Description

Information processing apparatus, information processing method, and computer program
 The present disclosure relates to an information processing apparatus, an information processing method, and a computer program.
 GNSS (Global Navigation Satellite System), typified by GPS (Global Positioning System), is widely used as a method for detecting a user's position. However, with GNSS, sufficient position detection accuracy may not be obtained indoors or in densely built-up areas where radio waves from satellites are difficult to receive. In such cases, there is also a method of estimating the user's position based on reachable access points, such as Wi-Fi, and their radio field intensity; however, the access points whose positions are known are limited, and the radio field intensity is subject to various influences, so improving accuracy has been difficult. Patent Document 1 describes a technology related to autonomous positioning that is used as a solution in these cases.
JP 2013-210300 A
 However, when positioning technology using sensors is applied to a large moving body such as a ship (particularly a large passenger ship) or a railway, the motion of the moving body itself is superimposed as noise on the sensor data output by the sensors held by the user.
 Therefore, the present disclosure proposes a new and improved information processing apparatus, information processing method, and computer program capable of improving the accuracy of detecting a position within a large moving body when a positioning technique using sensors is applied to such a moving body.
 According to the present disclosure, there is provided an information processing apparatus including: a correction unit that corrects first sensing data, provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided on a moving body on which the user is riding; and a processing unit that executes a process for obtaining the position of the user in the moving body using a correction result of the correction unit.
 Further, according to the present disclosure, there is provided an information processing method including: correcting first sensing data, provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided on a moving body on which the user is riding; and executing a process for obtaining the position of the user in the moving body using a result of the correction.
 Further, according to the present disclosure, there is provided a computer program causing a computer to execute: correcting first sensing data, provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided on a moving body on which the user is riding; and executing a process for obtaining the position of the user in the moving body using a result of the correction.
 As described above, according to the present disclosure, it is possible to provide a new and improved information processing apparatus, information processing method, and computer program capable of improving the accuracy of position detection within a large moving body when a positioning technique using sensors is applied to such a moving body.
 Note that the above effects are not necessarily limited; together with or in place of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be exhibited.
FIG. 1 is an explanatory diagram illustrating a configuration example of a positioning system according to the first embodiment of the present disclosure.
FIG. 2 is an explanatory diagram illustrating a configuration example of a terminal device 200 according to the first embodiment of the present disclosure.
FIG. 3 is an explanatory diagram illustrating a configuration example of a server device 100 according to the first embodiment of the present disclosure.
FIG. 4 is an explanatory diagram illustrating a configuration example of the server device 100 according to the first embodiment of the present disclosure.
FIG. 5 is an explanatory diagram illustrating an example of a moving body context map 120 and a moving body POI map 145.
FIG. 6 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure.
FIG. 7 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure.
FIG. 8 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure.
FIG. 9 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure.
FIG. 10 is an explanatory diagram illustrating a hardware configuration example.
 Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description thereof is omitted.
 The description will be made in the following order.
 1. First embodiment
  1.1. System configuration example
  1.2. Configuration example of terminal device
  1.3. Configuration example of server device
   1.3.1. When generating a context map
   1.3.2. When positioning
 2. Second embodiment
  2.1. Server device configuration example 1
   2.1.1. When generating a context map
   2.1.2. When positioning
  2.2. Server device configuration example 2
   2.2.1. When generating a context map
   2.2.2. When positioning
 3. Hardware configuration example
 4. Summary
 <1. First Embodiment>
 [1.1. System configuration example]
 First, a configuration example of a positioning system according to the first embodiment of the present disclosure will be described with reference to FIG. 1. FIG. 1 is an explanatory diagram illustrating a configuration example of the positioning system according to the first embodiment of the present disclosure.
 As illustrated in FIG. 1, the positioning system according to the first embodiment of the present disclosure includes a server device 100 and a terminal device 200.
 The terminal device 200 is a device carried or worn by a user riding on a moving body within which the user can move freely, such as a ship (particularly a large passenger ship) or a railway. The terminal device 200 can perform wireless communication with the server device 100. The terminal device 200 transmits sensor data acquired by its internal sensors to the server device 100, and receives from the server device 100 positioning results obtained using that sensor data.
 A configuration example of the terminal device 200 will be described in detail later, but the sensors included in the terminal device 200 include an acceleration sensor, a gyro sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, an atmospheric pressure sensor, and the like, and detect the acceleration, angular velocity, azimuth, illuminance, temperature, atmospheric pressure, and the like applied to the terminal device 200. When the terminal device 200 including these sensors is carried or worn by the user, the various sensors described above can detect various information as information about the user, for example, information indicating the user's movement or orientation. The sensors may also include sensors that detect the user's biological information, such as pulse, perspiration, brain waves, touch, smell, and taste. The terminal device 200 may include a processing circuit that acquires information indicating the user's emotion by analyzing the information detected by these sensors and/or image or sound data detected by a camera or microphone described later.
 Furthermore, the sensors may acquire images or sounds in the vicinity of the user or the device as data, using a camera, a microphone, the various sensors described above, and so on. The sensors may also include a position detection function for detecting an indoor or outdoor position. Specifically, the position detection function may include a GNSS (Global Navigation Satellite System) receiver and/or a communication device. GNSS may include, for example, GPS (Global Positioning System), GLONASS (Global Navigation Satellite System), BDS (BeiDou Navigation Satellite System), QZSS (Quasi-Zenith Satellite System), or Galileo. The following description uses GPS as an example, but any other GNSS may be used in the same way. The communication device detects position using technologies such as Wi-Fi, MIMO (Multi-Input Multi-Output), cellular communication (for example, position detection using mobile base stations, or femtocells), or short-range wireless communication (for example, BLE (Bluetooth Low Energy) or Bluetooth (registered trademark)).
 When the sensors described above detect the user's position or situation (including biological information), the device containing the sensors is, for example, carried or worn by the user. Alternatively, even when the device containing the sensors is installed in the user's living environment, it may still be possible to detect the user's position or situation (including biological information). For example, the user's pulse can be detected by analyzing an image of the user's face captured by a camera fixed in a room.
 The server device 100 is a server device provided inside a moving body such as an automobile, a ship, or a train, or outside the moving body. The server device 100 measures the current position of the terminal device 200 located inside the moving body and distributes the positioning result to the terminal device 200.
 Consider the case of performing fingerprinting-based positioning inside a moving body such as a ship or a train, using the sensor data output by multiple sensors. When fingerprinting positioning based on geomagnetism, acceleration, gyro, atmospheric pressure, and the like is used inside a moving body, the acceleration and deceleration of the moving body itself, changes in its direction of travel, heading, altitude, and so on cause the terminal device 200 to output components unrelated to the movement or behavior of the user carrying it. Such components constitute noise; unless that noise is removed, the position and behavior, within the moving body, of the user carrying the terminal device 200 cannot be determined accurately.
 For example, when a person carries a gyro sensor inside a moving body and the sensor detects rotation, it is impossible to distinguish whether the moving body is rounding a curve or the person is turning while walking. Likewise, when a person carries an acceleration sensor inside a moving body and the sensor detects a change in acceleration, it is impossible to distinguish whether the moving body accelerated or decelerated or the person did. Similarly, when a person carries an atmospheric pressure sensor inside a moving body and the sensor detects a change in pressure, it is impossible to distinguish whether the altitude of the moving body changed or the person's altitude within the moving body changed. And when a person carries a geomagnetic sensor inside a moving body and the sensor detects a change in the direction of the geomagnetic field, it is impossible to distinguish whether the heading of the moving body changed or the heading of the person inside it changed.
 The server device 100 therefore corrects the sensor data transmitted from the terminal device 200 using sensor data measured on the moving body. Using the corrected sensor data, the server device 100 then estimates the behavior of the user carrying the terminal device 200 and measures that user's position.
 By correcting the sensor data transmitted from the terminal device 200 with the sensor data measured on the moving body, the server device 100 can estimate the behavior and position of the user carrying the terminal device 200 more accurately. This correction also allows the server device 100 to generate, even inside a moving body, a map (context map) that associates the estimated behavior of the user carrying the terminal device 200 with the estimated current position.
 The configuration example of the positioning system according to the first embodiment of the present disclosure has been described above with reference to FIG. 1. Next, a configuration example of the terminal device 200 will be described.
 [1.2. Configuration example of terminal device]
 FIG. 2 is an explanatory diagram illustrating a configuration example of the terminal device 200 according to the first embodiment of the present disclosure. A configuration example of the terminal device 200 according to the first embodiment of the present disclosure is described below with reference to FIG. 2.
 As shown in FIG. 2, the terminal device 200 according to the first embodiment of the present disclosure includes a sensor unit 210, an input unit 220, a control unit 230, and an output unit 240.
 The sensor unit 210 consists of devices that sense the state of the terminal device 200, and outputs the resulting sensor data to the input unit 220. In this embodiment, as shown in FIG. 2, the sensor unit 210 includes a geomagnetic sensor 211, an acceleration sensor 212, a gyro sensor 213, an atmospheric pressure sensor 214, a communication device 215, a microphone 216, and a camera 217. Although the communication device 215 is by nature a communication device, in this embodiment it is used as a sensor that detects the radio wave reception state. The microphone 216 and the camera 217 are likewise used in this embodiment as sensors that detect the surrounding sounds and environment.
 The geomagnetic sensor 211 outputs the magnitude and direction of the magnetic field as sensor data. The acceleration sensor 212 outputs acceleration information as sensor data. The gyro sensor 213 outputs angular velocity information as sensor data. The atmospheric pressure sensor 214 outputs atmospheric pressure information as sensor data.
 Of course, the sensors constituting the sensor unit 210 are not limited to those shown in FIG. 2.
 The input unit 220 receives the sensor data output by the sensor unit 210 and data transmitted from other devices such as the server device 100, and passes them to the control unit 230.
 The control unit 230 executes various processes that control the operation of the terminal device 200. The control unit 230 includes a processor or processing circuit such as a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array). The control unit 230 may also include memory or a storage device that temporarily or permanently stores the programs executed by the processor or processing circuit and the data read and written during processing. For example, the control unit 230 causes the output unit 240 to output the current location of the terminal device 200, or outputs the sensor data produced by the sensor unit 210 to the server device 100 through the output unit 240.
 The output unit 240 outputs the information provided by the control unit 230 to a user (who may be the same user as the user of the terminal device 200, or a different user), to an external device, or to another service. For example, the output unit 240 may include an output device, a control device, or software that provides information to an external service.
 The output device outputs the information provided by the control unit 230 in a form perceivable by the senses of the user (who may be the same user as the user of the terminal device 200, or a different user), such as sight, hearing, touch, smell, or taste. For example, the output device may be a display that outputs the information as an image. The display is not limited to reflective or self-luminous displays such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display; it also includes the combination of a light source and a light-guiding member that guides image display light to the user's eye, as used in wearable devices. The output device may also include a speaker and output the information as audio. In addition, the output device may include a projector, a vibrator, and the like.
 The control device controls a device based on the information provided by the control unit 230. The controlled device may be included in the device that realizes the output unit 240, or may be an external device. More specifically, the control device includes, for example, a processor or processing circuit that generates control commands. When an external device is controlled, the output unit 240 may further include a communication device that transmits the control commands to the external device. The control device may, for example, control a printer that outputs the information provided by the control unit 230 as printed matter. The control device may include a driver that controls the writing of the information provided by the control unit 230 to a storage device or a removable recording medium. Alternatively, the control device may control a device other than the one that outputs or records the information provided by the control unit 230. For example, the control device may turn on a lighting fixture, turn off the picture on a television, adjust the volume of an audio device, or control the movement of a robot.
 The software that provides information to an external service supplies the information provided by the control unit 230 to the external service, for example via the external service's API. The software may provide the information to a server of the external service, or to application software of the service running on the client device. The provided information need not be reflected in the external service immediately; for example, it may be provided as candidates for the user to post or transmit to the external service. More specifically, the software may provide text used as candidates for search keywords or URLs (Uniform Resource Locators) that the user enters into browser software running on the client device. The software may also, on behalf of the user, post text, images, videos, audio, and the like to an external service such as social media.
 The configuration example of the terminal device 200 according to the first embodiment of the present disclosure has been described above with reference to FIG. 2. Next, a configuration example of the server device 100 according to the first embodiment of the present disclosure will be described.
 [1.3. Configuration example of server device]
 (1.3.1. When generating a context map)
 FIG. 3 is an explanatory diagram illustrating a configuration example of the server device 100 according to the first embodiment of the present disclosure. FIG. 3 shows the configuration of the server device 100 when generating a context map.
 As shown in FIG. 3, the server device 100 includes a processing unit 110. When generating a context map, the processing unit 110 includes a sensor correction unit 111 and a context map generation unit 112.
 The sensor correction unit 111 corrects the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture/motion measurement unit 130. The sensor data sent from the terminal device 200 may include acceleration data, angular velocity data, geomagnetic data, atmospheric pressure data, and the like. The sensor correction unit 111 outputs the corrected sensor data to the context map generation unit 112. The moving body posture/motion measurement unit 130 measures the posture and motion state of the moving body itself, and consists of various sensors such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, and an atmospheric pressure sensor. The sensor data sent from the moving body posture/motion measurement unit 130 indicates the posture and motion of the moving body.
 Let X denote the sensor data sent from the terminal device 200. X may contain both Y, the change in posture and motion of the terminal device 200 itself, and Z, the sensor data sent from the moving body posture/motion measurement unit 130. As one example of correcting the sensor data sent from the terminal device 200, the sensor correction unit 111 therefore computes the difference between X and Z to obtain Y, the change in posture and motion of the terminal device 200 itself.
 When obtaining Y, the sensor correction unit 111 computes the difference between the sensor data X sent from the terminal device 200 and the sensor data Z sent from the moving body posture/motion measurement unit 130 separately for each type of sensor data. For example, to obtain the change in angular velocity of the terminal device 200 itself, the sensor correction unit 111 computes the difference between the sensor data of the gyro sensor 213 sent from the terminal device 200 and the sensor data of the gyro sensor sent from the moving body posture/motion measurement unit 130.
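 The per-sensor differencing described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function and variable names are our own, and we assume both sensor streams carry timestamps, so the vehicle stream must be resampled onto the terminal's sample times before subtracting.

```python
import numpy as np

def isolate_user_motion(terminal_gyro, vehicle_gyro, terminal_t, vehicle_t):
    """Estimate the user's own angular velocity Y = X - Z.

    terminal_gyro: (N, 3) angular-velocity samples X from the carried device.
    vehicle_gyro:  (M, 3) angular-velocity samples Z from the vehicle sensor.
    terminal_t, vehicle_t: sample timestamps in seconds (increasing).
    The vehicle stream is linearly interpolated onto the terminal
    timestamps, then subtracted channel by channel.
    """
    aligned = np.stack(
        [np.interp(terminal_t, vehicle_t, vehicle_gyro[:, k]) for k in range(3)],
        axis=1,
    )
    return terminal_gyro - aligned
```

 The same pattern applies to the acceleration, geomagnetic, and atmospheric pressure channels: each is differenced against the corresponding vehicle-mounted sensor.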
 The moving body posture/motion measurement unit 130 consists of sensors that measure changes in the posture and motion of the moving body on which the user carrying or wearing the terminal device 200 is riding. If the server device 100 is provided on the moving body, the moving body posture/motion measurement unit 130 may be provided inside the server device 100, or may be connected to the server device 100 by wire or wirelessly. If the server device 100 is not provided on the moving body, the moving body posture/motion measurement unit 130 is provided on the moving body and transmits its sensor data to the server device 100 wirelessly.
 The context map generation unit 112 generates or updates the moving body context map 120 using the sensor data corrected by the sensor correction unit 111. In the following description, "generate or update" is sometimes abbreviated simply as "generate".
 When generating the moving body context map 120, the context map generation unit 112 uses positioning data from the in-moving-body relative positioning unit 140. The in-moving-body relative positioning unit 140 consists of a device that measures the relative position of the terminal device 200 within the moving body. For example, the in-moving-body relative positioning unit 140 emits a predetermined radio signal and measures the position of the terminal device 200 (its position relative to the in-moving-body relative positioning unit 140) from the signal strength observed when the terminal device 200 receives that signal.
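 One common way to turn a received signal strength into a relative distance, sketched below, is the log-distance path-loss model. The disclosure does not specify the model or its constants, so the formula, the reference power, and the path-loss exponent here are all assumptions for illustration; in practice they would be calibrated for the vehicle interior.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.5):
    """Estimate transmitter-receiver distance (metres) from received signal
    strength via the log-distance path-loss model:

        RSSI = tx_power - 10 * n * log10(d)

    tx_power_dbm is the RSSI expected at 1 m; the exponent n depends on the
    environment (roughly 2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

 With several such distance estimates from beacons at known in-vehicle positions, the terminal's relative position could then be obtained by trilateration.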
 With the configuration shown in FIG. 3, the server device 100 can correct the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture/motion measurement unit 130. By doing so, the server device 100 can use the sensor data sent from the terminal device 200 to generate a highly accurate moving body context map 120.
 (1.3.2. During positioning)
 FIG. 4 is an explanatory diagram illustrating a configuration example of the server device 100 according to the first embodiment of the present disclosure. FIG. 4 shows the configuration of the server device 100 when measuring the position of the terminal device 200 within the moving body using the context map.
 As shown in FIG. 4, the server device 100 includes a sensor correction unit 111, a context estimation unit 113, an in-moving-body positioning unit 114, a POI extraction unit 115, and a positioning result distribution unit 116.
 Like the sensor correction unit 111 shown in FIG. 3, the sensor correction unit 111 corrects the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture/motion measurement unit 130. The sensor data sent from the terminal device 200 and the moving body posture/motion measurement unit 130 may include acceleration data, angular velocity data, geomagnetic data, atmospheric pressure data, and the like. The sensor correction unit 111 outputs the corrected sensor data to the context estimation unit 113.
 The context estimation unit 113 estimates the context of the user carrying or wearing the terminal device 200 from the sensor data corrected by the sensor correction unit 111, and outputs the estimation result to the in-moving-body positioning unit 114. Through action recognition, the context estimation unit 113 can recognize action labels such as staying, walking, running, sitting, eating, sleeping, jumping, stairs, elevator, escalator, bicycle, bus, train, automobile, ship, or airplane. Since action recognition techniques are described in many publications, for example JP 2012-8771A, a detailed description is omitted here.
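 As a toy illustration of how a handful of action labels could be derived from corrected accelerometer data, the sketch below thresholds the fluctuation of the acceleration magnitude over a window. This is not the disclosed method: the function name and the thresholds are ours, and a practical system would use a trained model as in the literature cited above.

```python
import numpy as np

def classify_action(accel, fs=50.0):
    """Assign a coarse action label to a window of 3-axis accelerometer
    samples (m/s^2) sampled at fs Hz. Illustrative thresholds only."""
    magnitude = np.linalg.norm(accel, axis=1)
    activity = magnitude.std()  # how strongly the signal fluctuates
    if activity < 0.3:
        return "stay"
    return "walk" if activity < 3.0 else "run"
```

 A richer label set (stairs, elevator, and so on) would need additional features such as barometric trends and step frequency.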
 The in-moving-body positioning unit 114 measures the position of the terminal device 200 within the moving body using the user context estimated by the context estimation unit 113 and the moving body context map 120, and outputs the positioning result to the POI extraction unit 115.
 Using the positioning result for the terminal device 200 produced by the in-moving-body positioning unit 114, the POI extraction unit 115 extracts a POI (Point Of Interest) within the moving body, as in-vehicle facility information, from the in-moving-body POI map 145. The POI extraction unit 115 then outputs the positioning result for the terminal device 200 within the moving body and the extracted POI to the positioning result distribution unit 116.
 The positioning result distribution unit 116 distributes the positioning result for the terminal device 200 within the moving body and the extracted POI, as output by the POI extraction unit 115, to the terminal device 200. By outputting the distributed positioning result and POI, the terminal device 200 can present its user with the current position within the moving body.
 FIG. 5 is an explanatory diagram showing examples of the moving body context map 120 and the in-moving-body POI map 145. With the configuration shown in FIG. 3, the server device 100 can generate a moving body context map 120 such as the one shown in FIG. 5. Each ellipse in the moving body context map 120 of FIG. 5 indicates a place where the terminal device 200 is likely to be located. Each ellipse is associated with the behavior of the user carrying or wearing the terminal device 200. The associated behavior is not limited to one; for example, the probabilities of sitting, standing, and walking may each be associated with an ellipse.
 By referring to the moving body context map 120, the server device 100 can grasp the behavior of the user carrying or wearing the terminal device 200 at each place. From the moving body context map 120 alone, however, the server device 100 cannot tell what kind of place each place is.
 The server device 100 can therefore determine what each place is by checking against the in-moving-body POI map 145 shown in FIG. 5. In the example of FIG. 5, the in-moving-body POI map 145 represents the seat map of a Shinkansen train, and from the positioning result for the terminal device 200 produced by the in-moving-body positioning unit 114, the POI extraction unit 115 can extract, for example, the information that the terminal device 200 is at seat 1E.
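 One simple way such a POI lookup could be realized is to store each POI as a labeled bounding box in vehicle-local coordinates and return the box containing the estimated position. The seat labels follow the FIG. 5 example, but the coordinates and map structure below are hypothetical, chosen only for illustration.

```python
# Hypothetical in-moving-body POI map: label -> (x_min, y_min, x_max, y_max)
# in car-local coordinates (metres).
POI_MAP = {
    "seat 1D": (0.0, 0.0, 0.5, 0.5),
    "seat 1E": (0.6, 0.0, 1.1, 0.5),
    "aisle":   (1.2, 0.0, 1.8, 10.0),
}

def extract_poi(x, y, poi_map=POI_MAP):
    """Return the label of the first POI whose bounding box contains (x, y),
    or None if the position falls inside no POI."""
    for label, (x0, y0, x1, y1) in poi_map.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None
```

 A positioning result of, say, (0.8, 0.2) would then map to "seat 1E", which is the kind of facility information the POI extraction unit 115 attaches to the positioning result.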
 <2. Second Embodiment>
 [2.1. Server device configuration example 1]
 Next, a second embodiment of the present disclosure will be described. The first embodiment described a server device 100 that corrects the sensor data transmitted from the terminal device 200 using sensor data measured on the moving body. The second embodiment describes a server device 100 that estimates, from the sensor data transmitted from the terminal device 200, what movement state the user carrying or wearing the terminal device 200 is in. By estimating the user's movement state, the server device 100 can select, according to that state, which of the sensor data transmitted from the terminal device 200 to use for generating the context map and for positioning within the moving body.
 (2.1.1. When generating a context map)
 First, a configuration example of the server device 100 when generating a context map will be described. FIG. 6 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure. FIG. 6 shows the configuration of the server device 100 when generating a context map.
 As shown in FIG. 6, the server device 100 according to the second embodiment of the present disclosure includes a movement mode estimation unit 117, a used-sensor selection unit 118, and a context map generation unit 112.
 The movement mode estimation unit 117 performs action recognition using the sensor data transmitted from the terminal device 200 and estimates what movement state the user carrying or wearing the terminal device 200 is in. The sensor data sent from the terminal device 200 may include acceleration data, angular velocity data, geomagnetic data, atmospheric pressure data, and the like. Hereinafter, the user's movement state is also called the movement mode. When the movement mode estimation unit 117 has estimated the user's movement mode, it outputs the result to the used-sensor selection unit 118.
Through action recognition using the sensor data, the movement mode estimation unit 117 can recognize action labels such as, for example, staying, walking, running, sitting, eating, sleeping, jumping, stairs, elevator, escalator, bicycle, bus, train, car, ship, or airplane. Since methods of action recognition using sensor data are described in many documents, such as Japanese Patent Application Laid-Open No. 2012-8771, a detailed description is omitted here.
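As an illustrative sketch only, the kind of action recognition performed by the movement mode estimation unit 117 can be approximated by thresholding simple statistics of the accelerometer signal. The labels, thresholds, and function name below are assumptions for illustration and are not taken from the disclosure or from JP 2012-8771:

```python
from statistics import mean, variance

def estimate_movement_mode(accel_magnitudes):
    # accel_magnitudes: a window of accelerometer magnitude samples (m/s^2).
    # Returns a coarse action label; all thresholds are illustrative only.
    g = 9.8
    var = variance(accel_magnitudes)
    avg = mean(accel_magnitudes)
    if var < 0.05:
        return "stay"                     # almost no motion energy
    if var < 1.0 and abs(avg - g) < 0.5:
        return "vehicle"                  # smooth, sustained motion (train, car, ...)
    if var < 6.0:
        return "walk"
    return "run"
```

A practical implementation would fuse several sensors and use a trained classifier rather than fixed thresholds.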
Based on the estimation result, by the movement mode estimation unit 117, of the movement mode of the user carrying or wearing the terminal device 200, the used sensor selection unit 118 selects, from among the sensor data sent from the terminal device 200, the sensor data to be used for generating the in-moving-body context map 120.
For example, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is inside a train that is accelerating or decelerating, the geomagnetic field inside such a train is disturbed by the motors, so the used sensor selection unit 118 may decide not to use the sensor data output by the geomagnetic sensor 211.
Also, for example, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is inside an automobile, the geomagnetic field inside the automobile is disturbed, so the used sensor selection unit 118 may decide not to use the sensor data output by the geomagnetic sensor 211.
Also, for example, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is on a ship sailing on a sea with high waves, the used sensor selection unit 118 may decide not to use the sensor data output by the acceleration sensor 212.
Rather than deciding on the use or non-use of sensor data, the used sensor selection unit 118 may apply weighting when the sensor data is used. For example, when the movement mode estimation unit 117 estimates a movement mode in which the terminal device 200 is on a ship sailing on a sea with high waves, the used sensor selection unit 118 may decide to reduce the weight of the sensor data output by the acceleration sensor 212.
As a criterion for deciding whether to apply weighting, the used sensor selection unit 118 may use whether the estimated movement mode is one that causes magnetic disturbance. For example, when the estimated movement mode is one that causes magnetic disturbance, such as being inside a train, the used sensor selection unit 118 may decide on the use or non-use of the sensor data output by the geomagnetic sensor 211 rather than weighting it. Note that, even when riding the same railway, if the user is inside a car unaffected by motors, such as a trolley train, the used sensor selection unit 118 may instead apply weighting to the sensor data output by the geomagnetic sensor 211.
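The selection and weighting rules above can be summarized as a small lookup. The sketch below assumes hypothetical mode names and weight values; a weight of 0.0 corresponds to a decision not to use the sensor:

```python
def select_sensor_weights(movement_mode):
    # Per-sensor weights for context-map generation; 0.0 means "do not use".
    # Mode names and weight values are illustrative assumptions.
    weights = {"geomagnetic": 1.0, "acceleration": 1.0,
               "angular_velocity": 1.0, "pressure": 1.0}
    if movement_mode in ("train_accelerating", "car"):
        weights["geomagnetic"] = 0.0     # motors / car body disturb geomagnetism
    elif movement_mode == "ship_rough_sea":
        weights["acceleration"] = 0.2    # wave motion: down-weight rather than discard
    elif movement_mode == "trolley_train":
        weights["geomagnetic"] = 0.5     # no motor nearby: keep, but weighted
    return weights
```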
The context map generation unit 112 generates or updates the in-moving-body context map 120 using the sensor data selected or weighted by the used sensor selection unit 118. As in the first embodiment, the context map generation unit 112 uses positioning data from the in-moving-body relative positioning unit 140 when generating or updating the in-moving-body context map 120. The in-moving-body relative positioning unit 140 is configured by a device that measures the relative position of the terminal device 200 within the moving body. For example, the in-moving-body relative positioning unit 140 transmits a predetermined radio wave and measures the position of the terminal device 200 (its position relative to the in-moving-body relative positioning unit 140) from the signal strength with which the terminal device 200 receives that radio wave.
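A distance estimate from received signal strength, as used by the in-moving-body relative positioning unit 140, can be sketched with the standard log-distance path-loss model. The reference power at 1 m and the path-loss exponent are assumed calibration constants, not values given in the disclosure:

```python
def distance_from_rssi(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.0):
    # Log-distance path-loss model: RSSI(d) = tx_power - 10 * n * log10(d),
    # where tx_power is the RSSI expected at 1 m and n is the path-loss exponent.
    # Solving for d gives the estimated terminal-to-transmitter distance in metres.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

Combining such distances from several transmitters would then yield a relative position within the moving body.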
Data for a given location is progressively accumulated in the in-moving-body context map 120. Accordingly, because the sensor data is weighted, those weights can be used in positioning decisions that refer to the in-moving-body context map 120, described later.
With the configuration shown in FIG. 6, the server device 100 can select the sensor data sent from the terminal device 200 based on the movement mode. By selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can pick out highly accurate sensor data from among the sensor data sent from the terminal device 200 and generate a highly accurate in-moving-body context map 120.
(2.1.2. During positioning)
FIG. 7 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure. FIG. 7 shows the configuration of the server device 100 when the position of the terminal device 200 within the moving body is measured using the context map.
As illustrated in FIG. 7, the server device 100 according to the second embodiment of the present disclosure includes a movement mode estimation unit 117, a used sensor selection unit 118, and an in-moving-body positioning unit 114.
Since the functions of the movement mode estimation unit 117 and the used sensor selection unit 118 are the same as those described with reference to FIG. 6, a detailed description is omitted here. The used sensor selection unit 118 outputs the sensor data it has selected or weighted to the in-moving-body positioning unit 114.
The in-moving-body positioning unit 114 measures the position of the terminal device 200 within the moving body using the sensor data selected or weighted by the used sensor selection unit 118 and the in-moving-body context map 120. When the in-moving-body positioning unit 114 has measured the position of the terminal device 200 within the moving body, it outputs the positioning result to the terminal device 200. The in-moving-body positioning unit 114 may also extract POIs within the moving body as facility information within the moving body. As described in the first embodiment, the in-moving-body positioning unit 114 may extract POIs within the moving body using the result of its positioning of the terminal device 200.
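One way to use a weighted context map for positioning, as described above, is fingerprint matching: compare the current observation against the reference readings stored for each position and pick the position with the smallest weighted error. This is a minimal sketch under assumed map and sensor names, not the matching method of the disclosure:

```python
def locate_in_context_map(observation, context_map, weights):
    # context_map: {position_label: {sensor_name: reference_value}}
    # observation: {sensor_name: measured_value}
    # weights:     {sensor_name: weight}; sensors missing from weights default to 1.0.
    def cost(reference):
        return sum(weights.get(name, 1.0) * (observation[name] - ref) ** 2
                   for name, ref in reference.items())
    # Return the position whose stored fingerprint best matches the observation.
    return min(context_map, key=lambda pos: cost(context_map[pos]))
```

Setting a sensor's weight to 0.0 removes it from the match, mirroring the use/non-use decision of the used sensor selection unit 118.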
With the configuration shown in FIG. 7, the server device 100 can select the sensor data sent from the terminal device 200 based on the movement mode. By selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can measure the position of the terminal device 200 within the moving body based on the in-moving-body context map 120.
[2.2. Server device configuration example 2]
Configuration example 1 described above showed a server device 100 that estimates the movement mode of the user carrying or wearing the terminal device 200 from the sensor data and selects or weights the sensor data to be used based on the result of that estimation. The following configuration example 2 shows a server device 100 that, based on the result of the movement mode estimation, not only selects or weights the sensor data to be used but also selects a context map.
FIG. 8 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure. FIG. 8 shows the configuration of the server device 100 when generating a context map.
As illustrated in FIG. 8, the server device 100 according to the second embodiment of the present disclosure includes a movement mode estimation unit 117, a used sensor selection unit 118, a context map selection unit 119, and a context map generation unit 112.
Since the functions of the movement mode estimation unit 117 and the used sensor selection unit 118 are the same as those described with reference to FIG. 6, a detailed description is omitted here.
The context map selection unit 119 selects the context map to be generated based on the estimation result, by the movement mode estimation unit 117, of the movement mode of the user carrying or wearing the terminal device 200. When the context map selection unit 119 has selected the context map to be generated, it outputs the selection result to the context map generation unit 112.
FIG. 8 shows in-moving-body context maps 120a and 120b and a global context map 121 as the context maps available for selection. Assume, for example, that the in-moving-body context map 120a is a context map selected when the user carrying or wearing the terminal device 200 is inside a train, and that the in-moving-body context map 120b is a context map selected when the user carrying or wearing the terminal device 200 is inside a ship. The global context map 121 is a context map selected when the user carrying or wearing the terminal device 200 is not inside a moving body.
The context map selection unit 119 may use position information measured by the absolute positioning unit 150 and information provided by the moving body identification unit 160. The absolute positioning unit 150 measures the absolute position of the moving body and may be configured by a GNSS receiver or the like. The moving body identification unit 160 provides information identifying which part of the moving body (floor, car, or the like) the user is in, and this information may be provided by, for example, sound waves or electromagnetic waves. With the information provided by the moving body identification unit 160, the context map selection unit 119 can determine whether the user is in a moving body within which movement is possible (a train, a passenger ship, or the like).
When the context map selection unit 119 determines from the information provided by the moving body identification unit 160 that the user is inside a moving body, it can select non-use of the global context map 121 and use of one of the in-moving-body context maps 120a and 120b. Also, when the context map selection unit 119 determines from the estimation result of the user's movement mode that the user is not inside a moving body, it can select use of the global context map 121 and non-use of the in-moving-body context maps 120a and 120b.
Also, for example, when the context map selection unit 119 can identify from the position information measured by the absolute positioning unit 150 that the user is on a railway track and determines from the estimation result of the user's movement mode that the user is inside a train, it can select the in-moving-body context map 120a.
Also, for example, when the context map selection unit 119 can identify from the position information measured by the absolute positioning unit 150 that the user is at sea, and from the information provided by the moving body identification unit 160 that the user is inside a passenger ship, it can select the in-moving-body context map 120b.
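The map-selection rules described in this section can be sketched as a small decision function. The map names and the boolean inputs (derived from the moving body identification unit 160 and the absolute positioning unit 150) are illustrative assumptions:

```python
def select_context_map(inside_vehicle, on_rail, at_sea, movement_mode):
    # inside_vehicle: from the moving body identification unit 160
    # on_rail/at_sea: from the absolute positioning unit 150
    # movement_mode:  from the movement mode estimation unit 117
    if not inside_vehicle:
        return "global_context_map"      # corresponds to map 121
    if on_rail and movement_mode == "train":
        return "train_context_map"       # corresponds to map 120a
    if at_sea:
        return "ship_context_map"        # corresponds to map 120b
    return "global_context_map"          # fall back when no rule matches
```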
The context map generation unit 112 generates or updates the context map selected by the context map selection unit 119, using the sensor data selected or weighted by the used sensor selection unit 118.
With the configuration shown in FIG. 8, the server device 100 can select the sensor data sent from the terminal device 200 based on the movement mode. Also, with this configuration, the server device 100 can select the context map to be generated based on the movement mode, the positioning information, and the information identifying the location inside the moving body.
By selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can pick out highly accurate sensor data from among the sensor data sent from the terminal device 200 and generate a highly accurate context map. Also, by selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can select an appropriate context map when a plurality of context maps are prepared according to the type of moving body.
(2.2.2. During positioning)
FIG. 9 is an explanatory diagram illustrating a configuration example of the server device 100 according to the second embodiment of the present disclosure. FIG. 9 shows the configuration of the server device 100 when the position of the terminal device 200 within the moving body is measured using the context map.
As illustrated in FIG. 9, the server device 100 according to the second embodiment of the present disclosure includes a movement mode estimation unit 117, a used sensor selection unit 118, and an in-moving-body positioning unit 114.
Since the functions of the movement mode estimation unit 117, the used sensor selection unit 118, and the context map selection unit 119 are the same as those described with reference to FIG. 8, a detailed description is omitted here. The used sensor selection unit 118 outputs the sensor data it has selected or weighted to the in-moving-body positioning unit 114. When the context map selection unit 119 has selected the context map to be used for positioning, it outputs the selection result to the in-moving-body positioning unit 114.
The in-moving-body positioning unit 114 measures the position of the terminal device 200 within the moving body using the sensor data selected or weighted by the used sensor selection unit 118 and the context map selected by the context map selection unit 119. When the in-moving-body positioning unit 114 has measured the position of the terminal device 200 within the moving body, it outputs the positioning result to the terminal device 200.
With the configuration shown in FIG. 9, the server device 100 can select the sensor data sent from the terminal device 200 based on the movement mode. Also, with this configuration, the server device 100 can select the context map to be used at positioning time based on the movement mode, the positioning information, and the information identifying the location inside the moving body.
By selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can measure the position of the terminal device 200 within the moving body based on the in-moving-body context map 120. Also, by selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can select an appropriate context map when a plurality of context maps are prepared according to the type of moving body.
For example, if it can be seen from the sensor data sent from the terminal device 200 that the user carrying or wearing the terminal device 200 is walking to a port, the server device 100 can select the global context map 121. Thereafter, if absolute positioning indicates that the user carrying or wearing the terminal device 200 is at sea, the server device 100 can select the in-moving-body context map 120b.
Of course, a server device 100 combining the first and second embodiments described above can also be realized. That is, it is also possible to realize a server device 100 that corrects the sensor data transmitted from the terminal device 200 using the sensor data measured by the moving body, and then estimates, from the sensor data transmitted from the terminal device 200, what kind of moving state the user carrying or wearing the terminal device 200 is in.
According to the first or second embodiment described above, the server device 100 can measure the position of the terminal device 200 within the moving body and provide the terminal device 200 with information on that position. For example, when the user carrying the terminal device 200 is riding a bullet train, the server device 100 can detect the car number and seat position in that bullet train by measuring the position of the terminal device 200 within the train.
Also, for example, when the user carrying the terminal device 200 is aboard a passenger ship, the server device 100 can detect a detailed position in that ship (for example, a restaurant, pool, or casino) by measuring the position of the terminal device 200 within the ship.
Also, for example, the server device 100 may change the information it provides to the terminal device 200 according to the position of the user carrying the terminal device 200. For example, when the user carrying the terminal device 200 is riding a sightseeing trolley train, the server device 100 may change the information it provides, for example information on the scenery visible from the car window, depending on whether the user is sitting on the right side or the left side of the train.
The position information within the moving body detected by the server device 100 may be shared with another terminal device; in that case, the server device 100 can provide the other terminal device with the position information, within the moving body, of the user carrying the terminal device 200.
<3. Hardware configuration example>
Next, a hardware configuration of the information processing apparatus according to the embodiment of the present disclosure will be described with reference to FIG. FIG. 10 is a block diagram illustrating a hardware configuration example of the information processing apparatus according to the embodiment of the present disclosure.
The information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905. The information processing apparatus 900 may also include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925. Furthermore, the information processing apparatus 900 may include an imaging device 933 and a sensor 935 as necessary. The information processing apparatus 900 may have a processing circuit such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array) instead of, or in addition to, the CPU 901.
The CPU 901 functions as an arithmetic processing device and a control device, and controls all or part of the operation in the information processing apparatus 900 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or a removable recording medium 927. The ROM 903 stores programs, calculation parameters, and the like used by the CPU 901. The RAM 905 temporarily stores programs used in the execution of the CPU 901 and parameters that change as appropriate during that execution. The CPU 901, the ROM 903, and the RAM 905 are connected to one another by a host bus 907 configured by an internal bus such as a CPU bus. The host bus 907 is further connected via a bridge 909 to an external bus 911 such as a PCI (Peripheral Component Interconnect/Interface) bus.
The input device 915 is a device operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, or a lever. The input device 915 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device 929 such as a mobile phone supporting the operation of the information processing apparatus 900. The input device 915 includes an input control circuit that generates an input signal based on the information input by the user and outputs it to the CPU 901. By operating the input device 915, the user inputs various data to the information processing apparatus 900 and instructs it to perform processing operations.
The output device 917 is configured by a device capable of notifying the user of acquired information using senses such as sight, hearing, or touch. The output device 917 may be, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display, an audio output device such as a speaker or headphones, or a vibrator. The output device 917 outputs the result obtained by the processing of the information processing apparatus 900 as video such as text or an image, as sound such as voice or audio, or as vibration.
The storage device 919 is a data storage device configured as an example of the storage unit of the information processing apparatus 900. The storage device 919 is configured by, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device. The storage device 919 stores, for example, programs executed by the CPU 901, various data, and various data acquired from the outside.
The drive 921 is a reader/writer for a removable recording medium 927 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, and is built into or externally attached to the information processing apparatus 900. The drive 921 reads information recorded on the attached removable recording medium 927 and outputs it to the RAM 905. The drive 921 also writes records to the attached removable recording medium 927.
The connection port 923 is a port for connecting a device to the information processing apparatus 900. The connection port 923 may be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, or a SCSI (Small Computer System Interface) port. The connection port 923 may also be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like. By connecting the externally connected device 929 to the connection port 923, various data can be exchanged between the information processing apparatus 900 and the externally connected device 929.
The communication device 925 is a communication interface configured by, for example, a communication device for connecting to a communication network 931. The communication device 925 may be, for example, a communication card for a LAN (Local Area Network), Bluetooth (registered trademark), Wi-Fi, or WUSB (Wireless USB). The communication device 925 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication. The communication device 925 transmits and receives signals and the like to and from the Internet and other communication devices using a predetermined protocol such as TCP/IP. The communication network 931 connected to the communication device 925 is a network connected by wire or wirelessly, and may include, for example, the Internet, a home LAN, infrared communication, radio wave communication, or satellite communication.
The imaging device 933 is a device that images real space and generates a captured image, using an imaging element such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor and various members such as a lens for controlling the formation of a subject image on the imaging element. The imaging device 933 may capture still images or moving images.
 The sensor 935 is any of various sensors such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, an atmospheric pressure sensor, or a sound sensor (microphone). The sensor 935 acquires information about the state of the information processing apparatus 900 itself, such as the attitude of the housing of the information processing apparatus 900, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around the information processing apparatus 900. The sensor 935 may also include a GPS receiver that receives GPS (Global Positioning System) signals and measures the latitude, longitude, and altitude of the apparatus.
 An example of the hardware configuration of the information processing apparatus 900 has been described above. Each of the components described above may be configured using general-purpose members, or may be configured by hardware specialized for the function of each component. Such a configuration can be changed as appropriate according to the technical level at the time of implementation.
 <3. Summary>
 As described above, according to the first embodiment of the present disclosure, the server device 100 is provided that can correct the sensor data sent from the terminal device 200 using the sensor data sent from the moving body posture motion measuring unit 130. By correcting the sensor data sent from the terminal device 200 with the sensor data sent from the moving body posture motion measuring unit 130, the server device 100 can generate a highly accurate context map 120 of the moving body interior from the sensor data sent from the terminal device 200, and can perform highly accurate positioning inside the moving body by referring to the context map 120.
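The correction summarized above (and made explicit in configuration (2) below, which computes a difference between the first and second sensing data) can be sketched in a few lines. The following is an illustrative sketch only, not code from this publication; the function and variable names are assumptions:

```python
def correct_terminal_data(terminal_samples, vehicle_samples):
    """Subtract the moving body's own motion (second sensing data, from a
    sensor fixed to the moving body) from the terminal's readings (first
    sensing data), leaving the user's motion relative to the moving body's
    interior. The two streams are assumed to be time-aligned."""
    return [
        [t - v for t, v in zip(t_vec, v_vec)]
        for t_vec, v_vec in zip(terminal_samples, vehicle_samples)
    ]

# The terminal senses the vehicle's acceleration plus the user's own
# movement; the vehicle-mounted sensor senses only the vehicle's motion.
terminal = [[0.3, 0.0], [0.5, 0.1], [0.2, -0.1]]
vehicle = [[0.3, 0.0], [0.3, 0.0], [0.3, 0.0]]
relative = correct_terminal_data(terminal, vehicle)
print(relative)  # per-sample motion attributable to the user
```

In practice the two sensor streams would first have to be time-synchronized and expressed in a common coordinate frame before such a difference is meaningful.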
 Also as described above, according to the second embodiment of the present disclosure, the server device 100 is provided that can select the sensor data sent from the terminal device 200 based on the movement mode. By selecting the sensor data sent from the terminal device 200 based on the movement mode, the server device 100 can pick highly accurate sensor data out of the sensor data sent from the terminal device 200, generate a highly accurate context map 120 of the moving body interior, and perform highly accurate positioning inside the moving body by referring to the context map 120.
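The mode-based selection of the second embodiment can likewise be sketched. The mode labels and the choice of which modes count as reliable below are illustrative assumptions, not taken from this publication:

```python
# Sensor samples are tagged with the user's recognized movement mode; only
# modes in which the terminal is assumed (for illustration) to be carried
# steadily are kept for context-map generation.
RELIABLE_MODES = {"still", "walking"}  # hypothetical mode labels

def select_by_movement_mode(samples):
    """Keep only samples whose movement mode is considered reliable."""
    return [s for s in samples if s["mode"] in RELIABLE_MODES]

samples = [
    {"mode": "still", "accel": (0.0, 0.1)},
    {"mode": "running", "accel": (1.8, 0.9)},
    {"mode": "walking", "accel": (0.4, 0.2)},
]
selected = select_by_movement_mode(samples)
print([s["mode"] for s in selected])  # modes that survive the filter
```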
 Embodiments of the present disclosure may include, for example, an information processing apparatus or system as described above, an information processing method executed by the information processing apparatus or system, a computer program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the computer program is recorded.
 Note that the software that implements the user interface and applications described in the above embodiments may be realized as a web application used via a network such as the Internet. The web application may be realized by, for example, a markup language such as HTML (HyperText Markup Language), SGML (Standard Generalized Markup Language), or XML (Extensible Markup Language).
 The preferred embodiments of the present disclosure have been described above in detail with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally also belong to the technical scope of the present disclosure.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure can exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 The following configurations also belong to the technical scope of the present disclosure.
(1)
 An information processing apparatus including:
 a correction unit that corrects first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided in a moving body on which the user is riding; and
 a processing unit that executes processing for obtaining the position of the user inside the moving body using a correction result of the correction unit.
(2)
 The information processing apparatus according to (1), wherein the correction unit calculates a difference between the first sensing data and the second sensing data.
(3)
 The information processing apparatus according to (1) or (2), wherein the processing unit executes processing for generating map information using a correction result of the correction unit.
(4)
 The information processing apparatus according to any one of (1) to (3), wherein the processing unit executes positioning processing, inside the moving body, of the user who provides the first sensing data, using map information generated using a correction result of the correction unit.
(5)
 The information processing apparatus according to (4), wherein the processing unit extracts facility information based on a result of the positioning processing.
(6)
 The information processing apparatus according to any one of (1) to (5), wherein the first sensing data includes any of acceleration data, angular velocity data, and geomagnetic data.
(7)
 The information processing apparatus according to any one of (1) to (6), wherein the moving body is any of a train, an automobile, and a ship.
(8)
 An information processing method including:
 correcting first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided in a moving body on which the user is riding; and
 executing processing for obtaining the position of the user inside the moving body using a result of the correction.
(9)
 A computer program for causing a computer to execute:
 correcting first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided in a moving body on which the user is riding; and
 executing processing for obtaining the position of the user inside the moving body using a result of the correction.
 100 server device
 200 terminal device

Claims (9)

  1.  An information processing apparatus comprising:
     a correction unit that corrects first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided in a moving body on which the user is riding; and
     a processing unit that executes processing for obtaining the position of the user inside the moving body using a correction result of the correction unit.
  2.  The information processing apparatus according to claim 1, wherein the correction unit calculates a difference between the first sensing data and the second sensing data.
  3.  The information processing apparatus according to claim 1, wherein the processing unit executes processing for generating map information using a correction result of the correction unit.
  4.  The information processing apparatus according to claim 1, wherein the processing unit executes positioning processing, inside the moving body, of the user who provides the first sensing data, using map information generated using a correction result of the correction unit.
  5.  The information processing apparatus according to claim 4, wherein the processing unit extracts facility information based on a result of the positioning processing.
  6.  The information processing apparatus according to claim 1, wherein the first sensing data includes any of acceleration data, angular velocity data, and geomagnetic data.
  7.  The information processing apparatus according to claim 1, wherein the moving body is any of a train, an automobile, and a ship.
  8.  An information processing method including:
     correcting first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided in a moving body on which the user is riding; and
     executing processing for obtaining the position of the user inside the moving body using a result of the correction.
  9.  A computer program for causing a computer to execute:
     correcting first sensing data provided by one or more sensors carried or worn by a user, using second sensing data provided by one or more sensors provided in a moving body on which the user is riding; and
     executing processing for obtaining the position of the user inside the moving body using a result of the correction.
PCT/JP2016/074157 2015-09-28 2016-08-18 Information processing device, information processing method and computer program WO2017056774A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-189383 2015-09-28
JP2015189383A JP2017067468A (en) 2015-09-28 2015-09-28 Information processing device, information processing method, and computer program

Publications (1)

Publication Number Publication Date
WO2017056774A1 true WO2017056774A1 (en) 2017-04-06

Family

ID=58427459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/074157 WO2017056774A1 (en) 2015-09-28 2016-08-18 Information processing device, information processing method and computer program

Country Status (2)

Country Link
JP (1) JP2017067468A (en)
WO (1) WO2017056774A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6776287B2 * 2018-02-16 2020-10-28 Kddi株式会社 Device usable in a moving body, and control program and method for such a device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002148071A (en) * 2000-11-10 2002-05-22 Fuji Xerox Co Ltd Vehicle
JP2014142345A (en) * 2010-04-05 2014-08-07 Qualcomm Inc Methods for displaying associated information, and portable communication terminals


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502980A (en) * 2019-07-11 2019-11-26 Wuhan University Method for identifying scene behaviors of pedestrians playing mobile phones while crossing roads
CN110502980B (en) * 2019-07-11 2021-12-03 武汉大学 Method for identifying scene behaviors of pedestrians playing mobile phones while crossing roads

Also Published As

Publication number Publication date
JP2017067468A (en) 2017-04-06

Similar Documents

Publication Publication Date Title
WO2017056777A1 (en) Information processing device, information processing method and computer program
WO2016098457A1 (en) Information processing device, information processing method, and program
US10867195B2 (en) Systems and methods for monitoring driver state
US20190383620A1 (en) Information processing apparatus, information processing method, and program
EP3252432A1 (en) Information-attainment system based on monitoring an occupant
KR101730534B1 (en) Camera enabled headset for navigation
US10504031B2 (en) Method and apparatus for determining probabilistic context awareness of a mobile device user using a single sensor and/or multi-sensor data fusion
JP6311478B2 (en) Information processing apparatus, information processing method, and program
US20150354951A1 (en) Method and Apparatus for Determination of Misalignment Between Device and Pedestrian
US11181376B2 (en) Information processing device and information processing method
JP2007164441A (en) Mobile object bearing determination device, mobile object bearing determination method, navigation device, and mobile terminal device
JP5870817B2 (en) Information processing apparatus, information processing method, and program
JP2017520762A (en) Uncertainty in mobile device position based on estimated trajectory potential disturbance
WO2017056774A1 (en) Information processing device, information processing method and computer program
CN114175114A (en) System and method for identifying points of interest from inside an autonomous vehicle
Zaib et al. Smartphone based indoor navigation for blind persons using user profile and simplified building information model
WO2018163560A1 (en) Information processing device, information processing method, and program
Mahida et al. Indoor positioning framework for visually impaired people using Internet of Things
WO2015194270A1 (en) Information-processing device, information processing method, and program
US20200292349A1 (en) Audio information providing system, control method, and non-transitory computer readable medium
US20190205580A1 (en) Information processing apparatus, information processing method, and computer program
US20220146662A1 (en) Information processing apparatus and information processing method
WO2022029894A1 (en) Information processing device, information processing system, information processing method, and program
WO2015194269A1 (en) Information-processing device, information processing method, and program
JP2014115769A (en) Information providing device, information providing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16850950

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16850950

Country of ref document: EP

Kind code of ref document: A1