WO2015162949A1 - Communication system, control method, and storage medium - Google Patents
Communication system, control method, and storage medium
- Publication number
- WO2015162949A1 (PCT/JP2015/051411)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- emotion
- information
- user
- data
- unit
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3453—Special cost functions, i.e. other than distance or default speed limit of road segments
- G01C21/3484—Personalized, e.g. from learned user behaviour or user-defined profiles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/265—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network constructional aspects of navigation devices, e.g. housings, mountings, displays
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3605—Destination input or retrieval
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3605—Destination input or retrieval
- G01C21/3617—Destination input or retrieval using user history, behaviour, conditions or preferences, e.g. predicted or inferred from previous use or current movement
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3641—Personalized guidance, e.g. limited guidance on previously travelled routes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3679—Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities
- G01C21/3682—Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities output of POI information on a road map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/80—Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
- G06F16/84—Mapping; Conversion
- G06F16/86—Mapping to a database
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/70—Multimodal biometrics, e.g. combining information from different biometric modalities
Definitions
- the present disclosure relates to a communication system, a control method, and a storage medium.
- emotion data: information on a psychological state
- emotions: psychological states such as discomfort, tension, joy, and excitement
- biological information: information such as sweat volume and heart rate
- As a technique that utilizes such emotion estimation together with the photographs and action logs acquired in large quantities about users, for example, Patent Document 1 below has been proposed.
- Japanese Patent Application Laid-Open No. 2004-151561 discloses a technique for storing the photographer's biometric information and action information at the time of photographing in association with the photographed image. This makes it possible to retrieve important images (images taken in a special psychological state) from a huge number of images.
- Therefore, the present disclosure proposes a communication system, a control method, and a storage medium that can guide the user toward an action for attaining a predetermined emotion according to an emotion map.
- According to the present disclosure, a communication system is proposed that includes an acquisition unit that acquires the user's current position information, a guidance information generation unit that generates guidance information according to the current position information and map information to which emotion data is mapped, and a providing unit that provides the guidance information to the user.
- According to the present disclosure, a control method is proposed that includes obtaining current location information of a user from a client, generating guidance information according to the current location information and map information to which emotion data is mapped, and controlling so as to provide the guidance information to the user.
- According to the present disclosure, a storage medium is proposed that stores a program for causing a computer to function as an acquisition unit that acquires the user's current position information, a guidance information generation unit that generates guidance information according to the current position information and map information to which emotion data is mapped, and a providing unit that provides the guidance information to the user.
- FIG. 1 is a diagram illustrating an overview of a communication system according to an embodiment of the present disclosure.
- FIG. 2 is a block diagram showing an example of the basic configuration of the server according to this embodiment. FIG. 3 is a block diagram showing an example of the internal configuration of the client that performs data collection according to this embodiment. FIG. 4 is a block diagram showing an example of the internal configuration of the client that presents guidance information according to this embodiment.
- FIG. 10 is a sequence diagram showing an operation process when creating data of the content-emotion DB. FIG. 11 is a sequence diagram showing an operation process of content navigation.
- FIG. 13 is a sequence diagram showing an operation process when creating data of the environment-emotion DB. FIG. 14 is a sequence diagram showing an operation process of environment information navigation.
- FIG. 1 is a diagram illustrating an overview of a communication system according to an embodiment of the present disclosure.
- the communication system according to the present embodiment includes a client 3 for data collection, a server 2, and a client 1 for presenting guidance information.
- The data collection client 3 is an information processing terminal such as, for example, a client 3a realized by a camera device, a client 3b realized by a smartphone, or a client 3c realized by a glasses-type HMD (Head Mounted Display).
- Each of the clients 3a to 3c acquires data for estimating the emotion of the owning user (for example, biometric information, a face image, voice, etc.) together with current location information, the content data being viewed, surrounding environment information, and the like, and uploads them to the server 2 via the network 5.
- The server 2 estimates the emotion of the owning user based on the data for estimating emotion collected from each client 3, and generates data indicating the correlation between the estimated emotion and the collected positional information / content data / environment information. For example, the server 2 generates an emotion map in which the estimated emotion is mapped onto a map based on the position information. The server 2 can also generate a database (DB) indicating the correlation between the estimated emotion and content data, and a database indicating the correlation between the estimated emotion and environment information.
- In FIG. 1, only three clients 3a to 3c are shown as examples of the data collection client 3. In practice, however, a large number of data collection clients 3 exist on the network, and the server 2 can collect a large amount of data from them and generate an emotion map, a content-emotion DB, and an environment-emotion DB on a regional, national, or global scale.
- In response to a request from the client 1 for presenting guidance information, the server 2 generates, based on the emotion map, the content-emotion DB, or the environment-emotion DB, guidance information that guides an action for leading the user to a predetermined emotion (for example, an indication of a traveling direction, a viewing recommendation of predetermined content, or a proposal for environmental improvement), and returns it to the client 1.
- The client 1 for presenting guidance information requests the server 2 to transmit guidance information that guides an action for leading the user to a predetermined emotion, and presents the guidance information returned from the server 2 in response to the request to the user by display output / voice output. For example, the client 1 acquires its current position information, the content data being viewed, or surrounding environment information, transmits it to the server 2, and requests transmission of guidance information.
- The guidance information presentation client 1 is realized, for example, by the glasses-type HMD shown in FIG. 1, and can also be realized by a network-connectable terminal such as a smartphone, a tablet terminal, a mobile phone terminal, a camera device, a game machine, or a music player.
- In this way, an action for leading the user to a predetermined emotion can be presented to the user.
- a happiness level map can be generated based on the correlation between happy emotions and position information, and the user can be guided along a path that leads to happy emotions using the happiness level map.
- the data collection client 3 and the guidance information presentation client 1 are illustrated as separate devices, but may be the same device.
- FIG. 2 is a block diagram illustrating an example of a basic configuration of the server 2 according to the present embodiment.
- the server 2 includes a control unit 20, a communication unit 21, an environment-emotion DB 22, a content-emotion DB 23, and an emotion map DB 24.
- The control unit 20 includes, for example, a microcomputer including a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), a nonvolatile memory, and an interface unit, and controls each component of the server 2.
- the control unit 20 functions as a map information generation unit 20a, a guidance information generation unit 20b, an emotion estimation unit 20c, and a provision unit 20d.
- The map information generation unit 20a generates map information obtained by mapping emotion data, and stores the generated map information (also referred to as an emotion map) in the emotion map DB 24. More specifically, the map information generation unit 20a places the emotion data, estimated based on the data for estimating the user's emotion transmitted from the data collection client 3, onto the map on the basis of the position information transmitted together with that data.
- the map information generation unit 20a may generate map information after statistically analyzing the correspondence between a large amount of emotion data transmitted from a large number of clients 3 and position information.
- The data for estimating the user's emotion is, for example, biological information such as sweat, pulse, heartbeat, body vibration, electroencephalogram, myoelectricity, and blood flow change, or a captured image of the user's face (facial expression), voice, and the like.
- The map information generation unit 20a estimates the user's emotion based on the value of the biological information or the analysis result of the captured image. Emotions are generally defined by any one, or a combination, of eight basic emotions: joy, sadness, trust, disgust, fear, anger, surprise, and anticipation. Among such various emotions, the map information generation unit 20a estimates, for example, the happiness level (the degree to which the user feels happy).
- the map information generation unit 20a can estimate the happiness level according to the smile level. Further, the map information generation unit 20a can also analyze the user's voice and estimate the happiness level based on the conversation content, voice pitch, intonation, laughter, and the like.
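As a rough illustration of this kind of estimation (not a method specified in the disclosure), the sketch below combines a smile score from face-image analysis with simple voice features into a single happiness level; the function name, feature set, weights, and 0-1 scales are all assumptions made for the example.

```python
# Illustrative sketch only: the weighting and feature names are assumptions,
# not values specified in the disclosure.

def estimate_happiness_level(smile_score, voice_pitch_hz, laughter_ratio,
                             w_smile=0.6, w_pitch=0.2, w_laughter=0.2):
    """Combine facial and voice cues into a happiness level in [0.0, 1.0].

    smile_score    -- smile level from face-image analysis, 0.0-1.0
    voice_pitch_hz -- average voice pitch, mapped to 0.0-1.0 over an assumed range
    laughter_ratio -- fraction of the analysed audio classified as laughter, 0.0-1.0
    """
    # Map pitch into a 0-1 range (assumed 100-300 Hz band of interest).
    pitch_component = min(max((voice_pitch_hz - 100.0) / 200.0, 0.0), 1.0)
    level = (w_smile * smile_score
             + w_pitch * pitch_component
             + w_laughter * laughter_ratio)
    return min(max(level, 0.0), 1.0)


# Example: a strong smile with a lively voice yields a high happiness level.
print(estimate_happiness_level(smile_score=0.9, voice_pitch_hz=220.0,
                               laughter_ratio=0.3))
```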
- the map information generation unit 20a may generate an emotion map for each user attribute.
- User attributes include, for example, age, age group, gender, hobbies / preferences, birthplace, and the like.
- the estimation of the user's emotion by the map information generation unit 20a described above may be performed in advance on the client 3 side.
- the estimated emotion data of the user is transmitted from the client 3.
- The guidance information generation unit 20b generates guidance information that guides an action for leading the user to a predetermined emotion in response to a request from the client 1 that presents guidance information. More specifically, for example, the guidance information generation unit 20b generates guidance information for guiding the user to a place (or route) associated with a predetermined emotion, based on the current position information transmitted from the client 1 and the emotion map stored in the emotion map DB 24.
- the guidance information includes information that instructs the user in the traveling direction toward a place (or route) associated with a predetermined emotion.
- In addition, the guidance information generation unit 20b generates recommendation information for content data, or an improvement plan for the environment, for leading the user to a predetermined emotion, according to the user's current emotion estimated by the emotion estimation unit 20c (described below) based on the content data currently being viewed by the user or the environment information around the user.
- the guidance information generation unit 20b uses the data stored in the content-emotion DB 23 to generate guidance information for guiding (recommending) content associated with a predetermined emotion to the user.
- For example, the guidance information generation unit 20b generates guidance information that guides (recommends) content data such as photos, videos, and music for leading the user to a predetermined emotion.
- the guidance information generation unit 20b uses the data stored in the environment-emotion DB 22 to generate guidance information for guiding the user to improve the environment information associated with a predetermined emotion.
- For example, when the user's current environment is associated with a depressed emotion, guidance information is generated that proposes environment improvement measures (opening windows, playing music, turning on indoor lights, etc.) for leading the user to the opposite exhilarating, happy emotion.
- the guidance information generating unit 20b outputs the generated guidance information to the providing unit 20d.
- The emotion estimation unit 20c has a function of estimating the user's current emotion based on the content data being viewed by the user or the environment information around the user, which are transmitted from the client 1 that presents guidance information (first and second emotion estimation units).
- The content data being viewed by the user is, for example, content such as photos, videos, and music, or identification information and feature quantities thereof.
- the environment information around the user includes, for example, time, position, atmospheric pressure, temperature, humidity, an object existing in the vicinity, a state of the object, and the like.
- For example, the emotion estimation unit 20c extracts, based on the data (learning results) indicating the correlation between content data and emotion data stored in the content-emotion DB 23, the emotion data associated with content data that is the same as or similar to the content data currently being viewed by the user, and estimates it as the user's current emotion. Similarly, the emotion estimation unit 20c extracts, based on the data (learning results) indicating the correlation between environment information and emotion data stored in the environment-emotion DB 22, the emotion data associated with environment information that is the same as or similar to the user's current surrounding environment information, and estimates it as the user's current emotion.
- the emotion estimation unit 20c outputs the current emotion data of the user estimated based on the content data or the environment information to the guidance information generation unit 20b.
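The matching performed by the emotion estimation unit 20c can be pictured, very roughly, as a nearest-neighbour lookup over stored feature/emotion pairs. The sketch below assumes a simple data layout and Euclidean distance purely for illustration; the disclosure does not specify these details.

```python
import math

# Hypothetical learning results: feature vector -> associated emotion data.
# In the disclosure these would come from the content-emotion DB 23 or the
# environment-emotion DB 22; the concrete layout here is an assumption.
learned_entries = [
    {"features": [0.9, 0.1, 0.7], "emotion": {"happiness": 0.8, "sadness": 0.1}},
    {"features": [0.2, 0.8, 0.3], "emotion": {"happiness": 0.2, "sadness": 0.7}},
]

def estimate_current_emotion(observed_features, entries):
    """Return the emotion data of the stored entry most similar to the observation."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(entries, key=lambda e: distance(e["features"], observed_features))
    return best["emotion"]

# Example: features extracted from the content the user is currently viewing.
print(estimate_current_emotion([0.85, 0.15, 0.6], learned_entries))
```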
- The providing unit 20d performs control so as to provide the user with the guidance information, generated by the guidance information generation unit 20b, that guides an action for leading the user to a predetermined emotion. That is, the providing unit 20d controls the communication unit 21 to transmit the guidance information generated by the guidance information generation unit 20b to the client 1, and causes the client 1 to present it (display output / voice output).
- For example, the providing unit 20d performs control so as to provide the user with guidance information instructing the traveling direction toward a place or route associated with a predetermined emotion, a map image indicating the distribution of a predetermined emotion level in the region around the user's current location, and the like.
- the providing unit 20d may perform control so as to provide the user with recommended information on content data associated with a predetermined emotion.
- the providing unit 20d may perform control so as to provide the user with information that suggests improvement to an environment in which a predetermined emotion is associated.
- the communication unit 21 is connected to the network 5 and transmits / receives data to / from the clients 1 and 3. For example, the communication unit 21 receives (acquires) data for estimating a user's emotion and position information / content data / environment information from the client 3 that collects data. In addition, the communication unit 21 receives (acquires) the current position information from the client 1 that presents the guide information, and returns the guide information according to control by the providing unit 20d.
- The environment-emotion DB 22 stores data indicating the correlation between environment information (time, position, atmospheric pressure, temperature, humidity, surrounding objects, object state, etc.) and emotion data.
- environment-emotion DB 22 stores learning results learned by statistically analyzing the correlation between environment information and emotion data transmitted in large quantities from many clients 3 for data collection.
- the content-emotion DB 23 stores data indicating the correlation between content data and emotion data. For example, the content-emotion DB 23 stores learning results learned by statistically analyzing the correlation between content data and emotion data transmitted from a large number of clients 3 for data collection.
- the emotion map DB 24 stores the emotion map generated by the map information generation unit 20a.
- FIG. 3 is a block diagram illustrating an internal configuration example of the client 3a (camera apparatus) according to the present embodiment.
- In FIG. 3, the configuration of the client 3a realized by a camera device is shown as an example of the client 3.
- The client 3a (camera device) includes a control unit 30, a communication unit 31, a display unit 32, an operation unit 33, an image DB 34, an imaging unit 35, an in-camera 36, an emotion sensor unit 37, and a position information acquisition unit 38.
- The control unit 30 is configured by a microcomputer including, for example, a CPU, a ROM, a RAM, a nonvolatile memory, and an interface unit, and controls each component of the client 3a. Specifically, for example, the control unit 30 controls the imaging unit 35 to instruct execution of imaging when pressing of the shutter button is detected by the operation unit 33. In addition, when the imaging unit 35 executes imaging, the control unit 30 controls the in-camera 36 to capture the face (facial expression) of the user (photographer), and controls so that the face image captured by the in-camera 36 is transmitted from the communication unit 31 to the server 2 as data for estimating the user's emotion.
- The control unit 30 also controls so that the sensor value (biological information such as sweat, pulse, or body temperature) detected by the emotion sensor unit 37 provided on, for example, the shutter button is transmitted from the communication unit 31 to the server 2 as data for estimating the user's emotion. Furthermore, when transmitting data for estimating the user's emotion, such as a face image and biological information, to the server 2, the control unit 30 also controls so that the current position information acquired by the position information acquisition unit 38 and the captured image (subject image) captured by the imaging unit 35 are transmitted.
- In this way, the data collection client 3 transmits data for estimating the user's emotion (face image, biometric information, etc.) to the server 2 together with the current position information, so that the user's emotion estimation and the association between the estimated emotion and the position information (generation of the emotion map) are performed on the server 2 side.
- Similarly, the data collection client 3 transmits data for estimating the user's emotion (face image, biometric information, etc.) to the server 2 together with the captured image (subject image), so that the user's emotion estimation and the association between the estimated emotion and the captured image (generation and learning of the content-emotion DB) are performed on the server 2 side.
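A minimal sketch of what such an upload from the data collection client 3 to the server 2 might carry is shown below. The payload format, field names, and base64 encoding are assumptions made for illustration, not part of the disclosure.

```python
import base64
import json

def build_upload_payload(face_image_bytes, biometric_values, position,
                         subject_image_bytes=None, environment=None):
    """Assemble the data the client 3 sends to the server 2 in one message.

    The field names and encoding are assumptions made for this sketch.
    """
    payload = {
        # Data for estimating the user's emotion.
        "face_image": base64.b64encode(face_image_bytes).decode("ascii"),
        "biometrics": biometric_values,            # e.g. {"pulse": 72, "sweat": 0.4}
        # Current position information (used for the emotion map).
        "position": position,                      # e.g. {"lat": 35.68, "lon": 139.76}
    }
    if subject_image_bytes is not None:            # for the content-emotion DB
        payload["subject_image"] = base64.b64encode(subject_image_bytes).decode("ascii")
    if environment is not None:                    # for the environment-emotion DB
        payload["environment"] = environment       # e.g. {"temperature": 22.5, "humidity": 40}
    return json.dumps(payload)

# Example usage with dummy data.
print(build_upload_payload(b"\x00\x01", {"pulse": 72, "sweat": 0.4},
                           {"lat": 35.68, "lon": 139.76}))
```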
- the communication unit 31 is connected to the network 5 and transmits / receives data to / from the server 2.
- the display unit 32 has a function of displaying characters, images, and other information on the display screen under the control of the control unit 30.
- the display unit 32 is realized by, for example, a liquid crystal display or an organic EL display.
- the operation unit 33 has a function of detecting an operation by a user and outputting the detected user operation to the control unit 30.
- The operation unit 33 is realized by a physical structure such as a shutter button, a power button, or a zoom button, and may also be realized by an operation surface (for example, a touch panel) that is superimposed on the display screen and detects the user's contact position with respect to the display screen.
- the image DB 34 stores a captured image captured by the imaging unit 35 and a face image captured by the in-camera 36.
- The imaging unit 35 includes a lens system including an imaging lens, a diaphragm, a zoom lens, and a focus lens, a drive system that causes the lens system to perform focus and zoom operations, and a solid-state imaging device array that photoelectrically converts the imaging light obtained by the lens system to generate an imaging signal.
- the solid-state imaging device array may be realized by, for example, a CCD (Charge Coupled Device) sensor array or a CMOS (Complementary Metal Oxide Semiconductor) sensor array.
- the imaging unit 35 performs imaging in accordance with an imaging execution instruction from the control unit 30.
- the image captured by the imaging unit 35 is stored in the image DB 34.
- The in-camera 36 is an imaging unit provided toward the user so as to capture an image of the user who is the photographer (operator). Specifically, for example, the in-camera 36 is provided on the surface opposite to the surface on which the imaging lens of the imaging unit 35 (whose imaging direction is the outside of the camera device (client 3a)) is provided, with the inside as its imaging direction.
- the in-camera 36 captures the face of the user who is capturing according to the control by the control unit 30.
- the face image captured by the in-camera 36 is output to the control unit 30 and is transmitted to the server 2 as data for estimating the user's emotion according to the control of the control unit 30.
- the emotion sensor unit 37 is various sensors that detect biological information such as a user's pulse, sweat, or body temperature as data for estimating the emotion of the user who is an operator of the camera device (client 3a).
- the emotion sensor unit 37 is provided on, for example, a shutter button or an operation surface, and can detect biological information when the user performs an imaging operation by pressing the shutter button or touching the operation surface.
- The emotion sensor unit 37 outputs the detected biological information to the control unit 30, and the control unit 30 transmits it to the server 2 as data for estimating the user's emotion.
- the position information acquisition unit 38 has a function of detecting the current position of the client 3a based on an external acquisition signal.
- Specifically, the position information acquisition unit 38 is realized by, for example, a GPS (Global Positioning System) positioning unit that receives radio waves from GPS satellites, detects the position where the client 3a exists, and outputs the detected position information to the control unit 30.
- the position information acquisition unit 38 may detect the position by transmission / reception with, for example, Wi-Fi (registered trademark), a mobile phone / PHS / smartphone, or short distance communication.
- the configuration of the client 3a realized by the camera device has been described as an example of the configuration of the client 3 for data collection.
- the configuration of the client 3 according to the present embodiment is not limited to the example illustrated in FIG. 3.
- the client 3 may further include an environment information acquisition unit that acquires surrounding environment information.
- The environment information acquisition unit has a function of sensing surrounding place information such as, for example, temperature, humidity, illuminance, and weather information, as well as surrounding objects and the state of those objects.
- the environment information acquisition unit is realized by a humidity sensor, an illuminance sensor, a temperature sensor, or the like. Further, the environment information acquisition unit may extract an object existing in the vicinity, the state of the object, and the like as the environment information based on the subject image acquired by the imaging unit 35.
- the environmental information acquired by the environmental information acquisition unit is transmitted to the server 2 together with data (face image, biological information, etc.) for estimating the user's emotion.
- In this way, the data collection client 3 transmits the environment information to the server 2 together with the data for estimating the user's emotion (face image, biometric information, etc.), so that the user's emotion estimation and the association between the estimated emotion and the environment information (generation and learning of the environment-emotion DB) are performed on the server 2 side.
- the client 1 is realized by, for example, a glasses-type HMD.
- The glasses-type HMD is a glasses-type wearable device worn by the user. In the worn state, a pair of display units for the left eye and the right eye are arranged immediately in front of the user's eyes, that is, at the position where the lenses of normal glasses would be located. The display units are of a transmissive type and can be placed in a through state, that is, a transparent or translucent state, so that the user's normal life is not hindered even if the glasses-type HMD is worn at all times.
- FIG. 4 shows an example of the internal configuration of the client 1 when realized by the glasses-type HMD.
- the client 1 includes a control unit 10, a communication unit 11, a display unit 12, an operation unit 13, a storage unit 14, and a position information acquisition unit 15.
- The control unit 10 is composed of, for example, a microcomputer including a CPU, a ROM, a RAM, a nonvolatile memory, and an interface unit, and controls each component of the client 1. In addition, the control unit 10 according to the present embodiment functions as a display control unit 10a and a communication control unit 10b, as shown in FIG. 4.
- The display control unit 10a generates a screen to be displayed on the display unit 12 and performs display control. For example, the display control unit 10a controls the display unit 12 to display guidance information received from the server 2 by the communication unit 11. Further, the display control unit 10a can also control the transmittance of the display unit 12.
- the communication control unit 10b controls data transmission from the communication unit 11. For example, the communication control unit 10b transmits the current position information acquired by the position information acquisition unit 15 from the communication unit 11 to the server 2, and requests the server 2 for emotion navigation that guides the user to a predetermined emotion. In response to such a request, on the server 2 side, guidance information for guiding the user to a place or route associated with a predetermined emotion is generated based on the current position information of the user and the emotion map, and is returned to the client 1.
- the communication unit 11 has a function of connecting to the network 5 and communicating with the server 2. For example, the communication unit 11 transmits the current position information and the emotion navigation request to the server 2 according to the control of the communication control unit 10b. Further, the communication unit 11 receives the guidance information returned from the server 2 in response to the emotion navigation request.
- the display unit 12 is disposed in front of the user's eyes, that is, at a position where a lens in normal glasses is located, in a state where the client 1 (glasses type HMD) is worn by the user.
- For the display unit 12, for example, a liquid crystal panel is used, and the display control unit 10a controls the transmittance of the liquid crystal panel so that the display unit 12 can be placed in a through state, that is, a transparent or translucent state.
- the display unit 12 displays the guide information transmitted from the server 2 according to the control by the display control unit 10a.
- a specific display example of the guidance information will be described later with reference to FIGS.
- the display unit 12 can also play back content such as photos and videos according to control by the display control unit 10a.
- the operation unit 13 has a function of detecting a user operation, and outputs the detected user operation to the control unit 10.
- the operation unit 13 may be realized by a detection unit that detects a tap operation (or vibration) on the main body of the client 1 (glasses-type HMD) in addition to a physical structure such as a power switch.
- The operation unit 13 may recognize a gesture input by capturing the user's gesture with a camera provided toward the outside, or may recognize a line-of-sight input by capturing the user's eye with a camera provided toward the inside.
- the operation unit 13 may recognize the voice input by acquiring the user's voice with a microphone that collects ambient sounds.
- the storage unit 14 stores a program for executing various processes by the control unit 10.
- the position information acquisition unit 15 has a function of detecting the current position of the client 1 based on an external acquisition signal.
- the position information acquisition unit 15 outputs the acquired current position information to the control unit 10.
- the configuration of the client 1 realized by the glasses-type HMD has been described as the configuration example of the client 1 for presenting guidance information.
- the configuration of the client 1 according to the present embodiment is not limited to the example illustrated in FIG. 4.
- the client 1 may further include an audio reproduction unit (speaker).
- the audio reproduction unit is realized by, for example, a pair of earphone speakers that can be inserted into the right ear hole and the left ear hole of the user (or only one that can be inserted into only one ear) when the client 1 (glasses type HMD) is attached.
- the voice reproducing unit outputs the guidance information transmitted from the server 2 by voice or reproduces content data such as music according to the control of the control unit 10.
- the client 1 may further include an environment information acquisition unit.
- The environment information acquisition unit has a function of sensing surrounding place information such as, for example, temperature, humidity, illuminance, and meteorological information on the current location, as well as objects existing in the surrounding area and the state of those objects.
- In this case, the communication control unit 10b transmits the content data of the content currently being viewed by the user (photograph, video, music, etc.) or the current surrounding environment information to the server 2, and requests guidance for leading the user to a predetermined emotion. On the server 2 side, guidance information that recommends other content data or suggests improvement of the environment for leading the user to a predetermined emotion, according to the user's current emotion estimated based on the transmitted content data or environment information, is generated and returned to the client 1.
- the configuration of the server 2, the data collection client 3, and the guidance information presentation client 1 included in the communication system according to the present embodiment has been specifically described above.
- the data collection client 3 and the guidance information presentation client 1 may be the same device.
- FIG. 5 is a sequence diagram showing an operation process when an emotion map is generated.
- First, in step S103, the client 3 acquires data for estimating the user's emotion. Specifically, for example, in the case of the client 3a (see FIG. 3), the data is acquired by capturing a face image with the in-camera 36 or by detecting biometric information with the emotion sensor unit 37.
- Next, in step S106, the client 3 acquires current position information.
- In step S109, the client 3 transmits the data for estimating emotion and the current position information to the server 2.
- In step S110, the map information generation unit 20a of the server 2 estimates the user's emotion based on the received data for estimating emotion (face image or biological information).
- the emotion estimation of the user may be performed on the client 3 side.
- the already estimated emotion is transmitted to the server 2, so that S110 is omitted.
- In step S112, the map information generation unit 20a creates an emotion map indicating the correlation between position and emotion. That is, the map information generation unit 20a associates (maps) the estimated emotion with the position on the map indicated by the current position information.
- In step S115, the map information generation unit 20a stores the generated emotion map in the emotion map DB 24 and updates the emotion map database. Steps S103 to S115 described above are repeatedly performed, and the map information generation unit 20a continuously updates the emotion map database based on the position information and emotion data transmitted from each client 3. In the present embodiment, the large amount of position information and emotion data continuously collected from the countless clients 3 connected via the network 5 is statistically analyzed, so that an emotion map indicating what kind of emotion arises in which place or region can be generated more accurately.
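As an illustrative sketch of steps S112 to S115 (not the claimed implementation), the emotion map can be modeled as a grid of map cells in which emotion samples reported with position information are averaged. The grid size, cell keying, and use of a single happiness value are assumptions made for this example.

```python
from collections import defaultdict

GRID = 0.01  # assumed cell size in degrees of latitude / longitude

def cell_of(lat, lon):
    """Quantize a position into a map-grid cell key."""
    return (int(lat // GRID), int(lon // GRID))

class EmotionMap:
    def __init__(self):
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def add_sample(self, lat, lon, happiness_level):
        """Associate one estimated emotion value with the reported position."""
        key = cell_of(lat, lon)
        self._sums[key] += happiness_level
        self._counts[key] += 1

    def happiness_at(self, lat, lon, default=0.0):
        """Aggregated happiness level for the cell containing (lat, lon)."""
        key = cell_of(lat, lon)
        if self._counts[key] == 0:
            return default
        return self._sums[key] / self._counts[key]

emotion_map = EmotionMap()
emotion_map.add_sample(35.681, 139.767, 0.9)   # sample reported by one client
emotion_map.add_sample(35.682, 139.768, 0.7)   # sample reported by another client
print(emotion_map.happiness_at(35.6815, 139.7675))
```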
- FIG. 6 is a sequence diagram showing navigation operation processing in accordance with the emotion map.
- First, the client 1 acquires information on the current location and the destination. For example, the client 1 acquires the current position information with the position information acquisition unit 15 and acquires the destination by gesture input / voice input / tap operation.
- In step S206, the client 1 recognizes a route search execution instruction.
- the route search execution instruction is performed, for example, by gesture input / voice input / tap operation by the user.
- In step S209, the client 1 transmits the information on the current location and the destination to the server 2 and requests a route search to the destination.
- In step S212, the guidance information generation unit 20b of the server 2 searches for a plurality of routes to the destination based on the received current location and destination information.
- Next, the guidance information generation unit 20b refers to the plurality of searched routes and the happiness map stored in the emotion map DB 24, determines, among the searched routes, the route that passes through places or regions with a high happiness level, and generates guidance information to the destination.
- In step S218, the providing unit 20d performs control so that the guidance information generated by the guidance information generation unit 20b is transmitted from the communication unit 21 to the client 1.
- In step S221, the client 1 presents the received guidance information and starts route guidance. Specifically, for example, the client 1 may perform route guidance by displaying an arrow image indicating the traveling direction based on the guidance information on the display unit 12 so that it is superimposed on the scenery in the real space.
- In this way, navigation via places with a high happiness level can be performed. Specifically, for example, navigation via places with beautiful scenery, safe roads, roads avoiding slopes, tourist attractions, and the like is performed, so that the user can travel toward the destination with a happy (fun, pleasant) feeling.
- When navigation according to a happiness map for each user attribute is performed, for example, a user whose attribute indicates being with children can be guided to the destination via facilities where children can play, such as parks, libraries, and children's halls.
- The happiness map is used as an example here, but the present embodiment is not limited to this; it is also possible to generate guidance information using a plurality of emotion maps, for example a happiness map and an anger map, so as to guide the user along a happy route while avoiding angry routes.
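One hedged way to picture the route determination described above is to score each searched route by the happiness level of the places it passes, optionally subtracting a penalty from an anger map, and to pick the highest-scoring route. The waypoint format, scoring rule, and weight below are illustrative assumptions, not details given in the disclosure.

```python
# Illustrative sketch: choose, among searched routes, the one passing through
# places with the highest happiness level. Waypoint format, scoring, and the
# optional anger penalty are assumptions made for this example.

def average_emotion(route, emotion_lookup):
    """Mean emotion level over a route given as a list of (lat, lon) waypoints."""
    values = [emotion_lookup(lat, lon) for lat, lon in route]
    return sum(values) / len(values) if values else 0.0

def choose_route(routes, happiness_lookup, anger_lookup=None, anger_weight=0.5):
    """Pick the route with the best happiness score (optionally minus an anger penalty)."""
    def score(route):
        s = average_emotion(route, happiness_lookup)
        if anger_lookup is not None:
            s -= anger_weight * average_emotion(route, anger_lookup)
        return s
    return max(routes, key=score)

# Dummy lookup standing in for the emotion map DB 24.
happiness = {(35.68, 139.76): 0.9, (35.69, 139.77): 0.4}
route_a = [(35.68, 139.76)]          # passes a high-happiness place
route_b = [(35.69, 139.77)]          # passes a lower-happiness place
print(choose_route([route_a, route_b],
                   lambda lat, lon: happiness.get((lat, lon), 0.0)))
```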
- FIG. 7 is a sequence diagram showing operation processing of another navigation example according to the emotion map.
- First, the client 1 acquires information on the current location. For example, the client 1 acquires the current position information with the position information acquisition unit 15.
- In step S236, the client 1 recognizes an emotion navigation execution instruction.
- the emotion navigation execution instruction is performed by, for example, gesture input / voice input / tap operation by the user.
- In step S239, the client 1 transmits the information on the current location to the server 2 and requests emotion navigation.
- In step S242, the guidance information generation unit 20b of the server 2 extracts emotion data for the area around the received current location from the emotion map.
- For example, the happiness level around the current location is extracted from the happiness map.
- In step S245, the guidance information generation unit 20b generates guidance information indicating the happiness level around the current location.
- In step S248, the providing unit 20d performs control so that the guidance information generated by the guidance information generation unit 20b is transmitted from the communication unit 21 to the client 1.
- In step S251, the client 1 presents the received guidance information and displays the happiness level of each route around the current location.
- Next, a method for displaying the happiness level on each route around the current location will be described with reference to FIGS. 8 and 9.
- FIGS. 8 and 9 are diagrams showing examples of how to display the happiness level in each route around the current location.
- In display example 1 shown on the left in FIG. 8, a mark P indicating the current location and color images 401 and 402 indicating the happiness level on the routes around the current location are superimposed on the map image 40 of the area around the current location.
- The darker color image 401 indicates a higher happiness level and the lighter color image 402 indicates a lower happiness level, so that the user can intuitively grasp which road has the higher happiness level.
- In display example 2 shown on the right in FIG. 8, a mark P indicating the current location and distribution images 421a to 421c indicating the happiness level in the places and areas around the current location are superimposed on the map image 42 of the area around the current location.
- Since a darker colored region in each distribution image indicates a higher happiness level, the user can intuitively grasp the places where the happiness level is high.
- the above-described images of display example 1 and display example 2 shown in FIG. 8 are displayed on a part of display unit 12 subjected to transmission control when client 1 is a glasses-type HMD, for example.
- In the display example shown in FIG. 9, arrow images 501 and 503 indicating traveling directions and facial expression images 502 and 504 corresponding to the emotion data ahead in each traveling direction are superimposed on the real space image 50.
- The user can thus intuitively grasp that following the arrow image 501 associated with the smiling facial expression image 502 leads to a place or region associated with a happy emotion, and that going along the road to the right indicated by the arrow image 503 associated with the crying facial expression image 504 leads to a place or region associated with a sad emotion.
- Further, the direction 521 with a low happiness level may be processed into a dark display and the direction 522 with a high happiness level into a bright display, so that the user is naturally guided along the road with the higher happiness level.
- The arrow image 501 and the like may also be superimposed and displayed on the real space scenery (corresponding to the real space images 50 and 52) that can be visually recognized via the display unit 12.
- FIG. 10 is a sequence diagram showing an operation process at the time of creating data of the content-emotion DB 23.
- the data collection client 3 reproduces content data such as photographs, videos, and music.
- the reproduction of the content data may be executed according to a user operation.
- the client 3 may be a client 3b realized by a smartphone as shown in FIG.
- the client 3b reproduces the content data using the display unit and the audio output unit.
- the client 3 acquires data for estimating the user's emotion.
- For example, in the case of the client 3b, the data is acquired by an emotion sensor unit that detects the user's sweat, body temperature, pulse, and the like through contact with the user's hand holding the client 3b, and by an in-camera provided facing inward so as to capture the face of the user who is viewing photos and videos.
- In step S309, the client 3 transmits the data for estimating emotion and information on the content being reproduced (content data, metadata, etc.) to the server 2.
- In step S310, the control unit 20 of the server 2 estimates the user's emotion based on the received data for estimating emotion (face image and biological information). Note that the user's emotion estimation may be performed on the client 3 side; in this case, the already estimated emotion is transmitted to the server 2, and step S310 is omitted.
- In step S312, the control unit 20 creates data indicating the correlation between content and emotion. That is, the control unit 20 learns what kind of emotion the user feels while viewing what kind of content.
- In step S315, the control unit 20 stores the generated data (learning result) in the content-emotion DB 23 and updates the database.
- Steps S303 to S315 described above are repeatedly performed, and the control unit 20 continuously updates the content-emotion DB 23 based on the content information and emotion data transmitted from each client 3.
- By statistically analyzing the large amount of content information and emotion data collected from the countless clients 3 connected via the network 5, data indicating what kind of emotion arises when what kind of content is viewed can be generated with higher accuracy.
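A minimal sketch of this content-emotion learning, under the simplifying assumption that content is keyed by an identifier and that emotion is summarized as a single averaged happiness value, could look as follows; the disclosure itself does not prescribe this representation.

```python
from collections import defaultdict

# Sketch of content-emotion learning: average the emotion observed while each
# piece of content was viewed. Keying by a content identifier and averaging a
# single happiness value are simplifying assumptions for this illustration.

class ContentEmotionDB:
    def __init__(self):
        self._totals = defaultdict(lambda: [0.0, 0])  # content_id -> [sum, count]

    def record(self, content_id, happiness_level):
        total = self._totals[content_id]
        total[0] += happiness_level
        total[1] += 1

    def emotion_for(self, content_id):
        total, count = self._totals[content_id]
        return total / count if count else None

db = ContentEmotionDB()
db.record("song_123", 0.8)   # one client reported a happy state while listening
db.record("song_123", 0.6)   # another client, same song
print(db.emotion_for("song_123"))
```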
- FIG. 11 is a sequence diagram showing an operation process of content navigation. As shown in FIG. 11, first, in step S403, the client 1 reproduces content. The reproduction of the content data may be executed according to a user operation.
- the client 1 is realized by a smartphone as illustrated in FIG. 1, and reproduces content data using the display unit 12 and the audio output unit.
- In step S406, the client 1 transmits information on the content being reproduced (content data, metadata, etc.) to the server 2.
- the transmission of the content information may be performed automatically or may be performed when a navigation execution instruction from the user is recognized.
- Next, the emotion estimation unit 20c of the server 2 estimates the user's current emotion based on the received content information. Specifically, the emotion estimation unit 20c performs matching using the data stored in the content-emotion DB 23 and extracts the emotion associated with content that is the same as, or similar in feature amount or metadata to, the content currently being viewed by the user. The emotion estimation unit 20c then outputs the estimated emotion data to the guidance information generation unit 20b.
- Next, the guidance information generation unit 20b determines predetermined content data to be recommended (guided) according to the emotion data estimated by the emotion estimation unit 20c. For example, when the music the user is currently listening to is associated with a sad emotion, the guidance information generation unit 20b recommends (guides) content data (photos, videos, music, etc.) for leading the user to the opposite, happy emotion. Further, when the user's current location information is also transmitted from the client 1, the server 2 side can grasp the user's current location, so the guidance information generation unit 20b can also recommend, for example, a movie being shown at a movie theater near the user's current location.
- the guidance information generation unit 20b generates content guidance information including information on the determined recommended (guidance) content, and outputs the generated content guidance information to the providing unit 20d.
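The recommendation step can be sketched, under assumptions, as follows: if the emotion estimated for the content currently being viewed falls below a happiness threshold, the content with the strongest learned association with a happy emotion is recommended. The threshold value and selection rule are illustrative, not taken from the disclosure.

```python
# Sketch of the recommendation step: if the emotion estimated from the content
# currently being viewed is sad (low happiness), recommend content whose learned
# association is most strongly happy. The 0.5 threshold is an assumed value.

def recommend_content(current_content_id, content_emotions, happiness_threshold=0.5):
    """Return a content ID to recommend, or None if no guidance is needed.

    content_emotions -- dict mapping content_id to a learned happiness level
                        (stands in for the content-emotion DB 23).
    """
    current = content_emotions.get(current_content_id, happiness_threshold)
    if current >= happiness_threshold:
        return None  # the user is already estimated to be in a happy state
    candidates = {cid: h for cid, h in content_emotions.items()
                  if cid != current_content_id}
    return max(candidates, key=candidates.get) if candidates else None

learned = {"sad_song": 0.2, "upbeat_song": 0.9, "neutral_clip": 0.5}
print(recommend_content("sad_song", learned))   # -> "upbeat_song"
```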
- In step S418, the providing unit 20d performs control so that the content guidance information generated by the guidance information generation unit 20b is transmitted from the communication unit 21 to the client 1.
- the client 1 presents the received content guidance information and performs content guidance (recommendation). Specifically, for example, when the client 1 is a glasses-type HMD, the client 1 displays content information recommended to the user based on the guidance information on a part of the display unit 12 that is controlled to be transparent. For example, the content itself, the content advertisement screen, the purchase screen, etc. can be presented. Alternatively, for example, when the client 1 is a glasses-type HMD, the guidance information image may be superimposed and displayed on the scenery in the real space that can be visually recognized via the display unit 12 that is controlled to transmit.
- FIG. 12 is a diagram showing a display example of content recommendation.
- For example, when the user passes in front of a movie theater, a guide information image 541 that guides a movie being shown, selected in accordance with the user's emotion at that time, is superimposed and displayed in the vicinity of the movie theater included in the real space scenery 54 that can be visually recognized through the display unit 12 controlled to be transparent.
- For example, the client 1 may acquire information on the movies being shown at nearby movie theaters, match it against the guidance information transmitted from the server 2, and display a guidance information image 541 as shown in FIG. 12 when the corresponding movie is being shown.
- Alternatively, the client 1 may transmit the current position information to the server 2, and the server 2 side may recognize a movie theater in the vicinity of the user, acquire information on the movies being shown at that movie theater, and transmit guidance information indicating a recommended movie among them to the client 1.
- the content recommendation may be an advertisement screen of the target content or a purchase screen in addition to the presentation of the content itself.
- FIG. 13 is a sequence diagram showing an operation process when creating data in the environment-emotion DB 22.
- the data collection client 3 collects surrounding environmental information.
- the collection of environmental information may be performed periodically / continuously, may be performed with an environmental change such as position movement as a trigger, or may be performed according to a user operation.
- For example, the client 3 collects information such as time, position, atmospheric pressure, temperature, humidity, surrounding objects, and the state of those objects as information surrounding the user (environment information).
- the client 3 acquires data for estimating the user's emotion.
- For example, in the case of the client 3b realized by a smartphone as shown in FIG. 1, the data is acquired by an emotion sensor unit that detects the user's sweat, body temperature, pulse, and the like through contact with the user's hand holding the client 3b, and by an in-camera provided facing inward so as to capture the face of the user who is viewing photos and videos.
- In step S509, the client 3 transmits the data for estimating emotion and the environment information to the server 2.
- In step S510, the control unit 20 of the server 2 estimates the user's emotion based on the received data for estimating emotion (face image and biological information).
- the emotion estimation of the user may be performed on the client 3 side. In this case, the already estimated emotion is transmitted to the server 2, and thus S510 is omitted.
- In step S512, the control unit 20 creates data indicating the correlation between environment information and emotion. That is, the control unit 20 learns what kind of emotion the user has under what kind of environment.
- In step S515, the control unit 20 stores the generated data (learning result) in the environment-emotion DB 22 and updates the database.
- Steps S503 to S515 described above are repeatedly performed, and the control unit 20 continuously updates the environment-emotion DB 22 based on the environment information and emotion data transmitted from each client 3.
- By statistically analyzing the large amount of environment information and emotion data continuously collected from the countless clients 3 connected via the network 5, data indicating what kind of emotion arises under what kind of environment can be generated with higher accuracy.
- an environmental information improvement notification or an environmental information improvement plan for guiding the user to a predetermined emotion is provided (guided) to the user.
- FIG. 14 is a sequence diagram showing an operation process of environmental information navigation.
- the client 1 collects surrounding environmental information.
- the collection of the environmental information may be performed periodically / continuously, may be performed with an environmental change such as position movement as a trigger, or may be performed according to a user operation (a navigation execution instruction by the user).
- In step S606, the client 1 transmits the environment information to the server 2.
- In step S609, the emotion estimation unit 20c of the server 2 estimates the user's current emotion based on the received environment information. Specifically, the emotion estimation unit 20c performs matching using the data stored in the environment-emotion DB 22 and extracts the emotion associated with environment information that is the same as or similar to the user's current surroundings. The emotion estimation unit 20c then outputs the estimated emotion data to the guidance information generation unit 20b.
- In step S612, the guidance information generation unit 20b determines whether the happiness level of the emotion data estimated by the emotion estimation unit 20c is lower than a threshold value.
- Here, guidance for leading the user to a happy emotion is performed, so the happiness level of the emotion data is compared against the threshold value.
- When the happiness level exceeds the threshold value (S612/No), no environment improvement guidance is generated.
- On the other hand, when the happiness level is lower than the threshold value (S612/Yes), in step S615 the guidance information generation unit 20b generates environment improvement information indicating that environment improvement is needed or a specific environment improvement plan, and outputs it to the providing unit 20d.
- a specific environmental improvement plan is generated by, for example, detecting a difference between environmental information around the current user and environmental information associated with (correlated with) a target level of happiness. For example, when a depressed feeling is estimated from the environmental information when the user enters the hotel room where the user stays (when the happiness level is less than or equal to the threshold value), the guidance information generation unit 20b The difference between environmental information having a correlation with emotion is detected. In this case, for example, the brightness of the room, temperature, humidity, opening / closing of windows, opening / closing of curtains, presence / absence of music, and the like can be detected as differences. Based on such a difference, the guidance information generation unit 20b indicates, for example, brightening the room, setting values of temperature / humidity, opening a window, opening a curtain, playing music, etc. as an environmental improvement plan. Generate environmental improvement information.
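The following sketch illustrates this difference-detection idea, assuming the current and target environments are plain dictionaries and using hypothetical keys and message strings; it is not the patented generation logic itself.

```python
# Illustrative sketch of generating an environment improvement plan by diffing
# the current environment against environment information correlated with the
# target happiness level. Keys and message strings are hypothetical.
def improvement_plan(current: dict, target: dict) -> list[str]:
    suggestions = {
        "brightness": "Brighten the room (e.g. turn on the light).",
        "window_open": "Open the window to let in fresh air.",
        "curtain_open": "Open the curtains.",
        "music_playing": "Play some music.",
        "temperature_c": "Adjust the air conditioner toward {} degrees C.",
    }
    plan = []
    for key, wanted in target.items():
        if current.get(key) != wanted and key in suggestions:
            plan.append(suggestions[key].format(wanted))
    return plan

print(improvement_plan(
    {"brightness": "dark", "window_open": False, "temperature_c": 28},
    {"brightness": "bright", "window_open": True, "temperature_c": 24}))
```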
- in step S618, the providing unit 20d controls the communication unit 21 to transmit the environment improvement information generated by the guidance information generation unit 20b to the client 1.
- the client 1 then gives a notification (warning) that the environment needs to be improved and/or presents an environment improvement plan, based on the received environment improvement information.
- in this way, the user can be guided toward an environment that produces a happy emotion.
- note that the environment improvement notification and the presentation of the environment improvement plan may be displayed on part of the display unit 12 that is controlled to be transparent, or may be displayed superimposed on the real-space scenery visible through the display unit 12.
- FIG. 15 is a diagram showing a display example of environment improvement information.
- in the example shown in FIG. 15, when the user enters a hotel room, the environment improvement information is superimposed on the real-space scenery 56 visible through the display unit 12, which is controlled to be transparent.
- specifically, environment improvement information 561 such as "Let's turn on the light" is displayed near the desk lamp,
- and environment improvement information 562 such as "Let's open the window and let in some fresh air" is displayed near the window and curtains.
- further, since an ashtray is placed on the table, a smoking-room environment is inferred, and environment improvement information 563 such as "Let's turn on an air purifier" is displayed near the ashtray.
- environment improvement information 564 indicating recommended setting information of the air conditioner is also displayed.
- furthermore, an icon image 566 indicating that the navigation according to the present embodiment is active may be displayed.
- that is, the communication system refers to the current position information received from the client 1 and the emotion map, and presents the user with guidance information that guides the user to places or areas associated with a predetermined emotion, either in the user's surroundings or along one of a plurality of routes to the destination.
- the communication system also estimates the user's current emotion by referring to the content information currently viewed by the user, received from the client 1, and the content-emotion DB 23, and, for example, when the happiness level is low, recommends content that makes the user happy.
- furthermore, the communication system estimates the user's current emotion by referring to the user's current surrounding environment information received from the client 1 and the environment-emotion DB 22, and, for example, when the happiness level is low, proposes an improvement of the environment so that the user becomes happy.
- it is also possible to create a computer program for causing hardware such as the CPU, ROM, and RAM incorporated in the clients 1 and 3 and the server 2 described above to carry out the functions of the clients 1 and 3 and the server 2.
- a computer-readable storage medium storing the computer program is also provided.
- in the above embodiment, an emotion map is generated by mapping emotion data onto a map.
- the present disclosure is not limited to this.
- for example, it is also possible to generate a life-expectancy map by additionally mapping average life-expectancy information of each area, collected in advance, onto the map. Thereby, for example, the user can be guided to an area or place where the average life expectancy is long.
- in "3-2. Content navigation" described above, the user's emotion is estimated according to the content data (photographs, video, music, and the like) viewed by the user.
- the emotion of the user can be estimated according to the subject image and sound acquired when the user is shooting.
- that is, the emotion estimation unit 20c can estimate the emotion of the user (photographer) according to the subject. For example, when laughter is detected in the subject's voice, the emotion estimation unit 20c can estimate that the subject has a happy emotion, and therefore that the user (photographer) who is with the subject also has a happy emotion.
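A sketch of this inference is given below; `detect_laughter` stands in for a hypothetical audio classifier returning a laughter score between 0 and 1 and is not an API defined by the patent.

```python
# Sketch under stated assumptions: the subject's laughter score is used
# directly as the photographer's estimated happiness, following the rule
# described above that the photographer shares the subject's happy emotion.
def estimate_photographer_happiness(subject_audio: bytes, detect_laughter) -> float:
    laughter = detect_laughter(subject_audio)  # hypothetical classifier, 0..1
    return laughter

fake_detector = lambda audio: 0.9 if audio else 0.0  # stand-in for a real model
print(estimate_photographer_happiness(b"pcm-bytes", fake_detector))
```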
- the configurations of the clients 1 and 3 and the server 2 included in the communication system according to the present embodiment have been described.
- however, the present disclosure is not limited to this, and the functions of each configuration may be distributed in any way.
- for example, all of the functions of the control unit 20 of the server 2 may be provided on the client 1 side, and the environment-emotion DB 22, the content-emotion DB 23, and the emotion map DB 24 may be provided on separate servers.
- the user's current emotion is estimated based on the viewing content and the surrounding environment information.
- the present embodiment is not limited to this.
- for example, the user's current location and the emotion map may be referred to in order to estimate the user's current emotion, and content navigation or environment improvement navigation may then be performed so that the user becomes happy.
- in addition, the server 2 may generate the data indicating the correlation between environment information and emotion data (environment-emotion DB 22) and the data indicating the correlation between content data and emotion data (content-emotion DB 23) separately for each user attribute (for example, age, sex, hobbies and preferences, place of birth, and the like).
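One way to realize such per-attribute data, sketched under illustrative assumptions about the attribute fields and segmentation, is to key the learned happiness values by a user-attribute segment as well as by the environment condition.

```python
# Minimal sketch of keeping environment-emotion learning results per
# user-attribute segment (age band, sex, place of birth). The segmentation key
# and field names are illustrative assumptions, not the patented scheme.
from collections import defaultdict
from statistics import mean

def attribute_key(profile: dict) -> tuple:
    return (profile["age"] // 10 * 10, profile["sex"], profile["home_region"])

class SegmentedEnvironmentEmotionDB:
    def __init__(self) -> None:
        self._happiness = defaultdict(list)   # (segment, condition) -> samples

    def add_sample(self, profile: dict, condition: str, happiness: float) -> None:
        self._happiness[(attribute_key(profile), condition)].append(happiness)

    def learned_happiness(self, profile: dict, condition: str) -> float | None:
        samples = self._happiness.get((attribute_key(profile), condition))
        return mean(samples) if samples else None

db = SegmentedEnvironmentEmotionDB()
db.add_sample({"age": 34, "sex": "F", "home_region": "Tokyo"}, "quiet_dark_room", 0.2)
print(db.learned_happiness({"age": 31, "sex": "F", "home_region": "Tokyo"}, "quiet_dark_room"))
```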
- additionally, the present technology may also be configured as below.
- (1) A communication system including: an acquisition unit that acquires current position information of a user; a guidance information generation unit that generates guidance information according to the current position information and map information to which emotion data is mapped; and a providing unit that provides the guidance information to the user. (2) The communication system according to (1), wherein the guidance information includes information that indicates to the user a traveling direction according to the emotion data mapped to the map information.
- (5) The communication system according to (4), wherein the map information generation unit generates the map information according to data for estimating the user's emotion or already-estimated emotion data uploaded from a client, and position information. (6) The communication system according to (4) or (5), wherein the map information generation unit also maps average life-expectancy information of each region. (7) The communication system according to any one of (1) to (6), further including a first emotion estimation unit that estimates emotion data based on content data uploaded from a client. (8) The communication system according to (7), wherein the first emotion estimation unit estimates emotion data corresponding to the content data based on a learning result obtained by learning a relationship between feature amounts of the content data and emotion data.
- (9) The communication system according to (7) or (8), wherein the guidance information generation unit generates guidance information for recommending to the user predetermined content data corresponding to emotion data estimated based on content data currently being viewed by the user. (10) The communication system according to any one of (1) to (9), further including a second emotion estimation unit that estimates emotion data based on environment information around the user uploaded from a client. (11) The communication system according to (10), wherein the second emotion estimation unit estimates emotion data corresponding to the environment information based on a learning result obtained by learning a relationship between the environment information and emotion data. (12) The communication system according to (10) or (11), wherein the guidance information generation unit generates guidance information for proposing an improvement of the environment information to the user, based on emotion data estimated from environment information around the current user.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Psychiatry (AREA)
- Psychology (AREA)
- Surgery (AREA)
- Hospice & Palliative Care (AREA)
- Developmental Disabilities (AREA)
- Child & Adolescent Psychology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Educational Technology (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
- Traffic Control Systems (AREA)
- Instructional Devices (AREA)
- Navigation (AREA)
- Telephonic Communication Services (AREA)
- Telephone Function (AREA)
Abstract
Description
1. Overview of a communication system according to an embodiment of the present disclosure
2. Basic configuration
2-1. Configuration example of the server 2
2-2. Configuration example of the client 3 (for data collection)
2-3. Configuration example of the client 1 (for presenting guidance information)
3. Types of navigation
3-1. Navigation according to an emotion map
3-2. Content navigation
3-3. Environmental information navigation
4. Summary
First, an overview of a communication system according to an embodiment of the present disclosure will be described with reference to FIG. 1. FIG. 1 is a diagram for explaining an overview of the communication system according to an embodiment of the present disclosure. As shown in FIG. 1, the communication system according to the present embodiment includes a client 3 for data collection, a server 2, and a client 1 for presenting guidance information.
Next, configuration examples of the server 2, the client 3, and the client 1 included in the communication system according to the present embodiment will be described in order with reference to FIGS. 2 to 4.
FIG. 2 is a block diagram showing an example of the basic configuration of the server 2 according to the present embodiment. As shown in FIG. 2, the server 2 includes a control unit 20, a communication unit 21, an environment-emotion DB 22, a content-emotion DB 23, and an emotion map DB 24.
The control unit 20 is configured by, for example, a microcomputer including a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), a nonvolatile memory, and an interface unit, and controls each component of the server 2.
The communication unit 21 connects to the network 5 and exchanges data with the clients 1 and 3. For example, the communication unit 21 receives (acquires), from the client 3 that collects data, data for estimating the user's emotion together with position information, content data, or environment information. The communication unit 21 also receives (acquires) current position information from the client 1 that presents guidance information, and returns guidance information to it under the control of the providing unit 20d.
The environment-emotion DB 22 stores data indicating the correlation between environment information (time, position, atmospheric pressure, temperature, humidity, objects, the state of objects, and the like) and emotion data. For example, the environment-emotion DB 22 stores learning results obtained by statistically analyzing and learning the correlation between the large amounts of environment information and emotion data transmitted from the many clients 3 for data collection.
The content-emotion DB 23 stores data indicating the correlation between content data and emotion data. For example, the content-emotion DB 23 stores learning results obtained by statistically analyzing and learning the correlation between the large amounts of content data and emotion data transmitted from the many clients 3 for data collection.
The emotion map DB 24 stores the emotion map generated by the map information generation unit 20a.
Next, the configuration of the client 3 for data collection will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of the internal configuration of the client 3a (camera device) according to the present embodiment. FIG. 3 shows, as an example of the client 3, the configuration of the client 3a realized by a camera device.
The control unit 30 is configured by, for example, a microcomputer including a CPU, a ROM, a RAM, a nonvolatile memory, and an interface unit, and controls each component of the client 3a. Specifically, for example, when the operation unit 33 detects that the shutter button has been pressed, the control unit 30 instructs the imaging unit 35 to capture an image. When the imaging unit 35 captures an image, the control unit 30 also controls the in-camera 36 to capture the face (facial expression) of the user (photographer), and controls the communication unit 31 to transmit the face image captured by the in-camera 36 to the server 2 as data for estimating the user's emotion. Alternatively, when the imaging unit 35 captures an image, the control unit 30 controls the communication unit 31 to transmit to the server 2, as data for estimating the user's emotion, the sensor values (biological information such as sweat, pulse, or body temperature) detected by the emotion sensor unit 37 provided, for example, on the shutter button. Furthermore, when transmitting data for estimating the user's emotion, such as the face image or the biological information, to the server 2, the control unit 30 also controls transmission of the current position information acquired by the position information acquisition unit 38 and the captured image (subject image) captured by the imaging unit 35.
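As a hedged illustration of this upload, the sketch below bundles the face image, the subject image, the biometric readings, and the current position into a single payload for the server 2. The field names and the JSON encoding are assumptions; the patent does not fix a wire format.

```python
# Illustrative payload assembly for the upload performed when the shutter
# button is pressed: face image, subject image, biometrics, and position are
# bundled into one JSON body to be sent to the server 2 (transport assumed).
import base64
import json
import time

def build_emotion_upload(face_jpeg: bytes, subject_jpeg: bytes,
                         biometrics: dict, lat: float, lng: float) -> str:
    payload = {
        "timestamp": time.time(),
        "position": {"lat": lat, "lng": lng},
        "face_image": base64.b64encode(face_jpeg).decode("ascii"),
        "subject_image": base64.b64encode(subject_jpeg).decode("ascii"),
        "biometrics": biometrics,   # e.g. {"pulse_bpm": 72, "sweat": 0.3}
    }
    return json.dumps(payload)      # request body to send to the server 2
```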
The communication unit 31 connects to the network 5 and exchanges data with the server 2.
The display unit 32 has a function of displaying characters, images, and other information on a display screen under the control of the control unit 30. The display unit 32 is realized by, for example, a liquid crystal display or an organic EL display.
The operation unit 33 has a function of detecting operations by the user and outputting the detected user operations to the control unit 30. The operation unit 33 is realized by physical structures such as a shutter button, a power button, and a zoom button, and may also be realized by an operation surface (for example, a touch panel) that is superimposed on the display screen and detects the position at which the user touches the display screen.
The image DB 34 stores captured images taken by the imaging unit 35 and face images taken by the in-camera 36.
The imaging unit 35 includes a lens system composed of an imaging lens, a diaphragm, a zoom lens, a focus lens, and the like, a drive system that causes the lens system to perform focusing and zooming operations, and a solid-state image sensor array that photoelectrically converts the imaging light obtained by the lens system to generate an imaging signal. The solid-state image sensor array may be realized by, for example, a CCD (Charge Coupled Device) sensor array or a CMOS (Complementary Metal Oxide Semiconductor) sensor array.
The in-camera 36 is an imaging unit provided facing the user so as to image the user who is the photographer (operator). Specifically, for example, the in-camera 36 is provided, with its imaging direction facing inward, on the surface opposite to the surface on which the imaging lens of the imaging unit 35, whose imaging direction is toward the outside of the camera device (client 3a), is provided.
The emotion sensor unit 37 is a set of sensors that detect biological information such as the pulse, sweat, or body temperature of the user who operates the camera device (client 3a), as data for estimating the user's emotion. The emotion sensor unit 37 is provided, for example, on the shutter button or the operation surface, and can detect biological information when the user presses the shutter button or touches the operation surface to perform an imaging operation. The emotion sensor unit 37 outputs the detected biological information to the control unit 30, and the control unit 30 transmits it to the server 2 as data for estimating the user's emotion.
The position information acquisition unit 38 has a function of detecting the current position of the client 3a based on externally acquired signals. Specifically, for example, the position information acquisition unit 38 is realized by a GPS (Global Positioning System) positioning unit, receives radio waves from GPS satellites, detects the position at which the client 3a is present, and outputs the detected position information to the control unit 30. Besides GPS, the position information acquisition unit 38 may detect the position by, for example, transmission and reception with Wi-Fi (registered trademark), a mobile phone, PHS, or smartphone, or by near field communication.
Next, the configuration of the client 1 for presenting guidance information will be described. As shown in FIG. 1, the client 1 is realized by, for example, a glasses-type HMD. The glasses-type HMD is a glasses-type wearable device worn by the user, in which a pair of display units for the left eye and the right eye are arranged, in the worn state, immediately in front of the user's eyes, that is, where the lenses of ordinary glasses are located. The display units are transmissive, and by putting them into a through state, that is, a transparent or translucent state, the user's everyday life is not hindered even if the glasses-type HMD is worn at all times.
The control unit 10 is configured by, for example, a microcomputer including a CPU, a ROM, a RAM, a nonvolatile memory, and an interface unit, and controls each component of the client 1. As shown in FIG. 4, the control unit 10 according to the present embodiment functions as a display control unit 10a and a communication control unit 10b.
The communication unit 11 has a function of connecting to the network 5 and communicating with the server 2. For example, the communication unit 11 transmits the current position information and a request for emotion navigation to the server 2 under the control of the communication control unit 10b. The communication unit 11 also receives the guidance information returned from the server 2 in response to the emotion navigation request.
As described above, the display unit 12 is arranged immediately in front of the user's eyes, that is, where the lenses of ordinary glasses are located, when the user wears the client 1 (glasses-type HMD). The display unit 12 uses, for example, a liquid crystal panel, and by the display control unit 10a controlling the transmittance of the liquid crystal panel, it can be put into a through state, that is, a transparent or translucent state.
The operation unit 13 has a function of detecting user operations and outputs the detected user operations to the control unit 10. The operation unit 13 may be realized by a physical structure such as a power switch, or by a detection unit that detects a tap operation (or vibration) on the main body of the client 1 (glasses-type HMD). The operation unit 13 may also recognize gesture input by imaging the user's gestures with an outward-facing camera, or recognize gaze input by imaging the user's eyes with an inward-facing camera. Alternatively, the operation unit 13 may acquire the user's voice with a microphone that collects ambient sound and recognize voice input.
The storage unit 14 stores programs for executing the various processes performed by the control unit 10.
The position information acquisition unit 15 has a function of detecting the current position of the client 1 based on externally acquired signals. The position information acquisition unit 15 outputs the acquired current position information to the control unit 10.
<3-1. Navigation according to an emotion map>
First, navigation according to an emotion map will be described with reference to FIGS. 5 to 9.
FIG. 5 is a sequence diagram showing the operation process when generating an emotion map. As shown in FIG. 5, first, in step S103, the client 3 acquires data for estimating the user's emotion. Specifically, in the case of the client 3a (see FIG. 3), for example, the data is acquired by capturing a face image with the in-camera 36 or by detecting biological information with the emotion sensor unit 37.
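A minimal sketch of the emotion-map (happiness-map) generation that follows from these uploads is shown below: happiness values estimated from the collected data are averaged into a latitude/longitude grid. The grid size and data layout are illustrative assumptions.

```python
# Illustrative sketch of happiness-map generation: estimated happiness values
# uploaded together with position information are averaged per grid cell.
from collections import defaultdict
from statistics import mean

def grid_cell(lat: float, lng: float, cell_deg: float = 0.001) -> tuple:
    """Quantize a position into a grid cell roughly 100 m on a side (assumed)."""
    return (round(lat / cell_deg), round(lng / cell_deg))

def build_happiness_map(samples: list[tuple[float, float, float]]) -> dict:
    """samples: (latitude, longitude, happiness) tuples collected from clients 3."""
    cells = defaultdict(list)
    for lat, lng, happiness in samples:
        cells[grid_cell(lat, lng)].append(happiness)
    return {cell: mean(values) for cell, values in cells.items()}
```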
Next, navigation according to the generated emotion map will be described with reference to FIGS. 6 to 9. Although there are various kinds of navigation, here a case will be described in which, for example, route guidance from the current location to a destination, or route guidance around the current location, is performed according to the emotion map. As the emotion map used for navigation, an emotion map indicating the correlation between happiness level and position (also referred to as a happiness map) is used as an example. In this way, happiness navigation is realized in the present embodiment.
First, as navigation 1, a case where route guidance from the current location to a destination is performed according to the emotion map will be described with reference to FIG. 6. FIG. 6 is a sequence diagram showing the operation process of navigation according to the emotion map. As shown in FIG. 6, first, in step S203, the client 1 acquires information on the current location and the destination. For example, the client 1 acquires the current position information with the position information acquisition unit 15, and acquires the destination by gesture input, voice input, or a tap operation.
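Under the same grid encoding as the previous sketch, route guidance according to the happiness map can be illustrated as scoring each candidate route by the mean happiness of the grid cells it passes through and recommending the happiest route; the route representation is an assumption made for illustration.

```python
# Sketch of navigation 1: score candidate routes by the mean happiness of the
# grid cells they traverse and recommend the happiest one.
from statistics import mean

def grid_cell(lat: float, lng: float, cell_deg: float = 0.001) -> tuple:
    return (round(lat / cell_deg), round(lng / cell_deg))

def route_happiness(route: list[tuple[float, float]], happiness_map: dict,
                    default: float = 0.5) -> float:
    """route: ordered (lat, lng) points; unknown cells fall back to a neutral value."""
    return mean(happiness_map.get(grid_cell(lat, lng), default) for lat, lng in route)

def recommend_route(candidate_routes: list, happiness_map: dict) -> list:
    return max(candidate_routes, key=lambda route: route_happiness(route, happiness_map))
```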
Next, as navigation 2, a case where route guidance around the current location is performed according to the emotion map will be described with reference to FIG. 7. FIG. 7 is a sequence diagram showing the operation process of another navigation example according to the emotion map. As shown in FIG. 7, first, in step S233, the client 1 acquires information on the current location. For example, the client 1 acquires the current position information with the position information acquisition unit 15.
Next, content navigation (content proposal) will be described with reference to FIGS. 10 to 12.
FIG. 10 is a sequence diagram showing the operation process when creating data for the content-emotion DB 23. As shown in FIG. 10, first, in step S303, the client 3 for data collection reproduces content data such as photographs, video, or music. The reproduction of the content data may be executed in response to a user operation. Here, the client 3 may be the client 3b realized by a smartphone as shown in FIG. 1. The client 3b reproduces the content data using its display unit and audio output unit.
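As an illustration of how content-emotion data might be accumulated from such playback, the sketch below averages the happiness observed while each kind of content was viewed, keyed by a coarse content feature; the feature extraction is an assumption, not the patented learning method.

```python
# Minimal sketch of building content-emotion data: each played content item is
# reduced to a feature key, and the happiness observed during viewing is
# averaged per key. Field names are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def content_features(meta: dict) -> tuple:
    return (meta["media_type"], meta.get("genre"), meta.get("tempo_bucket"))

class ContentEmotionDB:
    def __init__(self) -> None:
        self._samples = defaultdict(list)

    def add_sample(self, meta: dict, happiness: float) -> None:
        self._samples[content_features(meta)].append(happiness)

    def expected_happiness(self, meta: dict, default: float = 0.5) -> float:
        values = self._samples.get(content_features(meta))
        return mean(values) if values else default

db = ContentEmotionDB()
db.add_sample({"media_type": "music", "genre": "pop", "tempo_bucket": "fast"}, 0.8)
print(db.expected_happiness({"media_type": "music", "genre": "pop", "tempo_bucket": "fast"}))
```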
Next, navigation using the data of the generated content-emotion DB 23 will be described with reference to FIGS. 11 and 12. In the content navigation according to the present embodiment, content for guiding the user toward a predetermined emotion is provided (proposed) to the user.
Next, environmental information navigation (environment proposal) will be described with reference to FIGS. 13 to 15.
FIG. 13 is a sequence diagram showing the operation process when creating data for the environment-emotion DB 22. As shown in FIG. 13, first, in step S503, the client 3 for data collection collects surrounding environmental information. The collection of the environmental information may be performed periodically or continuously, may be triggered by an environmental change such as a change of position, or may be performed in response to a user operation. The client 3 collects, as information about the environment surrounding the user, information such as the time, position, atmospheric pressure, temperature, humidity, surrounding objects, and the state of those objects.
Next, navigation using the data of the generated environment-emotion DB 22 will be described with reference to FIGS. 14 and 15. In the environmental information navigation according to the present embodiment, a notification to improve the environment, or an environment improvement plan for guiding the user toward a predetermined emotion, is provided (proposed) to the user.
As described above, in the communication system according to the embodiment of the present disclosure, the user can be guided, according to the emotion map, toward actions for attaining a predetermined emotion. That is, the communication system according to the present embodiment refers to the current position information received from the client 1 and the emotion map, and presents the user with guidance information that guides the user to places or areas associated with a predetermined emotion, either in the user's surroundings or along one of a plurality of routes to the destination.
(1)
A communication system including:
an acquisition unit configured to acquire current position information of a user;
a guidance information generation unit configured to generate guidance information according to the current position information and map information to which emotion data is mapped; and
a providing unit configured to provide the guidance information to the user.
(2)
The communication system according to (1), wherein the guidance information includes information that indicates to the user a traveling direction according to the emotion data mapped to the map information.
(3)
The communication system according to (1) or (2), wherein the acquisition unit acquires data for estimating the emotion of the user.
(4)
The communication system according to any one of (1) to (3), further including a map information generation unit configured to generate the map information to which the emotion data is mapped.
(5)
The communication system according to (4), wherein the map information generation unit generates the map information according to data for estimating the user's emotion or already-estimated emotion data uploaded from a client, and position information.
(6)
The communication system according to (4) or (5), wherein the map information generation unit also maps average life-expectancy information of each region.
(7)
The communication system according to any one of (1) to (6), further including a first emotion estimation unit configured to estimate emotion data based on content data uploaded from a client.
(8)
The communication system according to (7), wherein the first emotion estimation unit estimates emotion data corresponding to the content data based on a learning result obtained by learning a relationship between feature amounts of the content data and emotion data.
(9)
The communication system according to (7) or (8), wherein the guidance information generation unit generates guidance information for recommending to the user predetermined content data corresponding to emotion data estimated based on content data currently being viewed by the user.
(10)
The communication system according to any one of (1) to (9), further including a second emotion estimation unit configured to estimate emotion data based on environment information around the user uploaded from a client.
(11)
The communication system according to (10), wherein the second emotion estimation unit estimates emotion data corresponding to the environment information based on a learning result obtained by learning a relationship between the environment information and emotion data.
(12)
The communication system according to (10) or (11), wherein the guidance information generation unit generates guidance information for proposing an improvement of the environment information to the user, based on emotion data estimated from environment information around the current user.
(13)
A control method including:
acquiring current position information of a user from a client;
generating guidance information according to the current position information and map information to which emotion data is mapped; and
controlling the guidance information to be provided to the user.
(14)
A storage medium storing a program for causing a computer to function as:
an acquisition unit configured to acquire current position information of a user;
a guidance information generation unit configured to generate guidance information according to the current position information and map information to which emotion data is mapped; and
a providing unit configured to provide the guidance information to the user.
10 Control unit
11 Communication unit
12 Display unit
13 Operation unit
14 Storage unit
2 Server
20 Control unit
20a Map information generation unit
20b Guidance information generation unit
20c Emotion estimation unit
20d Providing unit
21 Communication unit
22 Environment-emotion DB
23 Content-emotion DB
24 Emotion map DB
3, 3a to 3c Clients (for data collection)
30 Control unit
31 Communication unit
32 Display unit
33 Operation unit
34 Image DB
35 Imaging unit
36 In-camera
37 Emotion sensor unit
38 Position information acquisition unit
5 Network
Claims (14)
- A communication system comprising: an acquisition unit configured to acquire current position information of a user; a guidance information generation unit configured to generate guidance information according to the current position information and map information to which emotion data is mapped; and a providing unit configured to provide the guidance information to the user.
- The communication system according to claim 1, wherein the guidance information includes information that indicates to the user a traveling direction according to the emotion data mapped to the map information.
- The communication system according to claim 1, wherein the acquisition unit acquires data for estimating the emotion of the user.
- The communication system according to claim 1, further comprising a map information generation unit configured to generate the map information to which the emotion data is mapped.
- The communication system according to claim 4, wherein the map information generation unit generates the map information according to data for estimating the user's emotion or already-estimated emotion data uploaded from a client, and position information.
- The communication system according to claim 4, wherein the map information generation unit also maps average life-expectancy information of each region.
- The communication system according to claim 1, further comprising a first emotion estimation unit configured to estimate emotion data based on content data uploaded from a client.
- The communication system according to claim 7, wherein the first emotion estimation unit estimates emotion data corresponding to the content data based on a learning result obtained by learning a relationship between feature amounts of the content data and emotion data.
- The communication system according to claim 7, wherein the guidance information generation unit generates guidance information for recommending to the user predetermined content data corresponding to emotion data estimated based on content data currently being viewed by the user.
- The communication system according to claim 1, further comprising a second emotion estimation unit configured to estimate emotion data based on environment information around the user uploaded from a client.
- The communication system according to claim 10, wherein the second emotion estimation unit estimates emotion data corresponding to the environment information based on a learning result obtained by learning a relationship between the environment information and emotion data.
- The communication system according to claim 10, wherein the guidance information generation unit generates guidance information for proposing an improvement of the environment information to the user, based on emotion data estimated from environment information around the current user.
- A control method comprising: acquiring current position information of a user from a client; generating guidance information according to the current position information and map information to which emotion data is mapped; and controlling the guidance information to be provided to the user.
- A storage medium storing a program for causing a computer to function as: an acquisition unit configured to acquire current position information of a user; a guidance information generation unit configured to generate guidance information according to the current position information and map information to which emotion data is mapped; and a providing unit configured to provide the guidance information to the user.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15783327.8A EP3136055A4 (en) | 2014-04-21 | 2015-01-20 | Communication system, control method, and storage medium |
US15/302,093 US10488208B2 (en) | 2014-04-21 | 2015-01-20 | Communication system, control method, and storage medium |
CN201580019990.6A CN106255866B (zh) | 2014-04-21 | 2015-01-20 | 通信系统、控制方法以及存储介质 |
BR112016023982A BR112016023982A2 (pt) | 2014-04-21 | 2015-01-20 | sistema de comunicação, método de controle, e, mídia de armazenamento |
JP2016514736A JP6574937B2 (ja) | 2014-04-21 | 2015-01-20 | 通信システム、制御方法、および記憶媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014087608 | 2014-04-21 | ||
JP2014-087608 | 2014-04-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015162949A1 true WO2015162949A1 (ja) | 2015-10-29 |
Family
ID=54332123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/051411 WO2015162949A1 (ja) | 2014-04-21 | 2015-01-20 | 通信システム、制御方法、および記憶媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10488208B2 (ja) |
EP (1) | EP3136055A4 (ja) |
JP (1) | JP6574937B2 (ja) |
CN (1) | CN106255866B (ja) |
BR (1) | BR112016023982A2 (ja) |
WO (1) | WO2015162949A1 (ja) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106534320A (zh) * | 2016-11-17 | 2017-03-22 | 包磊 | 一种用户状态地图生成方法和装置 |
JP2017181449A (ja) * | 2016-03-31 | 2017-10-05 | カシオ計算機株式会社 | 電子機器、経路検索方法及びプログラム |
WO2017217038A1 (ja) * | 2016-06-14 | 2017-12-21 | ソニー株式会社 | 情報処理装置および記憶媒体 |
JP2018018492A (ja) * | 2016-07-15 | 2018-02-01 | パナソニックIpマネジメント株式会社 | コンテンツ提示のための情報処理装置、情報処理装置の制御方法、及び制御プログラム |
JP2018100936A (ja) * | 2016-12-21 | 2018-06-28 | トヨタ自動車株式会社 | 車載装置及び経路情報提示システム |
WO2018123041A1 (ja) * | 2016-12-28 | 2018-07-05 | 本田技研工業株式会社 | 情報処理システム、及び情報処理装置 |
WO2018123055A1 (ja) * | 2016-12-28 | 2018-07-05 | 本田技研工業株式会社 | 情報提供システム |
JP2018132551A (ja) * | 2017-02-13 | 2018-08-23 | 沖電気工業株式会社 | 情報処理装置、情報処理方法およびプログラム |
JP2018200192A (ja) * | 2017-05-25 | 2018-12-20 | 本田技研工業株式会社 | 地点提案装置及び地点提案方法 |
JP2019036201A (ja) * | 2017-08-18 | 2019-03-07 | ヤフー株式会社 | 出力制御装置、出力制御方法及び出力制御プログラム |
KR20190119571A (ko) * | 2017-02-20 | 2019-10-22 | 소니 주식회사 | 정보 처리 시스템 및 정보 처리 방법 |
JP2020149592A (ja) * | 2019-03-15 | 2020-09-17 | ヤフー株式会社 | 提供装置、提供方法及び提供プログラム |
JP2020154870A (ja) * | 2019-03-20 | 2020-09-24 | ヤフー株式会社 | 選択装置、選択方法及び選択プログラム |
JP2020190776A (ja) * | 2019-05-17 | 2020-11-26 | ヤフー株式会社 | 判定装置、判定方法及び判定プログラム |
JP2021124922A (ja) * | 2020-02-04 | 2021-08-30 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及びプログラム |
WO2021241373A1 (ja) * | 2020-05-25 | 2021-12-02 | パナソニックIpマネジメント株式会社 | イベント情報評価装置、生体情報抽出システム、イベント情報評価システムおよび生体情報抽出装置 |
JP7044925B1 (ja) | 2020-12-04 | 2022-03-30 | 株式会社メタリアル | メガネ型ウェアラブル端末、広告表示制御方法、広告表示制御プログラム、広告提供装置、広告提供方法、広告提供プログラムおよび広告提供システム |
WO2022118455A1 (ja) * | 2020-12-04 | 2022-06-09 | 株式会社ロゼッタ | メガネ型ウェアラブル端末、広告表示制御方法、広告表示制御プログラム、広告提供装置、広告提供方法、広告提供プログラムおよび広告提供システム |
WO2022180770A1 (ja) * | 2021-02-26 | 2022-09-01 | 享 山中 | プログラム、情報処理装置、及び情報処理方法 |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11563340B2 (en) | 2017-03-30 | 2023-01-24 | Gs Yuasa International Ltd. | Power supply device, server, and power supply device management system |
JP6897377B2 (ja) * | 2017-07-11 | 2021-06-30 | トヨタ自動車株式会社 | 情報提供装置 |
US11418467B2 (en) * | 2017-09-12 | 2022-08-16 | Get Together, Inc. | Method for delivery of an encoded EMS profile to a user device |
CN109077741A (zh) * | 2018-08-21 | 2018-12-25 | 华南师范大学 | 心理状态识别方法及系统 |
US11460314B1 (en) * | 2019-06-28 | 2022-10-04 | GM Cruise Holdings LLC. | Sentiment-based autonomous vehicle user interaction and routing recommendations |
CN110826436A (zh) * | 2019-10-23 | 2020-02-21 | 上海能塔智能科技有限公司 | 情绪数据传输及处理方法、装置、终端设备、云平台 |
US12055404B2 (en) * | 2021-05-04 | 2024-08-06 | At&T Intellectual Property I, L.P. | Sentiment-based navigation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007034664A (ja) * | 2005-07-27 | 2007-02-08 | Sony Corp | 感情推定装置および方法、記録媒体、および、プログラム |
JP2009098446A (ja) * | 2007-10-17 | 2009-05-07 | Sony Corp | 情報提供システム、情報提供装置、情報提供方法、端末装置、表示方法 |
WO2013190689A1 (ja) * | 2012-06-21 | 2013-12-27 | トヨタ自動車 株式会社 | ルートの検索装置及びルートの検索方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4707877B2 (ja) * | 2001-06-11 | 2011-06-22 | パイオニア株式会社 | ドライブ候補地提示装置及び方法、ナビゲーション装置及びコンピュータプログラム |
CN101382936A (zh) * | 2007-12-25 | 2009-03-11 | 苏州中茵泰格科技有限公司 | 一种主动式互动导航的方法和系统 |
JP5024093B2 (ja) | 2008-02-08 | 2012-09-12 | ソニー株式会社 | 画像の撮影装置、その表示装置および画像データの管理システム |
FR2938672A1 (fr) * | 2008-11-19 | 2010-05-21 | Alcatel Lucent | Procede et dispositif d'enregistrement de donnees representatives de sentiments ressentis par des personnes dans des lieux localisables, et serveur associe |
EP2553396B1 (en) * | 2010-03-26 | 2018-12-19 | Koninklijke Philips N.V. | A method and navigation device for providing at least one route |
US8364395B2 (en) * | 2010-12-14 | 2013-01-29 | International Business Machines Corporation | Human emotion metrics for navigation plans and maps |
BR112014010841A8 (pt) * | 2011-11-09 | 2017-06-20 | Koninklijke Philips Nv | método de provisão de um serviço em uma rede de dados, dispositivo móvel de comunicação eletrônica, e, software de controle para permitir a realização de um método |
US9389091B2 (en) * | 2011-12-20 | 2016-07-12 | Safe Path Technologies, LLC | Methods and apparatus for attribute-based navigation |
-
2015
- 2015-01-20 JP JP2016514736A patent/JP6574937B2/ja active Active
- 2015-01-20 WO PCT/JP2015/051411 patent/WO2015162949A1/ja active Application Filing
- 2015-01-20 US US15/302,093 patent/US10488208B2/en active Active
- 2015-01-20 BR BR112016023982A patent/BR112016023982A2/pt not_active Application Discontinuation
- 2015-01-20 CN CN201580019990.6A patent/CN106255866B/zh active Active
- 2015-01-20 EP EP15783327.8A patent/EP3136055A4/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007034664A (ja) * | 2005-07-27 | 2007-02-08 | Sony Corp | 感情推定装置および方法、記録媒体、および、プログラム |
JP2009098446A (ja) * | 2007-10-17 | 2009-05-07 | Sony Corp | 情報提供システム、情報提供装置、情報提供方法、端末装置、表示方法 |
WO2013190689A1 (ja) * | 2012-06-21 | 2013-12-27 | トヨタ自動車 株式会社 | ルートの検索装置及びルートの検索方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3136055A4 * |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017181449A (ja) * | 2016-03-31 | 2017-10-05 | カシオ計算機株式会社 | 電子機器、経路検索方法及びプログラム |
JPWO2017217038A1 (ja) * | 2016-06-14 | 2019-04-18 | ソニー株式会社 | 情報処理装置および記憶媒体 |
WO2017217038A1 (ja) * | 2016-06-14 | 2017-12-21 | ソニー株式会社 | 情報処理装置および記憶媒体 |
US11185998B2 (en) | 2016-06-14 | 2021-11-30 | Sony Corporation | Information processing device and storage medium |
JP2021073552A (ja) * | 2016-06-14 | 2021-05-13 | ソニーグループ株式会社 | 情報処理装置、情報処理方法及びプログラム |
JP2018018492A (ja) * | 2016-07-15 | 2018-02-01 | パナソニックIpマネジメント株式会社 | コンテンツ提示のための情報処理装置、情報処理装置の制御方法、及び制御プログラム |
CN106534320A (zh) * | 2016-11-17 | 2017-03-22 | 包磊 | 一种用户状态地图生成方法和装置 |
EP3343175A1 (en) * | 2016-12-21 | 2018-07-04 | Toyota Jidosha Kabushiki Kaisha | In-vehicle device and route information presentation system |
JP2018100936A (ja) * | 2016-12-21 | 2018-06-28 | トヨタ自動車株式会社 | 車載装置及び経路情報提示システム |
CN108225366A (zh) * | 2016-12-21 | 2018-06-29 | 丰田自动车株式会社 | 车载装置与路线信息提示系统 |
WO2018123041A1 (ja) * | 2016-12-28 | 2018-07-05 | 本田技研工業株式会社 | 情報処理システム、及び情報処理装置 |
WO2018123055A1 (ja) * | 2016-12-28 | 2018-07-05 | 本田技研工業株式会社 | 情報提供システム |
JP7030062B2 (ja) | 2016-12-28 | 2022-03-04 | 本田技研工業株式会社 | 情報処理システム、及び情報処理装置 |
US11237009B2 (en) | 2016-12-28 | 2022-02-01 | Honda Motor Co., Ltd. | Information provision system for route proposition based on emotion information |
CN110088575A (zh) * | 2016-12-28 | 2019-08-02 | 本田技研工业株式会社 | 信息处理系统和信息处理装置 |
JPWO2018123041A1 (ja) * | 2016-12-28 | 2019-08-08 | 本田技研工業株式会社 | 情報処理システム、及び情報処理装置 |
JPWO2018123055A1 (ja) * | 2016-12-28 | 2019-10-31 | 本田技研工業株式会社 | 情報提供システム |
US11435201B2 (en) | 2016-12-28 | 2022-09-06 | Honda Motor Co., Ltd. | Information processing system and information processing device |
JP2018132551A (ja) * | 2017-02-13 | 2018-08-23 | 沖電気工業株式会社 | 情報処理装置、情報処理方法およびプログラム |
KR102492046B1 (ko) * | 2017-02-20 | 2023-01-26 | 소니그룹주식회사 | 정보 처리 시스템 및 정보 처리 방법 |
KR20190119571A (ko) * | 2017-02-20 | 2019-10-22 | 소니 주식회사 | 정보 처리 시스템 및 정보 처리 방법 |
US11413519B2 (en) | 2017-02-20 | 2022-08-16 | Sony Corporation | Information processing system and information processing method |
JP2018200192A (ja) * | 2017-05-25 | 2018-12-20 | 本田技研工業株式会社 | 地点提案装置及び地点提案方法 |
JP2019036201A (ja) * | 2017-08-18 | 2019-03-07 | ヤフー株式会社 | 出力制御装置、出力制御方法及び出力制御プログラム |
JP7090573B2 (ja) | 2019-03-15 | 2022-06-24 | ヤフー株式会社 | 提供装置、提供方法及び提供プログラム |
JP2020149592A (ja) * | 2019-03-15 | 2020-09-17 | ヤフー株式会社 | 提供装置、提供方法及び提供プログラム |
JP2020154870A (ja) * | 2019-03-20 | 2020-09-24 | ヤフー株式会社 | 選択装置、選択方法及び選択プログラム |
JP7175810B2 (ja) | 2019-03-20 | 2022-11-21 | ヤフー株式会社 | 選択装置、選択方法及び選択プログラム |
JP7160757B2 (ja) | 2019-05-17 | 2022-10-25 | ヤフー株式会社 | 判定装置、判定方法及び判定プログラム |
JP2020190776A (ja) * | 2019-05-17 | 2020-11-26 | ヤフー株式会社 | 判定装置、判定方法及び判定プログラム |
JP2021124922A (ja) * | 2020-02-04 | 2021-08-30 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及びプログラム |
WO2021241373A1 (ja) * | 2020-05-25 | 2021-12-02 | パナソニックIpマネジメント株式会社 | イベント情報評価装置、生体情報抽出システム、イベント情報評価システムおよび生体情報抽出装置 |
WO2022118455A1 (ja) * | 2020-12-04 | 2022-06-09 | 株式会社ロゼッタ | メガネ型ウェアラブル端末、広告表示制御方法、広告表示制御プログラム、広告提供装置、広告提供方法、広告提供プログラムおよび広告提供システム |
JP2022089738A (ja) * | 2020-12-04 | 2022-06-16 | 株式会社メタリアル | メガネ型ウェアラブル端末、広告表示制御方法、広告表示制御プログラム、広告提供装置、広告提供方法、広告提供プログラムおよび広告提供システム |
JP7044925B1 (ja) | 2020-12-04 | 2022-03-30 | 株式会社メタリアル | メガネ型ウェアラブル端末、広告表示制御方法、広告表示制御プログラム、広告提供装置、広告提供方法、広告提供プログラムおよび広告提供システム |
WO2022180770A1 (ja) * | 2021-02-26 | 2022-09-01 | 享 山中 | プログラム、情報処理装置、及び情報処理方法 |
Also Published As
Publication number | Publication date |
---|---|
EP3136055A4 (en) | 2018-04-11 |
JPWO2015162949A1 (ja) | 2017-04-13 |
EP3136055A1 (en) | 2017-03-01 |
US10488208B2 (en) | 2019-11-26 |
CN106255866B (zh) | 2019-09-13 |
JP6574937B2 (ja) | 2019-09-18 |
CN106255866A (zh) | 2016-12-21 |
US20170205240A1 (en) | 2017-07-20 |
BR112016023982A2 (pt) | 2017-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6574937B2 (ja) | 通信システム、制御方法、および記憶媒体 | |
JP7077376B2 (ja) | 撮像装置およびその制御方法 | |
JP6094190B2 (ja) | 情報処理装置および記録媒体 | |
KR102184272B1 (ko) | 글래스 타입 단말기 및 이의 제어방법 | |
CN110506249B (zh) | 信息处理设备、信息处理方法和记录介质 | |
JP6360619B2 (ja) | 再生制御方法、再生制御装置、コンピュータプログラム及びコンピュータ読み取り可能な記憶媒体 | |
JP7092108B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
US20140123015A1 (en) | Information processing system, information processing apparatus, and storage medium | |
TWI681363B (zh) | 用於基於關聯性之視覺媒體項目修改之裝置及方法、與相關行動通訊設備、伺服器及電腦可讀媒體 | |
WO2022227393A1 (zh) | 图像拍摄方法及装置、电子设备和计算机可读存储介质 | |
CN108848313B (zh) | 一种多人拍照方法、终端和存储介质 | |
US20210350823A1 (en) | Systems and methods for processing audio and video using a voice print | |
KR20090098505A (ko) | 상태 정보를 이용하여 미디어 신호를 생성하는 방법 및장치 | |
WO2015068440A1 (ja) | 情報処理装置、制御方法およびプログラム | |
JP2019110509A (ja) | 撮像装置及びその制御方法、プログラム、記憶媒体 | |
JP2015126451A (ja) | 画像の記録方法、電子機器およびコンピュータ・プログラム | |
CN113574525A (zh) | 媒体内容推荐方法及设备 | |
US20230005471A1 (en) | Responding to a user query based on captured images and audio | |
JP2023057157A (ja) | 撮像装置及びその制御方法、プログラム | |
JP2018045558A (ja) | 制御装置、制御システム及び制御方法 | |
CN112099639A (zh) | 展示属性的调整方法、装置、展示设备和存储介质 | |
CN114079730B (zh) | 一种拍摄方法和拍摄系统 | |
WO2019044135A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP7199808B2 (ja) | 撮像装置およびその制御方法 | |
CN117061849A (zh) | 捕获和存储物理环境的图像 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15783327 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016514736 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15302093 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2015783327 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015783327 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112016023982 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112016023982 Country of ref document: BR Kind code of ref document: A2 Effective date: 20161014 |