WO2016181670A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number
- WO2016181670A1 (PCT/JP2016/052491)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- information
- prediction
- information processing
- output control
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/041—Abduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a program.
- Patent Document 1 discloses a technique for obtaining a route and time to a destination by learning a user's activity state as a probabilistic state transition model using time-series data obtained from a wearable sensor.
- the technique of Patent Document 1 has room for improvement in the usefulness of the information provided based on the prediction result.
- conventionally, a prediction result related to a certain user has only been used for providing information to that same user.
- the present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of providing useful information to other users using a prediction result related to a certain user.
- there is provided an information processing apparatus including an output control unit that outputs, targeting the first user, prediction information indicating a prediction result of the context information of a second user related to the context information of a first user, the prediction information being generated based on a history of the context information of the second user.
- there is provided an information processing method including outputting, by a processor and targeting the first user, prediction information indicating a prediction result of the context information of a second user related to the context information of a first user, the prediction information being generated based on a history of the context information of the second user.
- there is provided a program for causing a computer to function as an output control unit that outputs, targeting the first user, prediction information indicating a prediction result of the context information of a second user related to the context information of a first user, the prediction information being generated based on a history of the context information of the second user.
- FIG. 1 is a diagram for explaining an overview of an information processing system according to the present embodiment.
- An image 10 illustrated in FIG. 1 is an image in which information 11 is superimposed and displayed, by augmented reality (AR) technology, on an image obtained by capturing a real space.
- the AR technology is a technology that superimposes additional information on the real world and presents it to the user.
- Information presented to the user in AR technology is also called annotation, and can be visualized using various forms of virtual objects such as text, icons or animation.
- the user can view an image on which an annotation as shown in FIG. 1 is superimposed and displayed using various user terminals (terminal devices).
- the user terminal include a smartphone, an HMD (Head Mounted Display), a car navigation system, and the like.
- a transmissive HMD is a device having a display unit that is placed in front of the user's eyes when worn, and that displays images such as text or figures in a transparent or translucent state, thereby superimposing annotations on the real-space landscape.
- An image displayed on the display unit (transmissive display) of the user terminal, including the background visible through the display and the superimposed annotations, is also referred to below as a real space image. That is, the image 10 is a real space image.
- the real space image 10 shown in FIG. 1 is an example of an image displayed when the user visits the cafeteria at lunch time.
- annotations 11, each indicating the predicted remaining time until another user who is eating stands up, are displayed in association with those other users. As a result, the user can wait near the other user who is predicted to have the least remaining time.
- information indicating the prediction result of a certain user can be useful for other users.
- the information processing system according to the present embodiment can improve convenience for all users by mutually visualizing, to other users, information indicating the prediction result for a certain user.
- a person about whom information is collected by the information processing system according to the present embodiment and/or to whom information is presented is referred to as a user.
- a user who receives information is also referred to as a first user.
- a user associated with the presented information is also referred to as a second user. That is, the first user is presented with information indicating the prediction result related to the second user.
- a user other than the first user and the second user may be referred to as a third user.
- the first user, the second user, and the third user are simply referred to as users when it is not necessary to distinguish them.
- FIG. 2 is a block diagram illustrating an example of a logical configuration of the information processing system 1 according to the present embodiment.
- the information processing system 1 according to the present embodiment includes a server 100, a user terminal 200, a recognition device 300, an output device 400, and an external device 500.
- the server 100 includes a communication unit 110, a context information DB 120, a predictor DB 130, and a processing unit 140.
- the communication unit 110 is a communication module for performing transmission / reception of data between the server 100 and other devices by wire / wireless.
- the communication unit 110 communicates with the user terminal 200, the recognition device 300, the output device 400, and the external device 500 directly or indirectly via another node.
- the context information DB 120 has a function of storing user context information.
- the context information is information about the user. Details will be described later.
- the predictor DB 130 has a function of storing a predictor for predicting context information.
- the processing unit 140 provides various functions of the server 100. As illustrated in FIG. 2, the processing unit 140 includes an acquisition unit 141, a learning unit 142, a generation unit 143, and an output control unit 144. The processing unit 140 may further include components other than these; that is, it can perform operations other than the operations of these components.
- the acquisition unit 141 has a function of acquiring context information. For example, the acquisition unit 141 acquires context information recognized by the user terminal 200 and the recognition device 300. Then, the acquisition unit 141 stores the acquired context information in the context information DB 120.
- the learning unit 142 has a function of learning time-series changes in context information. For example, the learning unit 142 learns a predictor for predicting a time-series change of context information based on the history of context information stored in the context information DB 120. Then, the learning unit 142 stores the learned predictor in the predictor DB 130.
- the generation unit 143 has a function of generating prediction information (annotations) to be presented to the first user. For example, the generation unit 143 generates prediction information based on the history of the context information of the second user. Specifically, the generation unit 143 inputs the second user's real-time context information, acquired by the acquisition unit 141, into the second user's predictor stored in the predictor DB 130, thereby predicting the second user's context information. Then, the generation unit 143 generates prediction information indicating the predicted result.
- the output control unit 144 has a function of outputting the prediction information generated by the generation unit 143 for the first user. For example, the output control unit 144 causes the first user's user terminal 200 or the environment-installed output device 400 around the user terminal 200 to output the prediction information.
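The flow through these four components can be summarized as: the acquisition unit stores context information, the learning unit turns the stored history into a predictor, the generation unit runs the predictor on real-time input, and the output control unit delivers the result to the first user. Below is a minimal, purely illustrative sketch of that pipeline; all class and method names, and the trivial stand-in predictor, are hypothetical and not taken from the disclosure:

```python
class ProcessingUnit:
    """Illustrative sketch of the processing unit 140 pipeline (all names hypothetical)."""

    def __init__(self):
        self.context_db = {}    # user_id -> context history (context information DB 120)
        self.predictor_db = {}  # user_id -> learned predictor (predictor DB 130)

    def acquire(self, user_id, context):
        """Acquisition unit 141: store recognized context information."""
        self.context_db.setdefault(user_id, []).append(context)

    def learn(self, user_id, train):
        """Learning unit 142: learn a predictor from the user's context history."""
        self.predictor_db[user_id] = train(self.context_db.get(user_id, []))

    def generate(self, user_id, realtime_context):
        """Generation unit 143: predict context information from real-time input."""
        return self.predictor_db[user_id](realtime_context)

    def output(self, terminal, prediction):
        """Output control unit 144: deliver prediction information to the first user."""
        terminal.append(prediction)


unit = ProcessingUnit()
unit.acquire("second_user", "sitting")
unit.acquire("second_user", "walking")
# trivial stand-in "training": the predictor just echoes the latest context in the history
unit.learn("second_user", lambda history: (lambda ctx: history[-1] if history else ctx))
first_user_terminal = []
unit.output(first_user_terminal, unit.generate("second_user", "sitting"))
print(first_user_terminal)  # ['walking']
```

In the actual system the predictor would be a learned model such as a state transition model or HMM, as described below.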
- the terms “environmental type” and “environment-installed” are used for devices that are fixedly or semi-fixedly provided in the real space.
- for example, a digital signage is an environment-installed output device 400, and a surveillance camera is an environment-installed recognition device 300.
- the user terminal 200 includes a communication unit 210, a recognition unit 220, and an output unit 230.
- the communication unit 210 is a communication module for performing transmission / reception of data between the user terminal 200 and another device by wire / wireless.
- the communication unit 210 communicates with the server 100 directly or indirectly via another node.
- the recognition unit 220 has a function of recognizing context information.
- the recognized context information is transmitted to the server 100 by the communication unit 210.
- the recognition unit 220 may include various sensors, and recognizes context information based on the detected sensor information.
- the recognition unit 220 can include various sensors such as a camera, a microphone, an acceleration sensor, a gyro sensor, a GPS (Global Positioning System), and a geomagnetic sensor.
- the recognition unit 220 may include a communication interface that detects information about surrounding radio waves such as Wi-Fi (registered trademark) and Bluetooth (registered trademark).
- the recognition unit 220 may include a sensor that detects information about the environment such as temperature, humidity, wind speed, atmospheric pressure, illuminance, and substances (stress substances such as pollen, smells, and the like).
- the recognition unit 220 may include a sensor that detects biological information such as body temperature, sweating, electrocardiogram, pulse wave, heartbeat, blood pressure, blood glucose, myoelectricity, and electroencephalogram.
- the recognition unit 220 may include an input unit that receives input of context information from the user.
- the output unit 230 has a function of outputting information from the server 100.
- the output unit 230 may include a display unit that can display an image, a speaker that can output sound, a vibration motor that can vibrate, and the like.
- the output unit 230 can be realized as a transmissive display.
- the recognition device 300 includes a communication unit 310 and a recognition unit 320.
- the configurations of the communication unit 310 and the recognition unit 320 are the same as those of the communication unit 210 and the recognition unit 220.
- the recognition apparatus 300 can be realized by, for example, a wearable device, an environment-installed camera, an environment-installed microphone, an IoT (Internet of Things) device, an IoE (Internet of Everything) device, and the like.
- the output device 400 includes a communication unit 410 and an output unit 420.
- the configurations of the communication unit 410 and the output unit 420 are the same as those of the communication unit 210 and the output unit 230.
- the output device 400 can be realized by, for example, a digital signage, an in-vehicle guidance display device, a projection mapping device, a voice guidance device, or the like.
- the external device 500 is a device having information about a user.
- the external device 500 is an SNS (social networking service) server, a mail server, a server that provides a service using location information, or the like.
- the external device 500 transmits the user context information to the server 100.
- although one user terminal 200, one recognition device 300, one output device 400, and one external device 500 are shown, a plurality of each may be provided.
- the information processing system 1 includes the environment-installed recognition device 300 and output device 400. For this reason, the information processing system 1 can generate prediction information about a user who does not carry a user terminal 200, and can also output prediction information to such a user.
- the context information is information indicating a situation where the user is placed.
- the context information may be recognized from various information related to the user, or may be input by the user. Hereinafter, an example of the context information will be described.
- the context information may include information indicating user behavior.
- Recognized actions can be classified into basic actions, which are elementary units of behavior, and higher-order actions, which are combinations of basic actions.
- Basic actions include, for example, sitting, standing still, walking, running, riding an elevator (up/down), riding an escalator (up/down), and riding a vehicle (bicycle, train, car, bus, and other vehicles).
- Higher-order actions include, for example, moving (going to school, going home, …, other movements), studying, working (manual labor, desk work, … (more detailed work types)), playing (types of play), sports (types of sports), shopping (genres of shopping), meals (contents of meals), and the like.
- information indicating the user's behavior can be recognized based on sensor information detected by an acceleration sensor, a gyro sensor, a geomagnetic sensor, or the like included in the user terminal 200 carried by the user.
- information indicating the user's behavior can be recognized based on an image recognition result of a captured image captured by, for example, a monitoring camera.
- the information indicating the user's behavior can be recognized based on the application being used on the user terminal 200. For example, when a running application is in use on the user terminal 200, it is recognized that the user is running.
- the information indicating the user's behavior can be recognized based on, for example, a status setting such as “at work / away” made in a messaging application used on the user terminal 200.
- these recognition methods may be combined and other arbitrary recognition methods may be used.
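As a hedged illustration of how a basic action might be recognized from acceleration sensor information, the sketch below classifies a window of accelerometer magnitude samples by signal variability; the thresholds and labels are illustrative assumptions, not taken from the disclosure:

```python
from statistics import pstdev

def classify_basic_action(accel_magnitudes):
    """Classify a window of accelerometer magnitude samples (m/s^2) into a basic action.

    The standard deviation of the window serves as an activity-intensity feature;
    the thresholds and labels are illustrative assumptions, not from the disclosure.
    """
    intensity = pstdev(accel_magnitudes)
    if intensity < 0.5:
        return "still"      # sitting or standing still: magnitude stays near gravity
    elif intensity < 3.0:
        return "walking"
    return "running"

# a near-constant window around 9.8 m/s^2 (gravity only) indicates the user is at rest
print(classify_basic_action([9.8, 9.9, 9.7, 9.8, 9.9, 9.8]))  # still
```

A production recognizer would of course combine several sensors and a learned classifier rather than fixed thresholds.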
- the context information may include information indicating the position of the user.
- the information indicating the position may include information indicating relative coordinates from a certain object, whether indoor or outdoor, height, etc., in addition to geographical absolute coordinates.
- the information indicating the position may include information indicating a latitude, a longitude, an altitude, an address, a GEO tag, a building name, a store name, and the like.
- information indicating the position can be recognized by a positioning technique using GPS, an autonomous positioning technique, or the like.
- the information indicating the position can be recognized from sensor information such as an acceleration sensor, a gyro sensor, and a geomagnetic sensor included in the user terminal 200.
- the information indicating the position can be recognized by a human detection technique, a face recognition technique, or the like based on a captured image of the environment-installed camera.
- the information indicating the position can be recognized by a communication result or the like regarding an environment-installed communication apparatus that can estimate the distance (that is, proximity relationship) with the user terminal 200 such as Bluetooth or a beacon.
- the information indicating the position can be recognized from the use result of the service using the position information by the user terminal 200 or the like.
- these recognition methods may be combined and other arbitrary recognition methods may be used.
- the context information may include information indicating the line of sight of the user.
- the information indicating the line of sight may include an object noted by the user, features of other users noted by the user, context information of other users noted, context information of the user at the time of attention, and the like.
- the information indicating the line of sight may indicate what kind of behavior the user himself/herself is performing when paying attention, and what kind of behavior the other users being watched are performing.
- the information indicating the line of sight can be recognized from the line-of-sight direction obtained from the captured image and depth information of a stereo camera provided in the HMD with the user's eyeball as its imaging range.
- the information indicating the line of sight can also be recognized from the position and orientation of the user terminal 200 in real space, recognized by a known image recognition technique such as SfM (Structure from Motion) or SLAM (Simultaneous Localization and Mapping).
- information indicating the line of sight can be recognized based on myoelectricity around the eyeball.
- the information indicating the line of sight can be recognized by a face recognition technique, a line-of-sight detection technique, or the like based on a captured image of the environment-installed camera.
- these recognition methods may be combined and other arbitrary recognition methods may be used.
- the context information may include information output by the user.
- the information output by the user may include information indicating the content of the user's utterance, written text, and the like.
- information output by the user can be recognized by voice recognition for voice acquired by the microphone of the user terminal 200.
- the information output by the user can be recognized by voice recognition for voice acquired by an environment-installed microphone, a laser Doppler sensor, or the like.
- the information output by the user can be recognized by image recognition for the captured image of the mouth imaged by the environment-installed camera.
- the information output by the user can also be recognized by the content of the mail transmitted by the user, the content of the message, the posting to the SNS, the search keyword, and the like.
- these recognition methods may be combined and other arbitrary recognition methods may be used.
- the context information may include information indicating the user status.
- the information indicating the user's state may include information indicating the user's emotion, health condition, whether or not the user is sleeping, and the like.
- information indicating the state of the user can be recognized using biological information. Also, information indicating the user's state can be recognized based on the facial expression of the user imaged by the camera installed in the environment. Also, information indicating the user's state can be recognized based on the content spoken by the user and the written text. In addition, these recognition methods may be combined and other arbitrary recognition methods may be used.
- the context information may include user attribute information.
- Attribute information includes gender, birthday (age), occupation, career, address (home, school, workplace, etc.), hobbies, favorite foods, favorite content (music, movies, books, etc.), life log (frequently visited places, travel history, etc.), illness, medical history, and the like.
- the attribute information can be recognized based on input to the application by the user, posting to the SNS, and the like. Further, the attribute information can be recognized based on user feedback (for example, product, content purchase, reproduction history, evaluation information) to the shopping service and the content distribution service. Further, the attribute information can be recognized based on the usage history and operation history of the user terminal 200.
- the attribute information can be recognized by image recognition for a captured image captured by an environment-installed camera. More specifically, for example, the age and sex can be recognized from the image of the face portion, and the occupation can be recognized from the image of the clothing portion. Further, the attribute information can be recognized by a time series change of the position information.
- a place that stays for a long time at night may be recognized as a home, and a place that stays for a long time in the day may be recognized as a work or school.
- these recognition methods may be combined and other arbitrary recognition methods may be used.
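The heuristic above, where a place occupied for long periods at night is treated as home and one occupied in the daytime as a workplace or school, could be sketched as follows; the hour boundaries are illustrative assumptions:

```python
from collections import Counter

def infer_home_and_work(stay_records):
    """Label places from (place, hour_of_day) stay records (illustrative heuristic).

    Hours 0-5 and 21-23 are treated as night and hours 9-17 as daytime
    (assumed boundaries); the place with the most night-time records is
    labeled "home", and the one with the most daytime records "work/school".
    """
    night = Counter()
    day = Counter()
    for place, hour in stay_records:
        if hour < 6 or hour >= 21:
            night[place] += 1
        elif 9 <= hour < 18:
            day[place] += 1
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return {"home": home, "work/school": work}

records = [("apartment", 23), ("apartment", 2), ("office", 10), ("office", 14), ("cafe", 19)]
print(infer_home_and_work(records))  # {'home': 'apartment', 'work/school': 'office'}
```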
- the context information may include information indicating the user's human relationships.
- the information indicating the human relationship may include information indicating who the person is with, family relationship, friend relationship, intimacy, and the like.
- information indicating a human relationship can be recognized based on an input to an application by a user, a posting to an SNS, or the like.
- the information indicating the human relationship can be recognized based on the length of time of being together, the facial expression when being together, and the like.
- the information indicating the human relationship can be recognized based on whether or not the home is the same, whether or not the workplace or the school is the same.
- the context information only needs to include at least one of the information described above.
- Context information recognition processing may be performed by the user terminal 200 and the recognition device 300, or may be performed by the server 100.
- since transmitting raw sensor information between the user terminal 200 or the recognition device 300 and the server 100 would increase traffic, it is desirable from the viewpoint of traffic that the recognition processing be performed by the user terminal 200 and the recognition device 300.
- the recognized context information may be information that is not directly predicted as long as it is useful for prediction of context information. For example, even when the prediction of the utterance is not performed, the content of the utterance may be recognized when the next action is predicted from the content of the utterance.
- the context information can be recognized without active input by the user. Thereby, the burden on the user is reduced.
- Information that is actively input by the user, such as schedule input and destination input to a car navigation system, may also be used for context information recognition.
- the server 100 (for example, the learning unit 142) learns the predictor. Then, the server 100 (for example, the generation unit 143) performs prediction of context information using a predictor.
- Predictor learning can be performed using techniques such as a state transition model, a neural network, deep learning, an HMM (Hidden Markov Model), the k-nearest neighbor algorithm, a kernel method, an SVM (Support Vector Machine), and the like.
- the predictor may be learned for each user, or may be learned among a plurality of users or in common for all users. For example, a common predictor can be learned for each family, each workplace, and each friend. Moreover, learning of the predictor may be performed for each recognition unit 220, that is, for each user terminal 200 or recognition device 300.
- the predictor may be regarded as a model expressing dependency relationships (for example, time-series changes) of a plurality of context information generated based on the history of context information.
- when context information acquired in real time is input to a predictor, context information having a dependency relationship with it (for example, context information predicted to be acquired at the next time in the time series) is output.
- when the user's behavior at the current time, the user's position, the content output by the user, the user's line of sight, the user's state, the user's attribute information, and the user's human relationships are input to the predictor, prediction results for the user's behavior, position, output content, line of sight, state, attribute information, and human relationships are output.
- the generation unit 143 generates prediction information indicating the prediction result output from the predictor in this way.
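As one concrete, purely illustrative realization of such a predictor, a first-order state transition model can be learned by counting transitions between consecutive context labels in the history, then emitting the most probable successor of the current context. The class below is a sketch under those assumptions, not the disclosed implementation:

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order state transition model over context labels (illustrative sketch)."""

    def __init__(self):
        # current context label -> Counter of observed next context labels
        self.transitions = defaultdict(Counter)

    def learn(self, history):
        """history: time-ordered context labels, e.g. ["sitting", "walking", ...]."""
        for current, nxt in zip(history, history[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        """Return (most probable next context, estimated probability), or (None, 0.0)."""
        counts = self.transitions[current]
        total = sum(counts.values())
        if total == 0:
            return None, 0.0
        label, count = counts.most_common(1)[0]
        return label, count / total

predictor = MarkovPredictor()
predictor.learn(["sitting", "sitting", "walking", "riding_train", "sitting", "walking"])
print(predictor.predict("sitting"))  # "walking" follows "sitting" in 2 of 3 observed cases
```

In practice the disclosure's predictor would take richer inputs (position, line of sight, attributes, and so on) and could be any of the models listed above, such as an HMM or a neural network.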
- the server 100 (for example, the generation unit 143 and the output control unit 144) displays, to the first user, prediction information related to the context information of the first user. For this purpose, the server 100 selects which user's prediction information is to be displayed to the first user, that is, which user is to be the second user.
- the server 100 displays the prediction information of the second user selected from a plurality of other users based on the context information of the first user.
- thereby, the first user can see the prediction information of users corresponding to the situation (that is, the context information) in which the first user is placed. This prevents an excessive number of prediction information items from being displayed and the real space image from becoming cluttered.
- the server 100 may display the prediction information of a second user determined to be related to the context information of the first user.
- the first user is the user who is viewing the real space image 10.
- thereby, the first user can see the prediction information of users related to the situation in which the first user is currently placed or will be placed in the future.
- the server 100 may display the prediction information of the second user determined to be focused on by the first user.
- for example, another user who is determined to be the focus of the first user's attention and who is sitting in a seat where the first user wants to sit is selected as the second user.
- thereby, the first user can see the prediction information of the user he or she wants to know about.
- the server 100 may display the prediction information of a second user whose context information is determined to be similar to that of a third user the first user paid attention to in the past.
- thereby, the first user can see the prediction information of users similar to those whose prediction information the first user wanted to know in the past.
- the server 100 (for example, the generation unit 143 and the output control unit 144) displays, to the first user, prediction information related to the context information of the first user. At that time, the server 100 displays prediction information generated based on the context information of the second user. That is, the server 100 controls the content of the displayed prediction information based on the context information of both the first user and the second user. Thereby, the first user can see prediction information whose content corresponds to the situation in which the first user is placed.
- the server 100 may display different prediction information depending on whether or not the second user is moving. For example, when it is determined that the second user is moving, the server 100 displays prediction information indicating a prediction result of the second user's movement trajectory. Thereby, the first user can know in advance whether the movement trajectories of the first user and the second user will cross. On the other hand, when it is determined that the second user is not moving, the server 100 displays prediction information indicating the time at which the second user is predicted to start moving. Thereby, the first user can, for example, move while grasping the remaining time until the second user starts moving. Specific display examples will be described in detail later.
- the remaining time in the prediction information can also be understood as the remaining time until an arbitrary action starts or ends.
- the remaining time until the movement is started is the remaining time until the stationary action ends.
- the remaining time until the movement is stopped after the movement is started can be displayed as the prediction information.
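One hypothetical way to produce such a remaining-time figure is to compare how long the ongoing action has lasted against how long the same action typically lasted in the second user's history; the function below is an illustrative sketch, not the disclosed method:

```python
from statistics import median

def estimate_remaining_time(past_durations_min, elapsed_min):
    """Estimate remaining minutes of an ongoing action from its historical durations.

    past_durations_min: durations (minutes) this action took in the user's history.
    elapsed_min: minutes the current occurrence has already lasted.
    Returns a non-negative estimate (median historical duration minus elapsed time),
    or None when there is no history to learn from.
    """
    if not past_durations_min:
        return None  # no history: fall back to a similar user's predictor
    typical = median(past_durations_min)
    return max(0.0, typical - elapsed_min)

# e.g. past lunches lasted 25, 30, and 35 minutes; 20 minutes have elapsed
print(estimate_remaining_time([25, 30, 35], 20))  # -> 10
```

The no-history fallback mirrors the disclosure's idea of borrowing a predictor from a third user with similar attributes.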
- the server 100 may display prediction information corresponding to the human relationship between the first user and the second user. For example, the server 100 may display detailed prediction information for a second user who is close to the first user, and may display simplified prediction information, or hide it, for a second user who is distant from the first user. Thereby, the first user does not have to see unnecessary prediction information, and the privacy of the second user can be protected.
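Such human-relationship-dependent display could be driven by an intimacy score; the thresholds and labels below are illustrative assumptions only:

```python
def detail_level(intimacy):
    """Choose how much of the second user's prediction to show to the first user.

    intimacy: a score from 0.0 (stranger) to 1.0 (family or close friend);
    the thresholds and labels are illustrative assumptions, not from the disclosure.
    """
    if intimacy >= 0.7:
        return "detailed"    # e.g. destination and exact remaining time
    if intimacy >= 0.3:
        return "simplified"  # e.g. only "moving soon"
    return "hidden"          # protect the second user's privacy from strangers

print(detail_level(0.9), detail_level(0.5), detail_level(0.1))  # detailed simplified hidden
```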
- when the history of the context information of the second user is absent or insufficient, the server 100 may display prediction information generated based on the history of the context information of a third user whose attribute information is determined to be similar to that of the second user. For example, for a second user who has come to a cafe for the first time, the server 100 predicts the stay time and behavior using the predictor of a third user of the same age and occupation, based on the recognition result of the second user's age and occupation, and displays the resulting prediction information. Note that the case where the history of context information is not sufficient indicates, for example, the case where the prediction accuracy of the predictor falls below a threshold value. Thereby, even when learning regarding the second user is insufficient, the first user can be presented with suitable prediction information.
- the server 100 may preferentially display prediction information indicating a prediction result of context information with high prediction accuracy. This prevents the first user from being confused by incorrect prediction information.
- the server 100 may display prediction information of a plurality of second users on a map instead of on the real space image. Thereby, the first user can visually grasp the prediction information of the plurality of second users from a bird's-eye view. Such a display is useful, for example, for an amusement park administrator.
- the server 100 may display a plurality of pieces of prediction information for one second user.
- the server 100 may display prediction information indicating prediction results of a plurality of pieces of context information of different types.
- FIG. 3 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- in the real space image 20 shown in FIG. 3, the inside of a train as seen from the first user riding the train is displayed.
- in addition, for the second user 21, who is determined to be sitting in a seat and not moving, prediction information 22 indicating the predicted time until the second user 21 leaves the seat (that is, gets off; in other words, the time at which the second user 21 is predicted to start moving) is displayed.
- according to the prediction information 22, the second user 21 is predicted to get off after two minutes. This allows the first user to prepare to make way for the second user getting off, or to prepare to sit in the seat in which the second user is sitting. For the second user as well, since surrounding users can be expected to prepare in advance for his or her getting off, the second user can get off more comfortably.
- in this way, disclosing the prediction information of the second user to the first user can be beneficial for both the first user and the second user.
- FIG. 4 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- in the real space image 30 shown in FIG. 4, for the second users 31 and 32, who are determined to be sitting in seats and not moving, prediction information 33 and 34 indicating the predicted time until they stand up from their seats (that is, start moving) is displayed.
- the prediction information 33 and 34 it is predicted that the second users 31 and 32 will stand after 25 minutes.
- in addition, the ratio of the remaining time or the elapsed time to the total time of the action of sitting in the seat is displayed as a bar.
- the display using the bar can take various forms as shown in FIG.
- FIG. 5 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- FIG. 5 shows a specific example of prediction information displayed in a bar.
- in the prediction information 35, the length from the start of the current action to the predicted end time is defined as 100% (reference numeral 36), and the elapsed time from the start of the current action is expressed by the length of the bar (reference numeral 37).
- the example shown in FIG. 4 corresponds to this.
- the prediction information 38 illustrates a display method that can be used when the prediction accuracy is low, expressing that the prediction has a range.
- for example, the predicted time can expand or contract according to the most recently acquired context information. By displaying such a range, the first user can anticipate fluctuations in the approximate end time.
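The bar displays of FIG. 5 can be sketched as simple ratio computations. This is an illustrative assumption about the arithmetic, not taken from the embodiment: the bar length is the elapsed time over the predicted total duration, and a low-accuracy prediction with minimum and maximum end times yields a range of fill ratios.

```python
def bar_fill_ratio(start, now, predicted_end):
    """Elapsed time over the start-to-predicted-end span, clamped to [0, 1]
    (100% corresponds to reference numeral 36 in FIG. 5)."""
    total = predicted_end - start
    if total <= 0:
        return 1.0
    return max(0.0, min(1.0, (now - start) / total))

def bar_fill_range(start, now, end_min, end_max):
    """For uncertain predictions (cf. prediction information 38): the fill
    ratio spans a range between the latest and earliest predicted ends."""
    return (bar_fill_ratio(start, now, end_max),
            bar_fill_ratio(start, now, end_min))
```

For example, 15 minutes into an action predicted to last 60 minutes, the bar would be 25% full; with an uncertain end between 60 and 120 minutes, the fill at minute 30 spans 25% to 50%.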
- in the above, an example has been described in which prediction information indicating the remaining time until the start of movement is displayed for a second user who is not moving, but the present technology is not limited to such an example.
- information indicating the remaining time until the second user starts (performs) or ends an arbitrary action may be displayed.
- information indicating the remaining time until the second user speaks a specific word may be displayed.
- FIG. 6 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- the state of the child who is taking a walk with the family as viewed from the parent who is the first user is displayed.
- prediction information 42 indicating the remaining time until the child 41 as the second user performs a specific action is displayed.
- according to the prediction information 42, it is shown that the child 41 is predicted to say "I am tired and cannot walk," to want to go to the toilet after 1 hour and 20 minutes, and to become hungry after 30 minutes.
- such a display allows the parent to act in advance to meet the child's requests. In this way, trouble during group action is prevented in advance by the users acting together knowing each other's prediction information.
- FIG. 7 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- prediction information 54 and 55 of the vehicles 52 and 53 (specifically, of the second users driving the vehicles 52 and 53) as seen from the first user driving the vehicle 51 is displayed.
- prediction information 54 indicating the predicted time until movement starts is displayed. Since the remaining time has a margin of 5 minutes and 3 seconds, the first user can pass the vehicle 52 with peace of mind.
- prediction information 55 indicating a prediction result of the movement trajectory is displayed. Since the first user can easily know that the movement trajectory crosses his or her direction of travel, the first user can stop safely. By displaying such prediction information, traffic safety is improved.
- FIG. 8 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- prediction information 63 indicating the predicted movement trajectory of the motorcycle 62 (specifically, of the second user driving the motorcycle 62) as seen from the first user driving the vehicle 61 is displayed. According to the prediction information 63, since the motorcycle 62 will continue straight ahead, the first user can drive the vehicle 61 straight ahead without worrying about a lane change by the motorcycle 62. By displaying such prediction information, driving comfort is improved.
- FIG. 9 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- the real space image 70 shown in FIG. 9 displays prediction information 73 and 74 indicating prediction results of the movement trajectories of the walking second users 71 and 72 as seen from the first user walking along the road. Thereby, the first user can walk so as not to collide with the second users 71 and 72. Moreover, when the first user pays attention to the second user 71, prediction information 75 showing a more detailed prediction result may be displayed.
- the prediction information 75 includes a list of the second user 71's planned actions, indicating that the second user 71 will arrive at AA station after 5 minutes, move by train, arrive at BB station after 15 minutes, move on foot, and arrive at CC company after 25 minutes.
- FIG. 10 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- the state in the elevator is displayed.
- prediction information 81 indicating a prediction result of how many second users will pay attention to the first user is displayed. More specifically, the prediction information 81 displays a time-series change in the number of people predicted to pay attention to the first user. Thereby, the first user can, for example, tidy his or her appearance before the door opens and attention is drawn to him or her.
- FIG. 11 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- information 39 indicating the current emotion of the second user 31 is displayed in addition to the real space image 30 illustrated in FIG. 4.
- the information 39 indicating emotion indicates that the second user 31 is currently delighted.
- in addition to the prediction information 33, the server 100 may display the context information of the second user 31. Thereby, the first user can take actions according to the situation in which the second user is currently placed, for example, when the second user is in trouble.
- the server 100 can display various prediction information.
- for example, the server 100 may display keywords that the second user is predicted to speak. In that case, the first user can use those keywords to liven up the conversation.
- the server 100 (for example, the generation unit 143 and the output control unit 144) can set whether a user's prediction information may be output to other users. From the viewpoint of displaying prediction information to the first user, the server 100 can set whether the prediction information of the second user may be output to the first user.
- the server 100 displays only the prediction information whose display the second user has permitted. Thereby, the second user can protect his or her privacy.
- permission may be granted based on an instruction from the second user. In that case, the second user directly sets whether to disclose the prediction information.
- permission may also be granted based on a setting relating to the position of the second user. For example, it is possible to make a setting such that disclosure of detailed prediction information is not permitted the closer the second user is to his or her home.
- permission may also be granted based on a setting relating to the human relationship between the first user and the second user. For example, it is possible to set predictions up to 10 minutes ahead to be disclosed to everyone, predictions up to 1 hour ahead to be disclosed only to friends, and predictions beyond that to be disclosed to no one.
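The horizon-based setting in the example above can be sketched as a small rule table. The rule structure and the relationship labels ("stranger", "friend") are illustrative assumptions, not part of the embodiment.

```python
# Hypothetical disclosure setting: predictions up to 10 minutes ahead are
# visible to everyone, predictions up to 1 hour ahead only to friends, and
# anything further to no one.
from datetime import timedelta

DISCLOSURE_RULES = [
    (timedelta(minutes=10), {"stranger", "friend"}),  # within 10 min: everyone
    (timedelta(hours=1), {"friend"}),                 # within 1 hour: friends
]

def may_disclose(horizon, relationship):
    """horizon: how far into the future the prediction reaches;
    relationship: the viewer's relationship to the prediction's owner."""
    for limit, allowed in DISCLOSURE_RULES:
        if horizon <= limit:
            return relationship in allowed
    return False  # beyond the last rule: disclosed to no one
```

The first matching horizon decides visibility, so a 30-minute prediction is visible to a friend but hidden from a stranger.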
- the server 100 (for example, the generation unit 143 and the output control unit 144) may filter the generated prediction information. For example, the server 100 displays a part of the generated prediction information and does not display the other part.
- the server 100 provides an interaction function between the displayed prediction information and the first user.
- for example, when the first user deletes displayed prediction information, the server 100 stores the deleted prediction information and thereafter does not display prediction information of the same type. Thereby, only appropriate prediction information is presented.
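The deletion interaction above can be sketched as a filter that remembers suppressed types. The dict-based record layout and the class name are assumptions made for the illustration.

```python
# Hypothetical sketch: once the first user deletes a piece of displayed
# prediction information, predictions of the same type are suppressed.
class PredictionFilter:
    def __init__(self):
        self._suppressed_types = set()

    def delete(self, prediction):
        """Called when the first user deletes a displayed prediction."""
        self._suppressed_types.add(prediction["type"])

    def visible(self, predictions):
        """Return only predictions whose type has not been suppressed."""
        return [p for p in predictions
                if p["type"] not in self._suppressed_types]
```

Deleting one emotion prediction, for instance, would hide all future emotion predictions from that user's display.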
- the server 100 may display, for the first user himself or herself, prediction information indicating a prediction result of the first user's own context information. That is, the server 100 may display the first user's prediction information, which is displayed for other users, so that the first user can see it as well. Thereby, the first user can know what prediction information about him or her is disclosed to other users.
- FIG. 12 is a diagram for explaining a display example by the information processing system 1 according to the present embodiment.
- the state in the train viewed from the first user sitting on the train seat is displayed.
- in addition, prediction information 92 of the second user 91 and prediction information 93 and 94 of the first user are displayed.
- the prediction information 92 indicates that the second user 91 is predicted to get off after 30 minutes.
- the prediction information 93 indicates that the first user is predicted to have lunch one hour later.
- the prediction information 94 indicates that the first user is predicted to get off after two minutes.
- the prediction information 94 may be emphasized, as shown in FIG. 12, when it attracts the attention of other users (for example, when it is displayed to many other users). Thereby, the first user can know which of his or her prediction information the second users will act on.
- the prediction information may be corrected by the user.
- the server 100 may display prediction information modified based on an instruction from the second user. For example, in the example shown in FIG. 12, when there is an error in the prediction information 93 or 94, the first user corrects it. This prevents other users from being misled by incorrect prediction information. Further, when there is an error in the prediction information, emphasizing prediction information that is attracting the attention of other users, as in the example shown in FIG. 12, may prompt the first user to correct it.
- for example, when the first user will actually get off after 30 minutes, correcting the prediction information 94 from "after 2 minutes" to "after 30 minutes" prevents surrounding users from needlessly preparing for the first user to get off. In this way, it is expected that more accurate information will be presented for prediction information that attracts a high degree of attention.
- FIG. 13 is a flowchart illustrating an example of a predictor learning process executed in the server 100 according to the present embodiment.
- the acquisition unit 141 acquires context information from the user terminal 200 and the recognition device 300 via the communication unit 110 (step S102) and stores it in the context information DB 120 (step S104). Then, the learning unit 142 learns a predictor based on the history of context information accumulated in the context information DB 120 (step S106).
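The flow of FIG. 13 (steps S102 to S106) can be sketched as follows. The embodiment does not prescribe a learning algorithm, so a simple action-frequency model stands in for the predictor here; the function names and record layout are assumptions.

```python
# Minimal sketch of the predictor learning flow: acquire context information,
# accumulate it, and (re)learn a per-user predictor from the history.
from collections import Counter, defaultdict

context_db = defaultdict(list)   # stands in for the context information DB 120
predictor_db = {}                # stands in for the predictor DB 130

def acquire_and_store(user_id, context):
    """Steps S102 and S104: acquire context information and store it."""
    context_db[user_id].append(context)

def learn_predictor(user_id):
    """Step S106: learn a predictor from the accumulated history."""
    predictor_db[user_id] = Counter(c["action"] for c in context_db[user_id])

def predict_action(user_id):
    """Use the learned predictor: here, the most frequent past action."""
    counts = predictor_db.get(user_id)
    return counts.most_common(1)[0][0] if counts else None
```

In practice the predictor would be a richer learned model; the point of the sketch is only the acquire-store-learn loop between the two databases.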
- FIG. 14 is a flowchart illustrating an example of the flow of the prediction information display process executed in the server 100 according to the present embodiment.
- first, the generation unit 143 selects a second user (step S202). This process will be described in detail later with reference to FIG. 15. Next, the generation unit 143 generates prediction information of the second user (step S204). This process will be described in detail later with reference to FIG. 16. Then, the output control unit 144 displays the generated prediction information.
- alternatively, the generation unit 143 may generate prediction information for every user who can be the second user (for example, all other users included in the real space image) and then select the prediction information to be displayed from among them.
- FIG. 15 is a flowchart illustrating an example of the flow of a second user selection process executed in the server 100 according to the present embodiment. This flow expresses step S202 in FIG. 14 in detail.
- first, the generation unit 143 selects candidates based on information indicating human relationships (step S302). For example, the generation unit 143 selects other users having a relationship such as friendship with the first user as candidates for the second user. Next, the generation unit 143 narrows the candidates based on information indicating position (step S304). For example, the generation unit 143 selects other users who are close to the first user as candidates for the second user.
- next, the generation unit 143 selects based on information indicating the line of sight (step S308). For example, the generation unit 143 sorts the users selected as candidates for the second user so far according to the degree to which the first user has paid attention to them, and selects a predetermined number of them, in descending order of the degree of attention, as the second users.
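The selection pipeline of FIG. 15 can be sketched as successive filters followed by a sort. The field names, the Euclidean distance check, and the thresholds are illustrative assumptions made for the example.

```python
# Hypothetical sketch of steps S302-S308: narrow second-user candidates by
# human relationship, then by position, then sort by the first user's degree
# of attention and keep a predetermined number.
import math

def select_second_users(first_user, others, max_count=2, max_distance=20.0):
    # S302: candidates having a relationship (e.g. friendship) with the first user.
    candidates = [u for u in others if u["id"] in first_user["friends"]]
    # S304: candidates close to the first user's position.
    fx, fy = first_user["pos"]
    candidates = [u for u in candidates
                  if math.hypot(u["pos"][0] - fx, u["pos"][1] - fy) <= max_distance]
    # S308: sort by the first user's degree of attention; take the top entries.
    candidates.sort(key=lambda u: u["attention"], reverse=True)
    return [u["id"] for u in candidates[:max_count]]
```

Each stage shrinks the candidate set, so only nearby acquaintances the first user actually looks at end up with displayed prediction information.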
- FIG. 16 is a flowchart illustrating an example of the flow of prediction information generation processing executed in the server 100 according to the present embodiment. This flow expresses step S204 in FIG. 14 in detail.
- first, the generation unit 143 determines whether or not the second user is moving (step S402). When it is determined that the second user is moving (step S402/YES), the generation unit 143 generates prediction information indicating a prediction result of the movement trajectory of the second user; when it is determined that the second user is not moving (step S402/NO), it generates prediction information indicating the time at which the second user is predicted to start moving.
- next, the generation unit 143 adjusts the content of the prediction information based on the disclosure permission settings (step S408). For example, the generation unit 143 simplifies or conceals the content of the prediction information according to the human relationship between the first user and the second user.
- next, the generation unit 143 adjusts the content of the prediction information based on the degree of attention (step S410). For example, the generation unit 143 makes the content of the prediction information more detailed for a second user to whom the first user pays a high degree of attention.
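The generation flow of FIG. 16 can be sketched as a branch plus two adjustments. The record layout and the 0.8 attention threshold are assumptions introduced for the illustration.

```python
# Hypothetical sketch: branch on whether the second user is moving (S402),
# then adjust the content by the disclosure permission setting (S408) and by
# the first user's degree of attention (S410).
def generate_prediction(second_user, disclosure_permitted, attention):
    if second_user["moving"]:                        # S402 / YES
        info = {"kind": "trajectory", "value": second_user["predicted_path"]}
    else:                                            # S402 / NO
        info = {"kind": "start_time", "value": second_user["predicted_start"]}
    if not disclosure_permitted:                     # S408: simplify or conceal
        info["value"] = None
    info["detailed"] = attention >= 0.8              # S410: detail when attended
    return info
```

A moving pedestrian thus yields a trajectory record while a seated passenger yields a start-time record, with the same downstream adjustments applied to both.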
- FIG. 17 is a block diagram illustrating an example of a hardware configuration of the information processing apparatus according to the present embodiment.
- the information processing apparatus 900 illustrated in FIG. 17 can realize, for example, the server 100, the user terminal 200, the recognition apparatus 300, the output apparatus 400, or the external device 500 illustrated in FIG.
- Information processing by the server 100, the user terminal 200, the recognition device 300, the output device 400, or the external device 500 according to the present embodiment is realized by cooperation between software and hardware described below.
- the information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a.
- the information processing apparatus 900 includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, and a communication device 913.
- the information processing apparatus 900 may include a processing circuit such as a DSP or an ASIC in place of or in addition to the CPU 901.
- the CPU 901 functions as an arithmetic processing unit and a control unit, and controls the overall operation in the information processing apparatus 900 according to various programs. Further, the CPU 901 may be a microprocessor.
- the ROM 902 stores programs used by the CPU 901, calculation parameters, and the like.
- the RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
- the CPU 901 can form the processing unit 140 shown in FIG.
- the CPU 901, ROM 902, and RAM 903 are connected to each other by a host bus 904a including a CPU bus.
- the host bus 904a is connected to an external bus 904b, such as a PCI (Peripheral Component Interconnect/Interface) bus, via the bridge 904.
- the host bus 904a, the bridge 904, and the external bus 904b do not necessarily have to be configured separately, and these functions may be mounted on one bus.
- the input device 906 is realized by a device through which the user inputs information, such as a mouse, a keyboard, a touch panel, buttons, a microphone, switches, and levers.
- the input device 906 may be, for example, a remote control device using infrared rays or other radio waves, or may be an external connection device such as a mobile phone or a PDA that supports the operation of the information processing device 900.
- the input device 906 may include, for example, an input control circuit that generates an input signal based on information input by the user using the above-described input means and outputs the input signal to the CPU 901.
- a user of the information processing apparatus 900 can input various data and instruct a processing operation to the information processing apparatus 900 by operating the input device 906.
- the input device 906 may be formed by a sensor that senses information about the user.
- the input device 906 includes various sensors such as an image sensor (for example, a camera), a depth sensor (for example, a stereo camera), an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, a sound sensor, a distance sensor, and a force sensor. Can be included.
- the input device 906 may obtain information related to the state of the information processing apparatus 900 itself, such as the posture and moving speed of the information processing apparatus 900, and information related to the surrounding environment of the information processing apparatus 900, such as the brightness and noise around it.
- the input device 906 may also include a GPS sensor that receives GPS signals and measures the latitude, longitude, and altitude of the device.
- the input device 906 can form, for example, the recognition unit 220 and the recognition unit 320 shown in FIG.
- the output device 907 is formed of a device that can notify the user of the acquired information visually or audibly.
- examples of such devices include display devices such as CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, laser projectors, LED projectors, and lamps; audio output devices such as speakers and headphones; printer devices; and the like.
- the output device 907 outputs results obtained by various processes performed by the information processing device 900.
- the display device visually displays results obtained by various processes performed by the information processing device 900 in various formats such as text, images, tables, and graphs.
- the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs it aurally.
- the display device and the audio output device can form, for example, the output unit 230 and the output unit 420 shown in FIG.
- the storage device 908 is a data storage device formed as an example of a storage unit of the information processing device 900.
- the storage apparatus 908 is realized by, for example, a magnetic storage device such as an HDD, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
- the storage device 908 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
- the storage device 908 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the storage device 908 can form, for example, the context information DB 120 and the predictor DB 130 illustrated in FIG.
- the drive 909 is a storage medium reader / writer, and is built in or externally attached to the information processing apparatus 900.
- the drive 909 reads information recorded on a removable storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903.
- the drive 909 can also write information to a removable storage medium.
- connection port 911 is an interface connected to an external device, and is a connection port with an external device capable of transmitting data by USB (Universal Serial Bus), for example.
- the communication device 913 is a communication interface formed by a communication device or the like for connecting to the network 920, for example.
- the communication device 913 is, for example, a communication card for wired or wireless LAN (Local Area Network), LTE (Long Term Evolution), Bluetooth (registered trademark), or WUSB (Wireless USB).
- the communication device 913 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various communication, or the like.
- the communication device 913 can transmit and receive signals and the like according to a predetermined protocol such as TCP / IP, for example, with the Internet and other communication devices.
- the communication device 913 may form, for example, the communication unit 110, the communication unit 210, the communication unit 310, and the communication unit 410 illustrated in FIG.
- the network 920 is a wired or wireless transmission path for information transmitted from a device connected to the network 920.
- the network 920 may include a public line network such as the Internet, a telephone line network, and a satellite communication network, various LANs including the Ethernet (registered trademark), a wide area network (WAN), and the like.
- the network 920 may include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
- each of the above components may be realized using a general-purpose member, or may be realized by hardware specialized for the function of each component. Therefore, it is possible to change the hardware configuration to be used as appropriate according to the technical level at the time of carrying out this embodiment.
- a computer program for realizing each function of the information processing apparatus 900 according to the present embodiment as described above can be produced and mounted on a PC or the like.
- a computer-readable recording medium storing such a computer program can be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
- the above computer program may be distributed via a network, for example, without using a recording medium.
- as described above, the information processing system 1 can display, for the first user, prediction information of the second user that is related to the context information of the first user and is generated based on the history of the context information of the second user.
- thereby, the prediction information of the second user is presented as information useful for the first user.
- since the first user can visually recognize the future behavior of the second user, smooth communication becomes possible, and it also becomes easy for the first user to make his or her own action plan.
- the information processing system 1 displays the prediction information generated based on the context information of the second user as the prediction information related to the context information of the first user.
- thereby, the first user can know prediction information whose content corresponds to the situation in which he or she is placed. For example, when the first user is driving a car, the course of another car is visualized and smooth traffic is realized. Further, when the first user wants to have a conversation, he or she can know whether it is a good time to speak to the other party. In addition, when the first user is on a train, it is possible to know in advance the seat availability in a crowded car. Further, when the first user goes to a place for the first time, he or she can easily reach the destination by following a second user heading to the same destination.
- further, the information processing system 1 displays the prediction information of second users selected from among a plurality of other users based on the context information of the first user. This prevents an excessive amount of prediction information from being displayed and the real space image from becoming cluttered.
- the user terminal 200 may be realized as an immersive (video-through) HMD that displays a captured image of a real space and superimposes an AR virtual object on the captured image of the real space.
- in an immersive HMD, an image of a virtual space may be used instead of a captured image of the real space.
- the user terminal 200 may be realized as a projection-type HMD provided with an LED light source or the like that projects an image directly on the user's retina.
- the server 100 is formed as a single device.
- the present technology is not limited to such an example.
- part or all of the server 100 may be included in different devices.
- the context information DB 120 and the predictor DB 130 may be realized as devices different from the server 100.
- the present technology is not limited to such an example.
- part or all of the server 100 may be included in the user terminal 200.
- in that case, accumulation of context information and/or learning of a predictor can be performed in the user terminal 200.
- (1) An information processing apparatus comprising: an output control unit that outputs, to a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being related to context information of the first user and generated based on a history of the context information of the second user.
- (2) The information processing apparatus according to (1), wherein the output control unit displays the prediction information of the second user selected from a plurality of other users based on the context information of the first user.
- (3) The information processing apparatus according to (2), wherein the output control unit displays the prediction information of the second user determined to be associated with the first user in terms of the context information.
- (4) The context information includes information indicating a user's line of sight.
- (5) The information processing apparatus according to any one of (2) to (4), wherein the context information includes information indicating a user's line of sight, and the output control unit displays the prediction information of the second user determined to be similar, in terms of the context information, to a third user to whom the first user has paid attention in the past.
- (6) The information processing apparatus according to any one of (1) to (5), wherein the output control unit displays the prediction information generated based on the context information of the second user.
- (7) The context information includes information indicating a user's behavior.
- (9) The information processing apparatus according to (7) or (8), wherein the output control unit displays the prediction information indicating the time at which the second user is predicted to start moving.
- (10) The information processing apparatus according to any one of (6) to (9), wherein the context information includes information indicating a user's human relationships, and the output control unit displays the prediction information corresponding to the human relationship between the first user and the second user.
- (11) The information processing apparatus according to any one of (6) to (10), wherein the context information includes user attribute information, and the output control unit displays the prediction information generated based on the history of the context information of a third user determined to have attribute information similar to that of the second user, when there is no history, or insufficient history, of the context information of the second user.
- (13) The information processing apparatus according to (12), wherein the permission is granted based on an instruction from the second user, or on a setting relating to at least one of the position of the second user and the human relationship between the first user and the second user.
- (14) The output control unit displays the prediction information modified based on an instruction from the second user.
- (15) The output control unit displays the prediction information indicating a prediction result of the context information of the first user.
- (16) The output control unit displays the context information of the second user in addition to the prediction information.
- (17) The information processing apparatus according to any one of (1) to (16), wherein the context information includes at least one of information indicating the user's behavior, information indicating the user's position, information indicating the user's line of sight, information output by the user, information indicating the user's state, user attribute information, and information indicating the user's human relationships.
- (18) The information processing apparatus according to any one of (1) to (17), wherein the prediction information is displayed by a terminal device of the first user or an output device provided around the first user.
- (19) An information processing method including outputting, to a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being related to context information of the first user and generated based on a history of the context information of the second user.
- (20) A program for causing a computer to function as an output control unit that outputs, to a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being related to context information of the first user and generated based on a history of the context information of the second user.
- Information processing system 100 Server 110 Communication unit 120 Context information DB 130 Predictor DB 140 processing unit 141 acquisition unit 142 learning unit 143 generation unit 144 output control unit 200 user terminal 210 communication unit 220 recognition unit 230 output unit 300 recognition device 310 communication unit 320 recognition unit 400 output device 410 communication unit 420 output unit 500 external device
Abstract
Description
1. Overview
2. Configuration example
3. Technical features
3.1. Context information
3.2. Learning and prediction
3.3. Selection of the second user
3.4. Generation of prediction information
3.5. Settings for permitting or denying output
3.6. Filtering of generated prediction information
3.7. Output of the user's own prediction information
4. Example operation processes
5. Example hardware configuration
6. Conclusion
First, an overview of an information processing system according to an embodiment of the present disclosure will be described with reference to FIG. 1.
FIG. 2 is a block diagram showing an example of the logical configuration of the information processing system 1 according to the present embodiment. As shown in FIG. 2, the information processing system 1 according to the present embodiment includes a server 100, a user terminal 200, a recognition device 300, an output device 400, and an external device 500.
As shown in FIG. 2, the server 100 includes a communication unit 110, a context information DB 120, a predictor DB 130, and a processing unit 140.
As shown in FIG. 2, the user terminal 200 includes a communication unit 210, a recognition unit 220, and an output unit 230.
As shown in FIG. 2, the recognition device 300 includes a communication unit 310 and a recognition unit 320. The configurations of the communication unit 310 and the recognition unit 320 are similar to those of the communication unit 210 and the recognition unit 220.
As shown in FIG. 2, the output device 400 includes a communication unit 410 and an output unit 420. The configurations of the communication unit 410 and the output unit 420 are similar to those of the communication unit 210 and the output unit 230.
The external device 500 is a device that holds information about users. For example, the external device 500 is an SNS (social networking service) server, a mail server, a server providing a service that uses location information, or the like. The external device 500 transmits the users' context information to the server 100.
<3.1. Context information>
Context information is information indicating the situation in which a user is placed. Context information may be recognized from various kinds of information about the user, or may be input by the user. Examples of context information are described below.
For example, the context information may include information indicating the user's behavior.
For example, the context information may include information indicating the user's position.
For example, the context information may include information indicating the user's line of sight.
For example, the context information may include information output by the user.
For example, the context information may include information indicating the user's state.
For example, the context information may include the user's attribute information.
For example, the context information may include information indicating the user's human relationships.
The server 100 (e.g., the learning unit 142) trains a predictor. The server 100 (e.g., the generation unit 143) then predicts context information using the predictor.
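Purely as an illustrative sketch of this learning-and-prediction step (a first-order Markov model chosen for brevity; the embodiment does not specify a predictor architecture, and the context labels below are invented for the example), a predictor can be trained on a user's context-information history and queried for the likely next context:

```python
from collections import Counter, defaultdict

class ContextPredictor:
    """Toy first-order Markov predictor over context labels."""

    def __init__(self):
        # transitions[current][next] = observed count
        self.transitions = defaultdict(Counter)

    def fit(self, history):
        # Count how often each context label is followed by each other label.
        for current, nxt in zip(history, history[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        # Most frequently observed successor, or None for an unseen context.
        counts = self.transitions.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

history = ["sitting", "sitting", "standing", "alighting", "sitting", "standing"]
predictor = ContextPredictor()
predictor.fit(history)
print(predictor.predict("sitting"))  # → standing
```

A real system would condition on richer features (time of day, position, line of sight), but the structure — fit on history, then query — is the same.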
The server 100 (e.g., the generation unit 143 and the output control unit 144) causes prediction information related to the first user's context information to be displayed to the first user. To do so, the server 100 selects which user's prediction information should be displayed to the first user, i.e., selects which user to treat as the second user.
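As a hypothetical sketch of this selection (the similarity measure, attribute labels, and threshold are all invented for illustration and are not the embodiment's stated method), candidate second users could be ranked by the overlap between their context attributes and the first user's:

```python
def jaccard(a, b):
    """Similarity between two sets of context attributes (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def select_second_users(first_user_context, candidates, threshold=0.3):
    """Return IDs of candidate users whose context is related enough
    to the first user's context to be treated as 'second users'."""
    return [uid for uid, ctx in candidates.items()
            if jaccard(first_user_context, ctx) >= threshold]

first = {"on_train", "line:A", "car:3"}
candidates = {
    "user_b": {"on_train", "line:A", "car:3", "seated"},  # same train car
    "user_c": {"at_home"},                                # unrelated context
}
print(select_second_users(first, candidates))  # → ['user_b']
```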
The server 100 (e.g., the generation unit 143 and the output control unit 144) causes prediction information related to the first user's context information to be displayed to the first user. In doing so, the server 100 (e.g., the generation unit 143 and the output control unit 144) displays prediction information generated based on the second user's context information. In other words, the server 100 controls the content of the displayed prediction information based on the context information of both the first user and the second user. This lets the first user see prediction information whose content matches his or her own situation.
FIG. 3 is a diagram for describing a display example by the information processing system 1 according to the present embodiment. The real-space image 20 shown in FIG. 3 shows the inside of a train as seen by a first user riding the train. In the real-space image 20, for a second user 21 who is sitting in a seat and has been determined not to be moving, prediction information 22 is displayed indicating the predicted time until the second user stands up (i.e., alights), in other words the time at which the second user is predicted to start moving. According to the prediction information 22, the second user 21 is predicted to alight in two minutes. This allows the first user to prepare to make way for the alighting second user, or to prepare to take the seat the second user is occupying. It is also beneficial for the second user, who can alight more comfortably because the surrounding users can be expected to prepare in advance for the alighting. In this way, disclosing the second user's prediction information to the first user can benefit both users.
FIG. 7 is a diagram for describing a display example by the information processing system 1 according to the present embodiment. The real-space image 50 shown in FIG. 7 shows prediction information 54 and 55 for cars 52 and 53 (more precisely, for the second users driving cars 52 and 53), as seen by a first user driving a car 51. For car 52, which has been determined not to be moving (e.g., to be parked), prediction information 54 indicating the predicted time until it starts moving is displayed. Since 5 minutes and 3 seconds remain, the first user can overtake car 52 with confidence. For car 53, which has been determined to be moving (e.g., driving, or briefly stopped before a right turn), prediction information 55 indicating the predicted movement trajectory is displayed. The first user can easily see that the trajectory crosses his or her own path and can therefore stop safely. Displaying such prediction information improves traffic safety.
FIG. 10 is a diagram for describing a display example by the information processing system 1 according to the present embodiment. The real-space image 80 shown in FIG. 10 shows the inside of an elevator. The real-space image 80 also displays prediction information 81 indicating a prediction of how many second users will pay attention to the first user. More specifically, the prediction information 81 shows the time-series change in the number of people predicted to pay attention to the first user. This lets the first user, for example, tidy up his or her appearance before the doors open and attention gathers.
FIG. 11 is a diagram for describing a display example by the information processing system 1 according to the present embodiment. The real-space image 30 shown in FIG. 11 displays, in addition to the real-space image 30 shown in FIG. 4, information 39 indicating the current emotion of the second user 31. The information 39 shows that the second user 31 is currently pleased. In this way, the server 100 may display the second user 31's context information in addition to the prediction information 33. This enables the first user to act according to the situation the second user is currently in, for example by helping when the second user is in trouble.
The server 100 can display a variety of other prediction information as well. For example, the server 100 may display keywords that the second user is predicted to speak. The first user can then use those keywords to liven up the conversation.
The server 100 (e.g., the generation unit 143 and the output control unit 144) can set whether a user's prediction information may be output to other users. From the viewpoint of displaying prediction information to the first user, the server 100 can set whether the second user's prediction information may be output to the first user.
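A minimal sketch of such an output-permission setting (the policy keys, relationship labels, and location labels below are assumptions made for illustration, not the embodiment's actual data model) might check the second user's sharing policy before anything is displayed:

```python
def is_output_permitted(policy, relationship, location):
    """Check a second user's sharing policy before displaying that user's
    prediction information to a first user. Permission is granted if either
    the relationship between the users or the second user's current location
    is on an allow-list, unless sharing is denied outright."""
    if policy.get("explicit_deny"):
        return False
    allowed_rel = relationship in policy.get("allowed_relationships", set())
    allowed_loc = location in policy.get("allowed_locations", set())
    return allowed_rel or allowed_loc

policy = {
    "allowed_relationships": {"friend", "family"},
    "allowed_locations": {"train"},  # e.g. share alighting predictions on a train
}
print(is_output_permitted(policy, "stranger", "train"))   # → True
print(is_output_permitted(policy, "stranger", "office"))  # → False
```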
The server 100 (e.g., the generation unit 143 and the output control unit 144) may filter the generated prediction information. For example, the server 100 displays part of the generated prediction information while withholding the rest.
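Such filtering could, as a hypothetical sketch (the confidence scores, category names, and threshold are invented for the example; the embodiment does not prescribe these criteria), keep only predictions that are confident enough and not in a withheld category:

```python
def filter_predictions(predictions, min_confidence=0.6,
                       blocked_categories=frozenset({"health"})):
    """Show only predictions that are confident enough and whose category
    is not withheld; everything else is generated but never displayed."""
    return [p for p in predictions
            if p["confidence"] >= min_confidence
            and p["category"] not in blocked_categories]

predictions = [
    {"category": "movement", "text": "alights in 2 min", "confidence": 0.9},
    {"category": "movement", "text": "turns right",      "confidence": 0.4},
    {"category": "health",   "text": "is tired",         "confidence": 0.8},
]
shown = filter_predictions(predictions)
print([p["text"] for p in shown])  # → ['alights in 2 min']
```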
The server 100 (e.g., the generation unit 143 and the output control unit 144) may also display prediction information indicating a prediction of the first user's own context information, targeted at the first user. That is, the server 100 may make the first user's prediction information, which is displayed to other users, visible to the first user as well. This lets the first user know what prediction information about himself or herself is disclosed to other users.
(1) Learning process
FIG. 13 is a flowchart showing an example flow of the predictor learning process executed by the server 100 according to the present embodiment.
FIG. 14 is a flowchart showing an example flow of the prediction-information display process executed by the server 100 according to the present embodiment.
FIG. 15 is a flowchart showing an example flow of the second-user selection process executed by the server 100 according to the present embodiment. This flow details step S202 in FIG. 14.
FIG. 16 is a flowchart showing an example flow of the prediction-information generation process executed by the server 100 according to the present embodiment. This flow details step S204 in FIG. 14.
Finally, the hardware configuration of the information processing apparatus according to the present embodiment will be described with reference to FIG. 17. FIG. 17 is a block diagram showing an example of the hardware configuration of the information processing apparatus according to the present embodiment. The information processing apparatus 900 shown in FIG. 17 can implement, for example, the server 100, the user terminal 200, the recognition device 300, the output device 400, or the external device 500 shown in FIG. 2. Information processing by the server 100, the user terminal 200, the recognition device 300, the output device 400, or the external device 500 according to the present embodiment is realized by cooperation between software and the hardware described below.
The information processing system 1 according to an embodiment of the present disclosure has been described in detail above with reference to FIGS. 1 to 17. As described above, the information processing system 1 according to the present embodiment can display, targeted at a first user, the second user's prediction information that is related to the first user's context information and is generated based on a history of the second user's context information. The second user's prediction information is thereby presented as information useful to the first user. The first user can, for example, see the second user's future behavior, which enables smooth communication and makes it easy for the first user to plan his or her own actions.
(1)
An information processing apparatus including:
an output control unit configured to output, targeted at a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being generated based on a history of the context information of the second user and being related to the context information of the first user.
(2)
The information processing apparatus according to (1), wherein the output control unit displays the prediction information of the second user selected from among a plurality of other users based on the context information of the first user.
(3)
The information processing apparatus according to (2), wherein the output control unit displays the prediction information of the second user whose context information is determined to be related to that of the first user.
(4)
The information processing apparatus according to (2) or (3), wherein
the context information includes information indicating a user's line of sight, and
the output control unit displays the prediction information of the second user determined to be receiving the first user's attention.
(5)
The information processing apparatus according to any one of (2) to (4), wherein
the context information includes information indicating a user's line of sight, and
the output control unit displays the prediction information of the second user whose context information is determined to be similar to that of a third user to whom the first user paid attention in the past.
(6)
The information processing apparatus according to any one of (1) to (5), wherein the output control unit displays the prediction information generated based on the context information of the second user.
(7)
The information processing apparatus according to (6), wherein
the context information includes information indicating a user's behavior, and
the output control unit displays different prediction information depending on whether the second user is moving.
(8)
The information processing apparatus according to (7), wherein, when the second user is determined to be moving, the output control unit displays the prediction information indicating a prediction result of the movement trajectory of the second user.
(9)
The information processing apparatus according to (7) or (8), wherein, when the second user is determined not to be moving, the output control unit displays the prediction information indicating the time at which the second user is predicted to start moving.
(10)
The information processing apparatus according to any one of (6) to (9), wherein
the context information includes information indicating users' human relationships, and
the output control unit displays the prediction information corresponding to the human relationship between the first user and the second user.
(11)
The information processing apparatus according to any one of (6) to (10), wherein
the context information includes user attribute information, and
the output control unit displays, when there is no or insufficient history of the context information of the second user, the prediction information generated based on the history of the context information of a third user whose attribute information is determined to be similar to that of the second user.
(12)
The information processing apparatus according to any one of (1) to (11), wherein the output control unit displays the prediction information that the second user has permitted to be displayed.
(13)
The information processing apparatus according to (12), wherein the permission is given based on an instruction from the second user, or based on a setting relating to at least one of the position of the second user or the human relationship between the first user and the second user.
(14)
The information processing apparatus according to any one of (1) to (13), wherein the output control unit displays the prediction information corrected based on an instruction from the second user.
(15)
The information processing apparatus according to any one of (1) to (14), wherein the output control unit displays the prediction information indicating a prediction result of the context information of the first user.
(16)
The information processing apparatus according to any one of (1) to (15), wherein the output control unit displays the context information of the second user in addition to the prediction information.
(17)
The information processing apparatus according to any one of (1) to (16), wherein the context information includes at least one of information indicating a user's behavior, information indicating a user's position, information indicating a user's line of sight, information output by a user, information indicating a user's state, user attribute information, or information indicating users' human relationships.
(18)
The information processing apparatus according to any one of (1) to (17), wherein the output control unit displays the prediction information on a terminal device of the first user or on an output device provided around the first user.
(19)
An information processing method including:
outputting by a processor, targeted at a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being generated based on a history of the context information of the second user and being related to the context information of the first user.
(20)
A program for causing a computer to function as:
an output control unit configured to output, targeted at a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being generated based on a history of the context information of the second user and being related to the context information of the first user.
100 Server
110 Communication unit
120 Context information DB
130 Predictor DB
140 Processing unit
141 Acquisition unit
142 Learning unit
143 Generation unit
144 Output control unit
200 User terminal
210 Communication unit
220 Recognition unit
230 Output unit
300 Recognition device
310 Communication unit
320 Recognition unit
400 Output device
410 Communication unit
420 Output unit
500 External device
Claims (20)
- An information processing apparatus comprising: an output control unit configured to output, targeted at a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being generated based on a history of the context information of the second user and being related to the context information of the first user.
- The information processing apparatus according to claim 1, wherein the output control unit displays the prediction information of the second user selected from among a plurality of other users based on the context information of the first user.
- The information processing apparatus according to claim 2, wherein the output control unit displays the prediction information of the second user whose context information is determined to be related to that of the first user.
- The information processing apparatus according to claim 2, wherein the context information includes information indicating a user's line of sight, and the output control unit displays the prediction information of the second user determined to be receiving the first user's attention.
- The information processing apparatus according to claim 2, wherein the context information includes information indicating a user's line of sight, and the output control unit displays the prediction information of the second user whose context information is determined to be similar to that of a third user to whom the first user paid attention in the past.
- The information processing apparatus according to claim 1, wherein the output control unit displays the prediction information generated based on the context information of the second user.
- The information processing apparatus according to claim 6, wherein the context information includes information indicating a user's behavior, and the output control unit displays different prediction information depending on whether the second user is moving.
- The information processing apparatus according to claim 7, wherein, when the second user is determined to be moving, the output control unit displays the prediction information indicating a prediction result of the movement trajectory of the second user.
- The information processing apparatus according to claim 7, wherein, when the second user is determined not to be moving, the output control unit displays the prediction information indicating the time at which the second user is predicted to start moving.
- The information processing apparatus according to claim 6, wherein the context information includes information indicating users' human relationships, and the output control unit displays the prediction information corresponding to the human relationship between the first user and the second user.
- The information processing apparatus according to claim 6, wherein the context information includes user attribute information, and the output control unit displays, when there is no or insufficient history of the context information of the second user, the prediction information generated based on the history of the context information of a third user whose attribute information is determined to be similar to that of the second user.
- The information processing apparatus according to claim 1, wherein the output control unit displays the prediction information that the second user has permitted to be displayed.
- The information processing apparatus according to claim 12, wherein the permission is given based on an instruction from the second user, or based on a setting relating to at least one of the position of the second user or the human relationship between the first user and the second user.
- The information processing apparatus according to claim 1, wherein the output control unit displays the prediction information corrected based on an instruction from the second user.
- The information processing apparatus according to claim 1, wherein the output control unit displays the prediction information indicating a prediction result of the context information of the first user.
- The information processing apparatus according to claim 1, wherein the output control unit displays the context information of the second user in addition to the prediction information.
- The information processing apparatus according to claim 1, wherein the context information includes at least one of information indicating a user's behavior, information indicating a user's position, information indicating a user's line of sight, information output by a user, information indicating a user's state, user attribute information, or information indicating users' human relationships.
- The information processing apparatus according to claim 1, wherein the output control unit displays the prediction information on a terminal device of the first user or on an output device provided around the first user.
- An information processing method comprising: outputting by a processor, targeted at a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being generated based on a history of the context information of the second user and being related to the context information of the first user.
- A program for causing a computer to function as: an output control unit configured to output, targeted at a first user, prediction information indicating a prediction result of context information of a second user, the prediction information being generated based on a history of the context information of the second user and being related to the context information of the first user.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017517618A JPWO2016181670A1 (ja) | 2015-05-11 | 2016-01-28 | Information processing apparatus, information processing method, and program |
EP16792396.0A EP3296944A4 (en) | 2015-05-11 | 2016-01-28 | Information processing device, information processing method, and program |
US15/546,708 US20180025283A1 (en) | 2015-05-11 | 2016-01-28 | Information processing apparatus, information processing method, and program |
CN201680026010.XA CN107533712A (zh) | 2015-05-11 | 2016-01-28 | 信息处理装置、信息处理方法和程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-096596 | 2015-05-11 | ||
JP2015096596 | 2015-05-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016181670A1 true WO2016181670A1 (ja) | 2016-11-17 |
Family
ID=57247933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/052491 WO2016181670A1 (ja) | 2016-01-28 | Information processing apparatus, information processing method, and program |
Country Status (5)
Country | Link |
---|---|
US (1) | US20180025283A1 (ja) |
EP (1) | EP3296944A4 (ja) |
JP (1) | JPWO2016181670A1 (ja) |
CN (1) | CN107533712A (ja) |
WO (1) | WO2016181670A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190136337A (ko) * | 2018-05-30 | 2019-12-10 | 가천대학교 산학협력단 | Sentiment analysis method, system and computer-readable medium based on social media content |
JPWO2018168247A1 (ja) * | 2017-03-15 | 2020-01-23 | ソニー株式会社 | Information processing apparatus, information processing method and program |
JP2020091836A (ja) * | 2018-10-12 | 2020-06-11 | アクセンチュア グローバル ソリューションズ リミテッド | Real-time motion feedback for augmented reality |
JP2021149697A (ja) * | 2020-03-19 | 2021-09-27 | ヤフー株式会社 | Output device, output method and output program |
WO2023135939A1 (ja) * | 2022-01-17 | 2023-07-20 | ソニーグループ株式会社 | Information processing apparatus, information processing method, and program |
WO2024075817A1 (ja) * | 2022-10-07 | 2024-04-11 | 株式会社日立製作所 | Display method and display system |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11669345B2 (en) * | 2018-03-13 | 2023-06-06 | Cloudblue Llc | System and method for generating prediction based GUIs to improve GUI response times |
US20190378334A1 (en) * | 2018-06-08 | 2019-12-12 | Vulcan Inc. | Augmented reality portal-based applications |
CN110334669B (zh) * | 2019-07-10 | 2021-06-08 | 深圳市华腾物联科技有限公司 | 一种形态特征识别的方法和设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002297832A (ja) * | 2001-03-30 | 2002-10-11 | Fujitsu Ltd | Information processing apparatus, fee presentation program, and fee presentation method |
JP2006345269A (ja) * | 2005-06-09 | 2006-12-21 | Sony Corp | Information processing apparatus and method, and program |
JP2012221234A (ja) * | 2011-04-08 | 2012-11-12 | Sony Computer Entertainment Inc | Image processing apparatus and image processing method |
JP2013171516A (ja) * | 2012-02-22 | 2013-09-02 | Nec Corp | Prediction information presentation system, prediction information presentation apparatus, prediction information presentation method, and prediction information presentation program |
JP2014123277A (ja) * | 2012-12-21 | 2014-07-03 | Sony Corp | Display control system and recording medium |
JP2015056772A (ja) * | 2013-09-12 | 2015-03-23 | カシオ計算機株式会社 | Image processing apparatus, image processing method, and program |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030210228A1 (en) * | 2000-02-25 | 2003-11-13 | Ebersole John Franklin | Augmented reality situational awareness system and method |
US7233933B2 (en) * | 2001-06-28 | 2007-06-19 | Microsoft Corporation | Methods and architecture for cross-device activity monitoring, reasoning, and visualization for providing status and forecasts of a users' presence and availability |
JP5495014B2 (ja) * | 2009-09-09 | 2014-05-21 | ソニー株式会社 | Data processing device, data processing method, and program |
US20110153343A1 (en) * | 2009-12-22 | 2011-06-23 | Carefusion 303, Inc. | Adaptable medical workflow system |
US9348141B2 (en) * | 2010-10-27 | 2016-05-24 | Microsoft Technology Licensing, Llc | Low-latency fusing of virtual and real content |
US9019174B2 (en) * | 2012-10-31 | 2015-04-28 | Microsoft Technology Licensing, Llc | Wearable emotion detection and feedback system |
US9959674B2 (en) * | 2013-02-26 | 2018-05-01 | Qualcomm Incorporated | Directional and X-ray view techniques for navigation using a mobile device |
US9500865B2 (en) * | 2013-03-04 | 2016-11-22 | Alex C. Chen | Method and apparatus for recognizing behavior and providing information |
US8738292B1 (en) * | 2013-05-14 | 2014-05-27 | Google Inc. | Predictive transit calculations |
-
2016
- 2016-01-28 JP JP2017517618A patent/JPWO2016181670A1/ja active Pending
- 2016-01-28 US US15/546,708 patent/US20180025283A1/en not_active Abandoned
- 2016-01-28 WO PCT/JP2016/052491 patent/WO2016181670A1/ja active Application Filing
- 2016-01-28 EP EP16792396.0A patent/EP3296944A4/en not_active Ceased
- 2016-01-28 CN CN201680026010.XA patent/CN107533712A/zh not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002297832A (ja) * | 2001-03-30 | 2002-10-11 | Fujitsu Ltd | Information processing apparatus, fee presentation program, and fee presentation method |
JP2006345269A (ja) * | 2005-06-09 | 2006-12-21 | Sony Corp | Information processing apparatus and method, and program |
JP2012221234A (ja) * | 2011-04-08 | 2012-11-12 | Sony Computer Entertainment Inc | Image processing apparatus and image processing method |
JP2013171516A (ja) * | 2012-02-22 | 2013-09-02 | Nec Corp | Prediction information presentation system, prediction information presentation apparatus, prediction information presentation method, and prediction information presentation program |
JP2014123277A (ja) * | 2012-12-21 | 2014-07-03 | Sony Corp | Display control system and recording medium |
JP2015056772A (ja) * | 2013-09-12 | 2015-03-23 | カシオ計算機株式会社 | Image processing apparatus, image processing method, and program |
Non-Patent Citations (1)
Title |
---|
See also references of EP3296944A4 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2018168247A1 (ja) * | 2017-03-15 | 2020-01-23 | ソニー株式会社 | Information processing apparatus, information processing method and program |
US11244510B2 (en) | 2017-03-15 | 2022-02-08 | Sony Corporation | Information processing apparatus and method capable of flexibility setting virtual objects in a virtual space |
JP7131542B2 (ja) | 2017-03-15 | 2022-09-06 | ソニーグループ株式会社 | Information processing apparatus, information processing method and program |
KR20190136337A (ko) * | 2018-05-30 | 2019-12-10 | 가천대학교 산학협력단 | Sentiment analysis method, system and computer-readable medium based on social media content |
KR102111672B1 (ko) | 2018-05-30 | 2020-05-15 | 가천대학교 산학협력단 | Sentiment analysis method, system and computer-readable medium based on social media content |
JP2020091836A (ja) * | 2018-10-12 | 2020-06-11 | アクセンチュア グローバル ソリューションズ リミテッド | Real-time motion feedback for augmented reality |
JP2021149697A (ja) * | 2020-03-19 | 2021-09-27 | ヤフー株式会社 | Output device, output method and output program |
JP7405660B2 (ja) | 2020-03-19 | 2023-12-26 | Lineヤフー株式会社 | Output device, output method and output program |
WO2023135939A1 (ja) * | 2022-01-17 | 2023-07-20 | ソニーグループ株式会社 | Information processing apparatus, information processing method, and program |
WO2024075817A1 (ja) * | 2022-10-07 | 2024-04-11 | 株式会社日立製作所 | Display method and display system |
Also Published As
Publication number | Publication date |
---|---|
EP3296944A4 (en) | 2018-11-07 |
JPWO2016181670A1 (ja) | 2018-03-01 |
US20180025283A1 (en) | 2018-01-25 |
EP3296944A1 (en) | 2018-03-21 |
CN107533712A (zh) | 2018-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016181670A1 (ja) | Information processing apparatus, information processing method, and program | |
US10853650B2 (en) | Information processing apparatus, information processing method, and program | |
KR102334942B1 (ko) | Data processing method and device for a care robot | |
US10302444B2 (en) | Information processing system and control method | |
US20190383620A1 (en) | Information processing apparatus, information processing method, and program | |
US9316502B2 (en) | Intelligent mobility aid device and method of navigating and providing assistance to a user thereof | |
JP5904021B2 (ja) | Information processing apparatus, electronic device, information processing method, and program | |
US11302325B2 (en) | Automatic dialogue design | |
JP2005315802A (ja) | User support apparatus | |
US20210145340A1 (en) | Information processing system, information processing method, and recording medium | |
CN110996796A (zh) | 信息处理设备、方法和程序 | |
US20170131103A1 (en) | Information processing apparatus, information processing method, and program | |
US20220306155A1 (en) | Information processing circuitry and information processing method | |
US20220357172A1 (en) | Sentiment-based navigation | |
JPWO2018163560A1 (ja) | Information processing apparatus, information processing method and program | |
WO2020026986A1 (ja) | Information processing apparatus, information processing method and program | |
WO2015194270A1 (ja) | Information processing apparatus, information processing method and program | |
US11270682B2 (en) | Information processing device and information processing method for presentation of word-of-mouth information | |
WO2022124164A1 (ja) | Attention target sharing device and attention target sharing method | |
WO2019054009A1 (ja) | Information processing apparatus, information processing method, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16792396 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017517618 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15546708 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2016792396 Country of ref document: EP |