WO2016136104A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number
- WO2016136104A1 (PCT/JP2015/085377)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- information
- content
- information processing
- context information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/30—Profiles
- H04L67/306—User profiles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/636—Filtering based on additional data, e.g. user or group profiles by using biological or physiological data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/637—Administration of user profiles, e.g. generation, initialization, adaptation or distribution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02438—Detecting, measuring or recording pulse rate or heart rate with portable devices, e.g. worn by the patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1112—Global tracking of patients, e.g. by using GPS
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
- A61B5/1114—Tracking parts of the body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W88/00—Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
- H04W88/02—Terminal devices
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a program.
- the technology described in Patent Document 1 may not extract content appropriate for the user.
- the present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of extracting appropriate content according to the user's state.
- according to the present disclosure, there is provided an information processing apparatus including: a context information acquisition unit that acquires context information about the user's state obtained by analyzing information including at least one piece of sensing data about the user; and a content extraction unit that extracts one or a plurality of contents from a content group based on the context information.
- according to the present disclosure, there is also provided an information processing method including: acquiring context information about the user's state obtained by analyzing information including at least one piece of sensing data about the user; and extracting, by a processor, one or a plurality of contents from a content group based on the context information.
- according to the present disclosure, there is further provided a program causing a computer to realize a function of acquiring context information regarding the user's state obtained by analyzing information including at least one piece of sensing data regarding the user, and a function of extracting one or a plurality of contents from a content group based on the context information.
- FIG. 1 is a system diagram illustrating the configuration of a system according to the first and second embodiments of the present disclosure.
- FIG. 2 is a diagram illustrating the functional configuration of a detection device according to the first and second embodiments of the present disclosure.
- FIG. 3 is a diagram illustrating the functional configuration of a server according to the first embodiment of the present disclosure.
- FIG. 4 is a diagram illustrating the functional configuration of a terminal device according to the first and second embodiments of the present disclosure.
- FIG. 5 is a diagram illustrating an information processing sequence according to the first embodiment of the present disclosure.
- FIGS. 6 and 7 are explanatory diagrams (parts 1 and 2) for describing the first embodiment of the information processing method, and FIG. 8 is an explanatory diagram for describing the second embodiment.
- 1. First Embodiment 1-1. System configuration 1-2. Functional configuration of detection device 1-3. Functional configuration of server 1-4. Functional configuration of terminal device 2. Information processing method 2-1. First embodiment 2-2. Second embodiment 2-3. Third embodiment 2-4. Fourth embodiment 3. Second Embodiment 3-1. Functional configuration of server 3-2. Information processing method 3-3. Fifth embodiment 4. Hardware configuration 5. Supplement
- FIG. 1 is a system diagram illustrating a schematic configuration of a system according to the first embodiment of the present disclosure.
- the system 10 may include a detection device 100, a server 200, and a terminal device 300.
- the detection device 100, the server 200, and the terminal device 300 can communicate with each other via various wired or wireless networks.
- the number of detection devices 100 and terminal devices 300 included in the system 10 is not limited to the numbers illustrated in FIG. 1, and may be more or less.
- the detection apparatus 100 detects the state of one or a plurality of users, and transmits sensing data regarding the detected user state to the server 200.
- the server 200 acquires the sensing data transmitted from the detection device 100, analyzes the acquired sensing data, and acquires context information indicating the state of the user. Further, the server 200 extracts one or a plurality of contents, based on the acquired context information, from a group of contents that can be acquired via the network. The server 200 can also transmit content information regarding the extracted content or contents (content title, storage location, substance, format, capacity, etc.) to the terminal device 300 or the like.
- the terminal device 300 can output the content information transmitted from the server 200 to the user.
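- The data flow described above (sensing data from the detection device 100, context information generated in the server 200, content information output by the terminal device 300) can be illustrated with a minimal Python sketch. The class names and fields below are illustrative assumptions chosen for this example; the disclosure does not prescribe a concrete data schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SensingData:
    """One reading from the detection device 100 (assumed, illustrative schema)."""
    user_id: str
    sensor_type: str          # e.g. "acceleration", "sound", "heart_rate", "position"
    value: object             # raw reading; shape depends on sensor_type
    timestamp: float          # seconds since epoch

@dataclass
class ContextInformation:
    """User state produced by the context information acquisition unit 230."""
    user_id: str
    keywords: List[str] = field(default_factory=list)   # e.g. ["soccer", "excitement"]
    emotion_index: dict = field(default_factory=dict)   # e.g. {"excitement": 80, "joy": 60}
    location: Optional[str] = None                      # e.g. "living room"
    nearby_devices: List[str] = field(default_factory=list)
    timestamp: float = 0.0

@dataclass
class ContentInformation:
    """Information about one extracted content item (title, storage location, format, ...)."""
    title: str
    storage_location: str     # URL or local path
    content_format: str       # "text", "still_image", "video", "audio"
    capacity_bytes: int = 0
    fitness: float = 0.0      # degree of match with the context information
```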
- All of the detection device 100, the server 200, and the terminal device 300 can be realized by, for example, a hardware configuration of an information processing device to be described later.
- each device does not necessarily have to be realized by a single information processing device, and may be realized by, for example, a plurality of information processing devices that are connected via various wired or wireless networks and cooperate with each other.
- the detection apparatus 100 may be a wearable device that is worn on a part of the user's body, such as eyewear, wristwear, or a ring-type terminal. Alternatively, the detection apparatus 100 may be an independent camera or microphone that is fixedly installed. Furthermore, the detection device 100 may be included in a device carried by the user, such as a mobile phone (including a smartphone), a tablet or notebook PC (Personal Computer), a portable media player, or a portable game machine. The detection device 100 may also be included in a device installed around the user, such as a desktop PC or TV, a stationary media player, a stationary game machine, or a stationary telephone. Note that the detection device 100 does not necessarily have to be included in such a terminal device.
- FIG. 2 is a diagram illustrating a schematic functional configuration of the detection apparatus 100 according to the first embodiment of the present disclosure.
- the detection device 100 includes a sensing unit 110 and a transmission unit 130.
- the sensing unit 110 includes at least one sensor that provides sensing data regarding the user.
- the sensing unit 110 outputs the generated sensing data to the transmission unit 130, and the transmission unit 130 transmits the sensing data to the server 200.
- the sensing unit 110 may include a motion sensor that detects a user's operation, a sound sensor that detects sound generated around the user, a biological sensor that detects user's biological information, and the like.
- the sensing unit 110 may also include a position sensor that detects the user's position information. When the sensing unit 110 includes a plurality of sensors, for example, it may be separated into a plurality of parts.
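- As a rough illustration of how a sensing unit 110 containing several of the sensors listed above might hand readings to the transmission unit 130, the following sketch collects one reading per registered sensor. The interface and the example sensors are assumptions, not the disclosed implementation.

```python
import time
from typing import Callable, Dict, List

class SensingUnit:
    """Minimal sketch of sensing unit 110: each registered sensor is a callable
    returning its current reading. This interface is an assumption for illustration."""

    def __init__(self) -> None:
        self._sensors: Dict[str, Callable[[], object]] = {}

    def register_sensor(self, sensor_type: str, read_fn: Callable[[], object]) -> None:
        self._sensors[sensor_type] = read_fn

    def collect(self, user_id: str) -> List[dict]:
        """Return one timestamped record per sensor, ready for transmission."""
        now = time.time()
        return [{"user_id": user_id, "sensor_type": s, "value": read(), "timestamp": now}
                for s, read in self._sensors.items()]

# Hypothetical usage: a motion sensor and a heart-rate sensor feeding the same unit.
unit = SensingUnit()
unit.register_sensor("acceleration", lambda: (0.1, 9.8, 0.0))   # m/s^2 on three axes
unit.register_sensor("heart_rate", lambda: 72)                  # beats per minute
readings = unit.collect(user_id="user_a")
```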
- the motion sensor is a sensor that detects a user's operation, and specifically includes an acceleration sensor and a gyro sensor. Specifically, the motion sensor detects changes in acceleration, angular velocity, etc. that occur with the user's movement, and generates sensing data indicating these detected changes.
- the sound sensor can be specifically a sound collecting device such as a microphone.
- the sound sensor can detect not only sounds generated by the user's utterances (which are not limited to speech and may include sounds without linguistic meaning, such as onomatopoeia or exclamations), but also sounds generated by user actions such as clapping hands, environmental sounds around the user, utterances of people located around the user, and the like. Further, the sound sensor may be optimized to detect a single type of sound among the types exemplified above, or may be configured to detect a plurality of types of sounds.
- the biosensor is a sensor that detects a user's biometric information.
- the biosensor may be directly attached to a part of the user's body and may include sensors that measure the heart rate, blood pressure, brain waves, respiration, perspiration, myoelectric potential, skin temperature, skin electrical resistance, and the like.
- the biosensor may include an imaging device and detect eye movement, pupil size, gaze time, and the like.
- the position sensor is a sensor that detects the position of the user or the like; specifically, it can be a GNSS (Global Navigation Satellite System) receiver or the like. In this case, the position sensor generates sensing data indicating the latitude and longitude of the current location based on signals from GNSS satellites.
- a receiver that receives a wireless signal such as Bluetooth (registered trademark) from a terminal device 300 existing around the user can also be used as a position sensor that detects the relative positional relationship with that terminal device 300.
- the sensing unit 110 may also include an imaging device that captures images of the user and the user's surroundings, using various members such as a lens for forming a subject image on an image sensor.
- the user's operation is captured in the image captured by the imaging device.
- the sensing unit 110 may include various sensors in addition to those described above, such as a temperature sensor that measures the ambient temperature.
- the detection apparatus 100 may include a receiving unit (not shown) that acquires information such as control information for controlling the sensing unit 110.
- the receiving unit is realized by a communication device that communicates with the server 200 via a network.
- FIG. 3 is a diagram illustrating a schematic functional configuration of the server 200 according to the first embodiment of the present disclosure.
- the server 200 may include a reception unit 210, a storage 220, a context information acquisition unit 230, a content extraction unit 240, an output control unit 250, and a transmission unit 260.
- the context information acquisition unit 230, the content extraction unit 240, and the output control unit 250 are realized by software using, for example, a CPU (Central Processing Unit).
- Part or all of the functions of the server 200 may be realized by the detection device 100 or the terminal device 300.
- the receiving unit 210 is realized by a communication device that communicates with the detection device 100 or the like via a network.
- the reception unit 210 communicates with the detection device 100 and receives sensing data transmitted from the detection device 100.
- the reception unit 210 outputs the received sensing data to the context information acquisition unit 230.
- the receiving unit 210 can also communicate with other devices via the network and receive other information, such as user profile information (hereinafter referred to as a user profile) used by the context information acquisition unit 230 and the content extraction unit 240 described below, and information related to content stored in those devices. Details of the user profile will be described later.
- the context information acquisition unit 230 analyzes the sensing data received by the reception unit 210 and generates context information regarding the user's state. Further, the context information acquisition unit 230 outputs the generated context information to the content extraction unit 240 or the storage 220. Details of analysis and generation of context information in the context information acquisition unit 230 will be described later.
- the context information acquisition unit 230 can also acquire the user profile received by the reception unit 210.
- the content extraction unit 240 extracts one or a plurality of contents, based on the context information, from a content group that can be used by the terminal device 300 (for example, content stored in the storage 220 of the server 200, content stored in another server accessible via the network, and/or local content stored in the terminal device 300). Furthermore, the content extraction unit 240 can output content information, which is information regarding the extracted content, to the output control unit 250 or the storage 220.
- the output control unit 250 controls the output of the extracted content to the user. Specifically, the output control unit 250 selects, based on the content information and the context information corresponding to it, the output method such as the output format used when presenting the content information to the user, the terminal device 300 that outputs it, and the output timing. Details of the selection of the output method by the output control unit 250 will be described later. Further, the output control unit 250 outputs the content information to the transmission unit 260 or the storage 220 based on the selected output method.
- the transmission unit 260 is realized by a communication device that communicates with the terminal device 300 or the like via a network.
- the transmission unit 260 communicates with the terminal device 300 selected by the output control unit 250 and transmits content information to the terminal device 300.
- the terminal device 300 may be a mobile phone (including a smartphone), a tablet, notebook, or desktop PC, a TV, a portable or stationary media player (including a music player, a video display device, etc.), a portable or stationary game machine, a wearable computer, or the like, and is not particularly limited.
- the terminal device 300 receives the content information transmitted from the server 200 and outputs it to the user.
- the function of the terminal device 300 may be realized by the same device as the detection device 100, for example. Further, when the system 10 includes a plurality of detection devices 100, some of them may realize the function of the terminal device 300.
- FIG. 4 is a diagram illustrating a schematic functional configuration of the terminal device 300 according to the first embodiment of the present disclosure.
- the terminal device 300 may include a receiving unit 350, an output control unit 360, and an output unit 370.
- the receiving unit 350 is realized by a communication device that communicates with the server 200 via a network, and receives content information transmitted from the server 200. Furthermore, the receiving unit 350 outputs the content information to the output control unit 360.
- the output control unit 360 is realized by software using, for example, a CPU or the like, and controls the output of the output unit 370 based on the content information.
- the output unit 370 is configured by a device that can output the acquired content information to the user.
- the output unit 370 can include, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display, an audio output device such as a speaker or headphones, and the like.
- the terminal device 300 may further include an input unit 330 that accepts user input and a transmission unit 340 that transmits information and the like from the terminal device 300 to the server 200 and the like. Specifically, for example, the terminal device 300 may change the output of the output unit 370 based on the input received by the input unit 330. In this case, the transmission unit 340 may transmit a signal requesting the server 200 to transmit new information based on the input received by the input unit 330.
- the detection device 100 may include a sensing unit 110 including a sensor that provides at least one piece of sensing data, as well as the context information acquisition unit 230 and the content extraction unit 240 (described above as part of the functional configuration of the server 200).
- the terminal device 300 can include an output unit 370 that outputs content, a context information acquisition unit 230, and a content extraction unit 240.
- the system 10 does not necessarily include the server 200.
- when the detection device 100 and the terminal device 300 are realized by the same device, the system 10 may be completed inside that device.
- the server 200 analyzes information including the sensing data related to the user's state detected by the detection device 100, and acquires context information indicating the user's state obtained from the analysis. Further, the server 200 extracts one or a plurality of contents from the content group based on the context information.
- FIG. 5 is a sequence diagram illustrating an information processing method according to the first embodiment of the present disclosure.
- in step S101, the sensing unit 110 of the detection apparatus 100 generates sensing data indicating the state of the user, and the transmission unit 130 transmits the sensing data to the server 200.
- the generation and transmission of sensing data may be performed periodically, for example, or may be performed when the user is determined to be in a predetermined state based on other sensing data.
- when the sensing unit 110 includes a plurality of types of sensors, the generation and transmission of sensing data may be performed collectively or at different timings for each sensor.
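- The periodic or event-driven transmission described in step S101 might look like the following sketch, where the endpoint URL, the JSON transport, and the heart-rate threshold are all assumptions made for illustration.

```python
import json
import time
import urllib.request

SERVER_URL = "http://server.example/sensing"   # placeholder endpoint, not from the disclosure

def send_sensing_data(record: dict) -> None:
    """POST one sensing-data record to the server 200 as JSON (illustrative transport)."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def sensing_loop(read_heart_rate, period_s=5.0, check_s=0.5, alert_bpm=110):
    """Transmit periodically, and also immediately whenever a reading suggests
    the user entered a predetermined state (here: an elevated heart rate)."""
    last_sent = 0.0
    while True:
        bpm = read_heart_rate()
        now = time.time()
        record = {"sensor_type": "heart_rate", "value": bpm, "timestamp": now}
        if bpm >= alert_bpm or now - last_sent >= period_s:
            send_sensing_data(record)
            last_sent = now
        time.sleep(check_s)
```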
- in step S102, the receiving unit 210 of the server 200 receives the sensing data transmitted from the detection device 100.
- the context information acquisition unit 230 acquires the received sensing data.
- the sensing data may be received by the reception unit 210, temporarily stored in the storage 220, and read by the context information acquisition unit 230 as necessary.
- step S103 may be executed as necessary, and the receiving unit 210 may acquire a user profile that is information about the user via the network.
- the user profile can include, for example, information on the user's preferences (an interest graph), information on the user's friendships (a social graph), the user's schedule, image data such as images of the user's face, and feature data of the user's voice.
- the context information acquisition unit 230 can also acquire various information other than the user profile such as traffic information and a broadcast program guide via the Internet. Note that the processing order of step S102 and step S103 is not limited to this, and may be simultaneous or reversed.
- the context information acquisition unit 230 analyzes the sensing data, generates context information indicating the state of the user, and outputs the generated context information to the content extraction unit 240.
- the context information acquisition unit 230 may generate context information including a keyword corresponding to the acquired sensing data (a keyword expressing a motion in the case of sensing data related to a motion, a keyword expressing the user's emotion corresponding to a voice in the case of sensing data related to the user's voice, a keyword expressing the user's emotion corresponding to biological information in the case of sensing data related to the user's biological information, and so on).
- the context information acquisition unit 230 may also generate context information including index values that represent the user's emotion, obtained by analyzing the sensing data, on a plurality of axes such as an excitement-calmness axis and a joy-sorrow axis. Furthermore, the context information acquisition unit 230 may generate context information that expresses individual emotions as separate index values (for example, excitement 80, calmness 20, joy 60, and so on).
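- A minimal sketch of how biosensor readings could be turned into index values on such emotion axes is shown below; the thresholds and weights are assumptions, chosen only so that the example reproduces index values like excitement 80 and joy 60 mentioned above.

```python
def emotion_indices(heart_rate_bpm: float, skin_conductance_us: float,
                    smile_score: float) -> dict:
    """Very rough illustrative mapping from biosensor readings to 0-100 index values
    on two emotion axes. The formulas are assumptions, not disclosed values."""
    def clamp(x: float) -> float:
        return max(0.0, min(100.0, x))

    # Arousal axis: faster pulse and higher skin conductance -> more "excitement".
    excitement = clamp((heart_rate_bpm - 60.0) * 2.0 + skin_conductance_us * 5.0)
    calmness = 100.0 - excitement

    # Valence axis: a smile detected by an imaging device -> more "joy".
    joy = clamp(smile_score * 100.0)
    sorrow = 100.0 - joy

    return {"excitement": excitement, "calmness": calmness, "joy": joy, "sorrow": sorrow}

# Example: an excited, happy viewer (yields excitement 80, calmness 20, joy 60).
print(emotion_indices(heart_rate_bpm=95, skin_conductance_us=2.0, smile_score=0.6))
```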
- the context information acquisition unit 230 may generate context information including specific user position information.
- the context information acquisition unit 230 may also generate context information including specific information about persons or terminal devices 300 around the user.
- the context information acquisition unit 230 may associate the generated context information with a time stamp based on the time stamp of the sensing data, or may associate it with a time stamp corresponding to the time when the context information was generated.
- the context information acquisition unit 230 may refer to the user profile when analyzing the sensing data. For example, the context information acquisition unit 230 may collate position information included in the sensing data with a schedule included in the user profile to specify the particular place where the user is located. In addition, the context information acquisition unit 230 can analyze voice information included in the sensing data with reference to the feature data of the user's voice included in the user profile. Further, the context information acquisition unit 230 may generate context information including keywords obtained by analyzing the acquired user profile (keywords corresponding to the user's preferences, names of the user's friends, etc.). In addition, the context information acquisition unit 230 may generate context information including an index value indicating the depth of the user's friendships, or the user's action schedule information.
- in step S105, the content extraction unit 240 extracts one or a plurality of contents from the contents that can be acquired via the network, based on the context information generated by the context information acquisition unit 230. Then, the content extraction unit 240 outputs content information, which is information about the extracted content, to the output control unit 250 or the storage 220.
- the content extraction unit 240 extracts, for example, content whose substance suits the user's state expressed by keywords or the like included in the context information. At this time, the content extraction unit 240 can also extract content in a format (text file, still image file, moving image file, audio file, etc.) suited to the user's position information included in the context information and to the terminal device 300 used by the user. Furthermore, the content extraction unit 240 may calculate a fitness indicating the degree of compatibility between each extracted content and the context information used in the extraction, and output the calculated fitness as part of the content information of each content.
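- Step S105 can be sketched as a simple keyword-overlap search with a format filter and a fitness score, as below. The catalog schema ("tags", "format", "title") and the scoring rule are assumptions for illustration.

```python
from typing import List

def extract_content(catalog: List[dict], keywords: List[str],
                    allowed_formats: List[str], max_items: int = 5) -> List[dict]:
    """Score each catalog entry by keyword overlap with the context information,
    keep only formats the selected terminal device can output, and attach the
    score as "fitness"."""
    results = []
    for item in catalog:
        if item["format"] not in allowed_formats:
            continue
        overlap = len(set(keywords) & set(item["tags"]))
        if overlap == 0:
            continue
        fitness = overlap / len(keywords)
        results.append({**item, "fitness": fitness})
    results.sort(key=lambda it: it["fitness"], reverse=True)
    return results[:max_items]

catalog = [
    {"title": "Exciting soccer goals", "tags": ["soccer", "excitement"], "format": "video"},
    {"title": "Calm piano playlist",   "tags": ["music", "relaxation"],  "format": "audio"},
]
print(extract_content(catalog, keywords=["soccer", "excitement"], allowed_formats=["video"]))
```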
- in step S106, the output control unit 250 selects the output method used when the content information is output to the user, the terminal device 300 that outputs it, the timing of the output, and the like, and outputs information related to the selection to the transmission unit 260 or the storage 220.
- the output control unit 250 performs the selection based on the content information and context information related to the content information.
- the output control unit 250 selects the content output method, such as whether to have an agent recommend the most suitable content, whether to directly output the extracted content such as video or audio, or whether to output a list in which the content titles are arranged. For example, when the output control unit 250 outputs a list in which content titles and the like are arranged, the information on each content may be arranged in order according to the calculated fitness, or may be arranged on a different basis, for example, by reproduction time or the like. In addition, the output control unit 250 selects one or more of the terminal devices 300 as output terminals that output the content information.
- for example, the output control unit 250 identifies the terminal devices 300 located around the user based on the context information, and selects, from the extracted content, content having a format or size that can be output by those terminal devices 300. Further, for example, the output control unit 250 selects the timing for outputting the content information based on the user's action schedule information included in the context information, or determines a content playback volume that matches the user's surrounding environment based on the user's position information.
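- The selection in step S106 might, for example, be realized by simple rules over the context information, as in the following sketch; the device names, locations, and rules are assumptions.

```python
def select_output(context: dict, content: dict) -> dict:
    """Illustrative sketch of step S106: pick an output terminal, a playback volume,
    and an output timing from the context information."""
    devices = context.get("nearby_devices", [])

    # Prefer a large screen for video content when one is available nearby.
    if content["format"] == "video" and "tv_living_room" in devices:
        terminal = "tv_living_room"
    elif devices:
        terminal = devices[0]
    else:
        terminal = "smartphone"

    # Match the playback volume to the user's surroundings.
    volume = 0.2 if context.get("location") in ("train", "office") else 0.6

    # Defer the output until a break if the user is in the middle of something.
    timing = "at_next_break" if context.get("activity") == "watching_live_match" else "now"

    return {"terminal": terminal, "volume": volume, "timing": timing}

ctx = {"nearby_devices": ["tv_living_room", "smartphone"],
       "location": "living room", "activity": "watching_live_match"}
print(select_output(ctx, {"title": "Exciting soccer goals", "format": "video"}))
```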
- in step S107, the transmission unit 260 communicates with the terminal device 300 via the network and transmits the content information based on the selection made by the output control unit 250.
- in step S108, the receiving unit 350 of the terminal device 300 receives the content information. Then, the output control unit 360 controls the output unit 370 based on the received content information.
- in step S109, the output unit 370 is controlled by the output control unit 360 and outputs the content information (for example, the content entity itself or information such as its title) to the user.
- the server 200 can acquire information on content viewed by the user as the viewing history of the user.
- the server 200 includes a history acquisition unit (not shown in FIG. 3), and the history acquisition unit can acquire information on the user's preferences by learning from the acquired viewing history. The acquired preference information can then be used in the next content extraction.
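- One very simple way the history acquisition unit could learn preferences is by counting the tags of viewed content, as sketched below under an assumed history schema.

```python
from collections import Counter
from typing import Iterable, List

def learn_preferences(viewing_history: Iterable[dict], top_n: int = 3) -> List[str]:
    """Count the tags of content the user actually watched and treat the most
    frequent ones as preferences. The schema and counting model are assumptions."""
    counts = Counter()
    for entry in viewing_history:
        counts.update(entry.get("tags", []))
    return [tag for tag, _ in counts.most_common(top_n)]

history = [
    {"title": "Exciting soccer goals", "tags": ["soccer", "excitement"]},
    {"title": "Soccer tactics explained", "tags": ["soccer", "tactics"]},
    {"title": "Variety show highlights", "tags": ["variety"]},
]
print(learn_preferences(history))   # e.g. ['soccer', 'excitement', 'tactics']
```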
- the server 200 can acquire a user's evaluation on the extracted content.
- the input unit 330 provided in the terminal device 300 receives an input from the user, and the evaluation is transmitted from the terminal device 300 to the server 200.
- the server 200 further includes an evaluation acquisition unit (not shown in FIG. 3), and the evaluation acquisition unit acquires information on the user's preference by accumulating and learning the evaluation.
- the server 200 may accept an input of a keyword for extraction from the user.
- the timing of accepting may be before the content is extracted or after the content information of the extracted content is output to the user.
- the apparatus that receives the input can be an input unit of the server 200, the sensing unit 110 of the detection apparatus 100, or the like, and is not particularly limited.
- FIGS. 6 and 7 are explanatory diagrams for explaining the first embodiment.
- in the first embodiment, as shown in FIG. 6, it is assumed that the user is watching a soccer broadcast on the TV in the living room at home.
- the smartphone 100a and the wristwear 100b carried by the user function as the detection device 100.
- the smartphone 100a detects position information indicating that the user is in the living room from a communicable Wi-Fi access point 100d and radio wave intensity, and transmits sensing data based on the detection to the server 200.
- based on the sensing data, the server 200 separately accesses, via the Internet or the like, the TV 300a that is specified as being in the living room by information registered by the user, and can acquire information on the state of the TV 300a (power state, channel being received, etc.).
- the context information acquisition unit 230 of the server 200 can thus grasp the state in which the user is in the living room, the TV 300a is the terminal device 300 located around the user, the TV 300a is turned on, and channel 8 is being received.
- the context information acquisition unit 230 acquires the program guide of the channel 8 that can be used on the network via the reception unit 210.
- based on the acquired information, the context information acquisition unit 230 can specify that the program estimated to be viewed by the user is a live broadcast of a soccer game, the names of the soccer teams playing in the game, the start date and time of the game, and so on.
- the acceleration sensor included in the wristwear 100b transmits sensing data indicating the change in acceleration generated by pushing up the arm to the server 200.
- the context information acquisition unit 230 specifies that the user's “push up arm” operation has occurred by analyzing the transmitted sensing data.
- since the "push up arm" motion has occurred in the already identified context of "watching the soccer broadcast", the context information acquisition unit 230 generates context information indicating that the user is excited while watching the soccer broadcast and has pushed up an arm.
- the content extraction unit 240 extracts content such as “a scene of an exciting soccer game” based on the generated context information.
- the content extraction unit 240 may extract the content by using keywords such as "soccer" and "excitement" included in the context information, or may extract the content by using a feature vector indicating the type of sport or the features of a scene.
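- When a feature vector is used instead of keywords, the extraction can be sketched as a nearest-neighbour search with cosine similarity; the three-dimensional scene features below are purely illustrative assumptions.

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors (0 when either vector is all zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical 3-dimensional scene features: [is_soccer, crowd_noise, scoring_chance].
query_scene = [1.0, 0.9, 0.8]          # "an exciting soccer scene" derived from the context
candidates = {
    "Last-minute goal":   [1.0, 0.8, 0.9],
    "Quiet cooking show": [0.0, 0.1, 0.0],
}
best = max(candidates, key=lambda name: cosine_similarity(query_scene, candidates[name]))
print(best)   # -> "Last-minute goal"
```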
- since the content extraction unit 240 can grasp from the context information that the user is watching the soccer broadcast on the TV 300a in the living room, the content extraction unit 240 limits the content to be extracted to moving images having a size suitable for output on the TV 300a.
- a plurality of contents are extracted by the content extraction unit 240 as contents suitable for the user's state indicated by the context information.
- a fitness indicating the degree of fitness between each extracted content and the context information used in the extraction is calculated, and the calculated fitness is included in the content information regarding each content.
- the output control unit 250 selects to output the content information in a list format in the TV 300a.
- the output control unit 250 of the server 200 selects to output a list in which the titles and thumbnails of the contents are arranged in order of the fitness of each content (for example, the content information of content with a high fitness is shown at the top).
- the output control unit 250 refers to the information related to the soccer relay, and selects to output the list at the timing when the first half of the game ends and the half time starts.
- a list (LIST) in which the titles and thumbnails of the extracted contents are arranged is displayed on the screen of the TV 300a when half time starts. Further, when the user selects the content that he or she wants to view from the list, the selected content is reproduced. In this case, the content selection by the user is input via a remote controller of the TV 300a (an example of the input unit 330 of the terminal device 300) or the like.
- as described above, in the present embodiment, the wristwear 100b detects a user motion that is difficult to express in words, such as the motion of pushing up an arm, and the server 200 can extract content according to that motion. At this time, the state in which the user is watching the soccer broadcast on the TV 300a in the living room is also grasped from the position information provided by the smartphone 100a and the information provided by the TV 300a, so that more appropriate content can be extracted.
- in other words, content is extracted with, as a trigger, the detection of a motion that the user performed without intending to trigger content extraction, and the extraction and output take into account the terminal device 300 (the TV 300a) and the output state of the terminal device 300 (the soccer broadcast is being output and the game will soon enter halftime).
- when the user looks at the list and wants to extract a moving image of a certain player from the content listed there, the user may input a keyword for content extraction (for example, the player's name).
- the user can input the keyword by operating the smartphone 100a carried by the user. That is, in this case, the smartphone 100a functions as the detection device 100 that provides the user's position information, and also functions as the terminal device 300 that receives the user's operation input.
- in the server 200 that has received the input keyword, the content extraction unit 240 further extracts one or more contents that match the keyword from the plurality of already extracted contents.
- the server 200 can perform extraction using a keyword in addition to context information obtained by analyzing sensing data, and can extract content more appropriate for the user.
- the context information acquisition unit 230 can identify the meaning intended by the user by analyzing the context information obtained from the sensing data together with the keyword. Specifically, when the keyword "interesting" is input by the user, "interesting" can carry several meanings, such as "funny" and "intellectually interesting".
- for example, the context information acquisition unit 230 analyzes the user's brain waves detected by a biosensor mounted on the user's head, and grasps the user's context that "the user is concentrating". In this case, the server 200 specifies, based on the context information "the user is concentrating", that the meaning the user intends by the keyword "interesting" is "intellectually interesting", and extracts content corresponding to that meaning.
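- Such keyword disambiguation could be sketched as choosing the sense whose contextual cues overlap most with the context information, as below; the sense inventory and cue sets are assumptions.

```python
def disambiguate(keyword: str, context_keywords: set) -> str:
    """Pick the meaning of an ambiguous keyword that is most consistent with the
    context information. The sense inventory below is an illustrative assumption."""
    senses = {
        "interesting": {
            "funny":                      {"laughing", "relaxed"},
            "intellectually_interesting": {"concentrating", "studying"},
        },
    }
    best_sense, best_overlap = keyword, -1
    for sense, cues in senses.get(keyword, {}).items():
        overlap = len(cues & context_keywords)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Brain-wave analysis produced the context "the user is concentrating".
print(disambiguate("interesting", {"concentrating"}))   # -> "intellectually_interesting"
```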
- FIG. 8 is an explanatory diagram for explaining the second embodiment.
- the second embodiment assumes a case where user A is chatting with a friend, user B, while watching a soccer broadcast on the TV in the living room of user A.
- the faces of the users A and B are photographed by the imaging device 100c, which corresponds to the detection device 100 and is installed in the living room of the user A.
- the imaging apparatus 100c transmits sensing data including position information of the imaging apparatus 100c and face images of the users A and B to the server 200.
- the context information acquisition unit 230 identifies that the face images included in the transmitted sensing data are the face images of the users A and B by referring to the face image data included in the user profiles acquired via the network.
- based on the position information of the imaging device 100c, the context information acquisition unit 230 also grasps that the users A and B are in the living room of the user A.
- furthermore, the context information acquisition unit 230 grasps that the user A and the user B are having a chat, based on the moving images of the motions of the users A and B (for example, their faces sometimes facing each other) transmitted from the imaging device 100c.
- the context information acquisition unit 230 acquires user profiles including the interest graphs of the users A and B via the network. Based on the acquired interest graphs, the context information acquisition unit 230 grasps the preferences of the users A and B (for example, "user A enjoys watching variety programs", "user A's favorite group is ABC37", "a fun way for user B to spend time is playing soccer", and so on).
- the Wi-Fi access point 100d installed in the living room of the user A communicates with the TV 300b in the living room and the projector 300c that projects an image on the wall surface of the living room.
- through this communication, the context information acquisition unit 230 of the server 200 can specify that the TV 300b and the projector 300c are available terminal devices 300.
- suppose that the user A finds something amusing and laughs in the context described above (the users A and B are chatting).
- the microphone 100e installed in the living room together with the imaging device 100c detects the laughter and transmits sensing data including voice data of the laughter to the server 200.
- the context information acquisition unit 230 refers to the voice feature information included in the acquired user profile, and specifies that the laughing voice of the user A is included in the transmitted sensing data.
- the context information acquisition unit 230, having identified the person who uttered the laughing voice, refers to information included in the user profile about the correlation between the user A's voice and emotions (for example, a loud laughing voice indicates joy, a sobbing voice indicates sadness, and so on), and generates context information including a keyword (for example, "fun") indicating the emotion of the user A when laughing.
- the laughing voice of the user A is detected by the microphone 100e.
- however, the sound detected by the microphone 100e is not limited to laughter; it may be a cheer such as "Wow!", a sound made through the nose, a coughing sound, a speech voice, or the like.
- the microphone 100e may detect a sound caused by the operation of the user B.
- the content extraction unit 240 of the server 200 can extract content by two methods.
- in the first method, the content extraction unit 240 extracts, for example, the contents of variety programs in which "ABC37" appears, using the keyword "fun" included in the context information and the preferences of the user A ("user A enjoys watching variety programs", "user A's favorite group is ABC37").
- in the second method, in addition to the plurality of pieces of information used in the first method, the content extraction unit 240 also uses the preference of the user B included in the context information ("user B enjoys playing soccer") to extract the content.
- the extracted content includes, for example, a variety program content related to soccer such as a variety program in which soccer players and “ABC37” appear and a variety program in which “ABC37” challenges soccer.
- the content extraction unit 240 may extract content using either the first method or the second method described above, or may extract content using both methods.
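- The second method can be sketched as scoring each candidate against the mood keywords plus the preferences of every user present, so that content both users can enjoy ranks highest. The schemas and the additive scoring below are assumptions.

```python
from typing import Dict, List

def extract_for_group(catalog: List[dict], preferences: Dict[str, set],
                      mood_keywords: set) -> List[dict]:
    """Score each content item by its overlap with the mood keywords and with the
    preferences of every user present, then rank by the total score."""
    scored = []
    for item in catalog:
        tags = set(item["tags"])
        score = len(tags & mood_keywords)
        for user, prefs in preferences.items():
            score += len(tags & prefs)
        if score:
            scored.append({**item, "score": score})
    return sorted(scored, key=lambda it: it["score"], reverse=True)

catalog = [
    {"title": "ABC37 variety special",      "tags": {"variety", "ABC37"}},
    {"title": "ABC37 tries playing soccer", "tags": {"variety", "ABC37", "soccer"}},
]
prefs = {"user_a": {"variety", "ABC37"}, "user_b": {"soccer"}}
print(extract_for_group(catalog, prefs, mood_keywords={"fun", "variety"}))
```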
- the server 200 recognizes that the TV 300b is activated by communicating with the TV 300b via the Wi-Fi access point 100d.
- the context information acquisition unit 230 generates context information that further includes information indicating that the users A and B are watching the TV 300b.
- the output control unit 250 selects the projector 300c as the terminal device 300 that outputs the content information, so as not to disturb viewing of the TV 300b. Further, the output control unit 250 selects to display, by the projector 300c, a list including the title of each moving image and a still image of a representative scene of each moving image.
- furthermore, the output control unit 250 selects to output the content information extracted by each method separately. Specifically, as shown in FIG. 8, the projector 300c can project images on two wall surfaces W1 and W2 near the TV 300b in the living room. Therefore, the output control unit 250 decides to project the content information of the variety programs extracted by the first method on the right wall surface W1, and the content information of the soccer-related variety programs extracted by the second method on the left wall surface W2.
- the output control unit 250 also refers to information such as the first broadcast date and time associated with each extracted content, and arranges the content on the wall surfaces W1 and W2 in order, starting from the position closest to the TV 300b.
- the latest content information is projected on the portion closest to the TV on the wall surfaces W1 and W2.
- the oldest content information is projected on the part farthest from the TV on the wall surfaces W1, W2.
- the content information (INFO) of recommended content may also be displayed small in the upper left part of the screen of the TV 300b, as shown in FIG. 8.
- when the user A selects the content that he or she wants to view from the projected content information, the selected content is reproduced on the screen of the TV 300b.
- the user A may select the content with a controller that can point to positions in the images projected on the wall surfaces W1 and W2, or may select the content by voice input, reading out the content title or the like.
- in the case of voice input, the voice of the user A may be detected by the microphone 100e.
- as described above, in the present embodiment, the context information acquisition unit 230 can analyze the sensing data more accurately because it refers, during the analysis, to the user profile including information about the relationship between the user's behavior and emotions. Furthermore, since the content is extracted based on the preference information of the user B included in the user profile, content that the users A and B can enjoy together can be extracted.
- FIGS. 9 and 10 are explanatory diagrams for explaining the third embodiment.
- the third embodiment assumes that the user is on a train and watching the screen of the smartphone 100f while listening to music.
- the user carries a smartphone 100f as the detection device 100, and the smartphone 100f detects the user's position information by a GNSS receiver included in the smartphone 100f, and transmits sensing data based on the detection to the server 200. Furthermore, the smartphone 100f communicates with the headphones 300d worn by the user via Bluetooth (registered trademark), and transmits an audio signal for outputting music to the headphones 300d. The smartphone 100f transmits information indicating that the user is using the headphones 300d to the server 200 together with the position information.
- the context information acquisition unit 230 acquires a user profile including schedule information from the network via the reception unit 210, in addition to the information transmitted from the smartphone 100f as described above. Then, based on the user's position information received from the smartphone 100f and the user's schedule information, the context information acquisition unit 230 grasps that the user is on a train (more specifically, that the user is commuting and is on subway line 3). Furthermore, the context information acquisition unit 230 analyzes the information included in the sensing data and grasps the state in which the user is using the headphones 300d together with the smartphone 100f.
- the context information acquisition unit 230 analyzes the image and identifies that the user's facial expression is a “joyful facial expression”. Furthermore, the context information acquisition unit 230 generates context information including a keyword (for example, “happy”) corresponding to the user's emotion expressed by such a facial expression.
- the above keyword is not limited to one that directly expresses the emotion shown by the user's facial expression; for example, if the user has a sad facial expression, the keyword may be "encourage".
- the content extraction unit 240 extracts content that can be output by the smartphone 100f based on the keyword "happy" included in the context information. Further, at the time of the above extraction, the content extraction unit 240 may recognize from the schedule information included in the user profile that 10 minutes remain until the user gets off the train and, in the case of moving images or audio, limit the extraction to content whose playback time is within 10 minutes. As a result, the content extraction unit 240 extracts, for example, a blog post in which the user recorded a happy event, a news site carrying a happy article, and music data of a song that made the user feel happy. The server 200 then outputs content information (title, format, etc.) regarding the extracted content.
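- The playback-time restriction described above can be sketched as a simple filter against the remaining time computed from the schedule information; the schema and times below are assumptions.

```python
from datetime import datetime, timedelta
from typing import List

def filter_by_remaining_time(candidates: List[dict], now: datetime,
                             get_off_at: datetime) -> List[dict]:
    """Keep only content whose playback finishes before the user gets off the train,
    using the time taken from the schedule information in the user profile."""
    remaining = get_off_at - now
    return [c for c in candidates
            if timedelta(seconds=c.get("duration_s", 0)) <= remaining]

now = datetime(2015, 12, 1, 8, 20)
get_off_at = datetime(2015, 12, 1, 8, 30)          # 10 minutes remaining
candidates = [
    {"title": "Happy news article (read-aloud)", "duration_s": 240},
    {"title": "Full concert video",              "duration_s": 3600},
]
print(filter_by_remaining_time(candidates, now, get_off_at))   # only the 4-minute item
```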
- the output control unit 250 refers to the information of the available terminal device 300 included in the context information, and selects the smartphone 100f as the terminal device 300 that outputs the content information. That is, in the present embodiment, the smartphone 100f functions as both the detection device 100 and the terminal device 300.
- the content information transmitted from the server 200 is displayed on the screen of the smartphone 100f.
- for example, an agent is displayed on the screen of the smartphone 100f, and the agent recommends the extracted content (for example, a character is displayed on the screen, and a speech balloon of the character displays "Jimmy's site is recommended!").
- the user can reproduce the desired content by operating the smartphone 100f. Further, the user may input an evaluation of the reproduced content by operating the smartphone 100f, and may input not only an evaluation of the content but also an evaluation of the content output method (output timing, etc.).
- when there is little time remaining until the user gets off the train, only music data may be extracted and output so as not to disturb the user's transfer.
- the music data is output from the headphones 300d via the smartphone 100f.
- for example, when the user is driving a car, only content that can be played back by a speaker installed in the car may be extracted.
- as described above, in the present embodiment, the server 200 can extract and output content corresponding to the user's action schedule information obtained by analyzing the user profile. Since the content is extracted and output according to the user's situation, the user can enjoy the content more comfortably.
- FIG. 11 is an explanatory diagram for explaining the fourth embodiment.
- user A is having a break with friends (friends B, C, D) in a school classroom.
- the user A carries the smartphone 100g as the detection device 100, and the position information of the user A is detected by the smartphone 100g. Furthermore, the smartphone 100g communicates with the smartphones 100h, 100i, and 100j carried by the friends B, C, and D around the user A via Bluetooth (registered trademark), thereby detecting the smartphones 100h, 100i, and 100j.
- the smartphone 100g transmits information indicating the detected other terminal devices (that is, the smartphones 100h, 100i, and 100j) to the server 200. Further, the smartphone 100g transmits the position information of the user A acquired by the GNSS receiver, the Wi-Fi communication device, or the like to the server 200.
- the context information acquisition unit 230 grasps the state in which the user A is in the school classroom based on the position information received from the smartphone 100g. Furthermore, the context information acquisition unit 230 recognizes the smartphones 100h, 100i, and 100j as other terminal devices located around the user A based on the information received from the smartphone 100g. In addition, the server 200 may refer, via the network, to the account information associated with each of the smartphones described above, and specify the friends B, C, and D, who are the owners of the smartphones 100h, 100i, and 100j, as the persons around the user A.
- the context information acquisition unit 230 acquires a user profile including the schedule information of the user A from the network via the reception unit 210, in addition to the information transmitted from the smartphone 100g as described above. From the schedule information, the context information acquisition unit 230 can also grasp the context that the user A is on a break.
- furthermore, the context information acquisition unit 230 may extract, from the social graph included in the user profile of the user A, information on the friends B, C, and D specified as the persons around the user A. More specifically, based on the acquired social graph, the context information acquisition unit 230 generates context information including information on the friendship between the user A and the friends B, C, and D, such as the degree of familiarity expressed as an index value (for example, 5 for best friends and family, 4 for classmates, and a lower value for less close acquaintances) and the relationship itself.
- the content extraction unit 240 may extract content reflecting the friendship between the user A and the friends B, C, and D. Specifically, for example, when it is recognized from the friendship information that the friends B, C, and D are not particularly close to the user A, the content extraction unit 240 does not extract the private content of the user A (for example, a moving image of the user A taken with a home video camera). Note that, if the friends B, C, and D are particularly close to the user A, the content extraction unit 240 may extract private content of the user A that has been designated as allowed to be disclosed.
- further, disclosure level information describing the disclosure level permitted by the user A for each person (for example, a content disclosure range defined per person, such that private content is disclosed to a friend E but not to a friend F) may be created in advance, and content may be extracted according to the disclosure level information.
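- Filtering private content by familiarity or disclosure level could be sketched as follows; the index values follow the example given above, while the schema and the threshold are assumptions.

```python
from typing import Dict, List

def filter_private_content(candidates: List[dict], audience: List[str],
                           familiarity: Dict[str, int],
                           min_familiarity_for_private: int = 5) -> List[dict]:
    """Drop the owner's private content unless every person present is familiar enough
    (e.g. familiarity 5 = best friend or family, following the example index values)."""
    everyone_close = all(familiarity.get(p, 0) >= min_familiarity_for_private
                         for p in audience)
    return [c for c in candidates if not c.get("private") or everyone_close]

candidates = [
    {"title": "Pro tennis shot compilation", "private": False},
    {"title": "User A's tennis practice (home video)", "private": True},
]
familiarity = {"friend_b": 4, "friend_c": 4, "friend_d": 4}   # classmates, not best friends
print(filter_private_content(candidates, ["friend_b", "friend_c", "friend_d"], familiarity))
```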
- the acceleration sensor included in the wristwear 100m worn on the arm of the user A transmits sensing data indicating the change in acceleration generated by this motion to the server 200.
- the context information acquisition unit 230 specifies that the user A has performed a tennis shot operation by analyzing the transmitted sensing data. Furthermore, the context information acquisition unit 230 generates context information including keywords (for example, “tennis” and “shot”) corresponding to the above-described operation of the user A.
- the content extraction unit 240 extracts a tennis shot moving image based on the keywords "tennis" and "shot" included in the context information and the terminal device information, and outputs content information about the extracted moving image.
- note that, as described above, a moving image or the like of tennis played by the user A taken with a home video camera is not extracted. In this embodiment, it is assumed that one moving image has been extracted.
- the output control unit 250 refers to the terminal device information included in the context information, and selects the smartphones 100g, 100h, 100i, and 100j as the terminal devices 300 that output the content information. More specifically, since there is one extracted moving image, the output control unit 250 selects to display this moving image on the smartphone 100g carried by the user A and, at the same time, on the smartphones 100h, 100i, and 100j.
- in this way, the server 200 performs the generation of context information and the content extraction processing with the acquisition of the sensing data as a trigger, and the extracted content is output to the user A and the friends B, C, and D. Further, when a new state of the user A or the like is detected, the server 200 extracts new content according to the newly detected state.
- the content information is output to each smartphone at the same time.
- the present invention is not limited to this, and the content information may be displayed on each smartphone at different timings.
- for example, when the smartphone 100i is being operated by its owner, it may display the content information at a timing different from that of the other smartphones, after it is confirmed that the operation has been completed.
- the user A may input the timing to display on each smartphone and the content to be viewed by operating the smartphone 100g.
- when the friend D among the surrounding friends carries a feature phone, the display can be performed as follows: for example, on the feature phone of the friend D, content consisting of text and still images corresponding to the content displayed on each smartphone may be displayed, according to the capability of the feature phone's screen display function.
- as described above, in the present embodiment, the content information can be output not only to the smartphone 100g carried by the user A but also to the smartphones carried by the nearby friends, so the content can be shared with them. Furthermore, since the server 200 extracts the content according to the friendship information of the user A, private videos that the user A does not want to show to friends are not displayed on the friends' smartphones, and the user A can enjoy the content with peace of mind.
- in the second embodiment, context information indicating the state of the user is separately used as meta information of the content corresponding to that context information.
- This meta information is used, for example, when extracting content described in the first embodiment. That is, in this embodiment, when extracting content, meta information (corresponding to past content information) associated with the content and context information are used (for example, meta information and context information are collated, Or compare). Therefore, it becomes possible to extract the content more suited to the user's state.
- the system according to the second embodiment includes a detection device 100, a terminal device 300, and a server 400.
- the functional configurations of the detection device 100 and the terminal device 300 are the same as those in the first embodiment, and thus description thereof is omitted here.
- FIG. 12 shows a schematic functional configuration of the server 400 according to the second embodiment.
- the server 400 according to the second embodiment, like the server 200 according to the first embodiment, may include the reception unit 210, the storage 220, the context information acquisition unit 230, the content extraction unit 240, and the transmission unit 260.
- the server 400 can also include a meta information processing unit 470.
- the context information acquisition unit 230, the content extraction unit 240, and the meta information processing unit 470 are realized in software using, for example, a CPU or the like.
- the meta information processing unit 470 associates the context information generated by the context information acquisition unit 230, as meta information, with the one or more contents extracted by the content extraction unit 240 based on that context information. Then, the meta information processing unit 470 can output the meta information based on the context information to the transmission unit 260 or the storage 220. Note that the reception unit 210, the storage 220, the context information acquisition unit 230, the content extraction unit 240, and the transmission unit 260 of the server 400 are the same as those in the first embodiment, and thus description thereof is omitted here.
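A minimal sketch of such an association step, under assumed data structures (the `ContentItem` class and the context-information dictionary are introduced here only for illustration), might look as follows.

```python
# Hypothetical sketch: attach the generated context information to
# each extracted content item as meta information, then hand the
# result to the transmission unit or the storage.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    content_id: str
    meta: list[dict] = field(default_factory=list)

def attach_meta(contents: list[ContentItem], context_info: dict) -> list[ContentItem]:
    """Associate the current context information with every extracted content."""
    for item in contents:
        item.meta.append(dict(context_info))  # copy so later contexts do not alias
    return contents

extracted = [ContentItem("concert_video_001"), ContentItem("concert_track_017")]
context_info = {"place": "outdoor_concert", "pulse_bpm": 120, "timestamp": "2015-08-01T19:30"}
for item in attach_meta(extracted, context_info):
    print(item.content_id, item.meta)
```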
- FIG. 13 is a sequence diagram illustrating an information processing method according to the second embodiment of the present disclosure.
- the information processing method according to the second embodiment will be described with reference to FIG. 13. First, steps S101 to S104 are executed. Since these steps are the same as those shown in FIG. 5 in the first embodiment, description thereof is omitted here.
- the content extraction unit 240 of the server 400 extracts, based on the generated context information, one or a plurality of contents corresponding to the context information from the enormous amount of content that can be acquired via the network. Specifically, the content extraction unit 240 extracts content such as moving images and music viewed by the user, based on the user's position information included in the context information, the terminal device information of the device used by the user, and the like. More specifically, the content extraction unit 240 may extract a moving image or the like associated with a time stamp indicating the same time as the time when the sensing data was acquired. Then, the server 400 outputs the content information regarding the extracted content to the meta information processing unit 470 or the storage 220.
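The time-stamp matching mentioned here could, for instance, be sketched as below; the catalogue contents and the ten-minute tolerance are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: extract content whose time stamp matches
# (or is close to) the time at which the sensing data was acquired.
from datetime import datetime, timedelta

catalogue = [
    {"id": "concert_video_001", "timestamp": datetime(2015, 8, 1, 19, 32)},
    {"id": "soccer_highlight_07", "timestamp": datetime(2015, 8, 1, 15, 0)},
]

def extract_by_timestamp(sensing_time: datetime,
                         tolerance: timedelta = timedelta(minutes=10)) -> list[dict]:
    """Return catalogue entries recorded around the sensing-data time."""
    return [c for c in catalogue
            if abs(c["timestamp"] - sensing_time) <= tolerance]

print(extract_by_timestamp(datetime(2015, 8, 1, 19, 30)))
# -> the concert video recorded two minutes after the sensing data
```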
- the meta information processing unit 470 associates the generated context information with the extracted content as meta information.
- the extracted content is associated not only with the information used in the extraction in step S205 but also with other information included in the context information (for example, biometric information of the user obtained by analyzing the sensing data). Then, the meta information processing unit 470 outputs the content information associated with the meta information based on the context information to the transmission unit 260 or the storage 220.
- in processing similar to that of the first embodiment (extraction of content to be output by the terminal device 300), the meta information associated with the content by the meta information processing unit 470 can be used.
- the content extraction unit 240 compares and collates the meta information associated with the content (including information corresponding to past context information) with the context information newly acquired by the context information acquisition unit 230. As a result, it is possible to extract content better suited to the user's state (context).
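One way to picture this collation is a simple keyword-overlap score between stored meta information and the newly acquired context information; the scoring rule and example data below are assumptions for illustration only.

```python
# Hypothetical sketch: collate meta information stored with past
# content against newly acquired context information and rank
# contents by a simple keyword-overlap score.
def matching_degree(meta_keywords: set[str], context_keywords: set[str]) -> float:
    """Fraction of the new context keywords covered by the stored meta information."""
    if not context_keywords:
        return 0.0
    return len(meta_keywords & context_keywords) / len(context_keywords)

stored_contents = {
    "tennis_lesson_video": {"tennis", "shot", "outdoor"},
    "piano_recital_video": {"music", "concert"},
}
new_context = {"tennis", "shot"}

ranked = sorted(stored_contents.items(),
                key=lambda kv: matching_degree(kv[1], new_context),
                reverse=True)
print(ranked[0][0])   # 'tennis_lesson_video'
```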
- FIG. 14 is an explanatory diagram for describing the fifth example.
- the user A is listening to music at an outdoor concert venue.
- the user A carries the smartphone 100p as the detection device 100, and the position information of the user A is detected by the smartphone 100p. Furthermore, the smartphone 100p transmits sensing data based on the detection to the server 400. Then, in the server 400, the context information acquisition unit 230 analyzes the acquired sensing data and grasps the concert that the user A is attending.
- a pulse sensor included in the wrist wear 100r attached to the wrist of the user A as the detection device 100 detects the pulse of the user A in an excited state and transmits sensing data to the server 400.
- the context information acquisition unit 230 analyzes the sensing data and generates context information including the user's pulse information.
- when it can be grasped, by analyzing the sensing data, that the friend B, who is a friend of the user A, was watching the same concert at the concert venue, this information may also be included in the context information.
- the content extraction unit 240 of the server 400 extracts one or a plurality of contents based on the information related to the specified concert and the time stamp of the sensing data. More specifically, the content extraction unit 240 extracts content related to the concert that is associated with a time stamp identical or close to the time indicated by the time stamp of the sensing data.
- the extracted content includes, for example, a moving image of the concert captured by the camera 510 installed at the concert venue and recorded in the content server 520, music data played at the concert, and tweets about the concert posted by the audience.
- the meta information processing unit 470 associates the previously generated context information with the extracted content as meta information. Further, the meta information processing unit 470 outputs the associated meta information.
- the pulse sensor 110s attached to the wrist of a user who is listening to music in the living room at home detects the pulse of the user in an excited state and transmits sensing data to the server 400.
- the context information acquisition unit 230 analyzes the sensing data and generates context information including the user's pulse information.
- the content extraction unit 240 compares and collates the pulse information included in the context information with the meta information of each content, and extracts content that matches the context information. More specifically, the content extraction unit 240 extracts, for example, music that the user was listening to at the concert venue and whose meta information records a pulse rate comparable to the pulse rate included in the context information.
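Unlike the keyword-based collation sketched earlier, this matching is numeric; a minimal sketch under assumed data (the stored pulse values and the 10 bpm tolerance are illustrative) could look like this.

```python
# Hypothetical sketch: pick the stored content whose meta information
# records a pulse rate closest to the one in the newly generated
# context information.
stored_meta = {
    "concert_song_A": {"pulse_bpm": 122},
    "concert_song_B": {"pulse_bpm": 95},
    "lullaby_track":  {"pulse_bpm": 60},
}

def match_by_pulse(current_bpm: float, tolerance: float = 10.0) -> list[str]:
    """Return content whose recorded pulse rate is comparable to the current one."""
    return [cid for cid, meta in stored_meta.items()
            if abs(meta["pulse_bpm"] - current_bpm) <= tolerance]

print(match_by_pulse(118))   # ['concert_song_A']
```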
- the server 400 can associate with content, as meta information, context information indicating a state of the user even when that state, such as the pulse of the user detected by the pulse sensor 110s, is difficult to express in words. Therefore, when content is extracted as in the first embodiment, meta information based on the context information can also be used for the extraction, so that content better suited to the user's state can be extracted.
- FIG. 15 is a block diagram for explaining a hardware configuration of the information processing apparatus.
- the illustrated information processing apparatus 900 can realize, for example, the detection device 100, the server 200, and the terminal device 300 in the above-described embodiments.
- the information processing apparatus 900 includes a CPU 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905.
- the information processing apparatus 900 may include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925. Further, the information processing apparatus 900 may include a sensor 935.
- the information processing apparatus 900 may include a processing circuit such as a DSP (Digital Signal Processor) instead of or in addition to the CPU 901.
- the CPU 901 functions as an arithmetic processing unit and a control unit, and controls all or a part of the operation in the information processing apparatus 900 according to various programs recorded in the ROM 903, the RAM 905, the storage apparatus 919, or the removable recording medium 927.
- the ROM 903 stores programs and calculation parameters used by the CPU 901.
- the RAM 905 primarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
- the CPU 901, the ROM 903, and the RAM 905 are connected to each other by a host bus 907 configured by an internal bus such as a CPU bus. Further, the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
- the input device 915 is a device operated by the user such as a button, a keyboard, a touch panel, and a mouse.
- the input device 915 may be, for example, a remote control device that uses infrared rays or other radio waves, or may be an external connection device 929 such as a smartphone that supports the operation of the information processing device 900.
- the input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs the input signal to the CPU 901. The user can input various data and instruct processing operations to the information processing apparatus 900 by operating the input device 915.
- the output device 917 is a device that can notify the user of the acquired information visually or audibly.
- the output device 917 can be, for example, a display device such as an LCD or an organic EL display, or an audio output device such as a speaker or headphones.
- the output device 917 outputs the result obtained by the processing of the information processing apparatus 900 as video such as text or an image, or outputs it as audio such as voice or sound.
- the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 900.
- the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, and the like.
- the storage device 919 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the drive 921 is a reader / writer for a removable recording medium 927 such as a magnetic disk, an optical disk, or a semiconductor memory, and is built in or externally attached to the information processing apparatus 900.
- the drive 921 reads information recorded on the attached removable recording medium 927 and outputs the information to the RAM 905.
- the drive 921 also writes records to the attached removable recording medium 927.
- the connection port 923 is a port for directly connecting a device to the information processing apparatus 900.
- the connection port 923 can be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, or the like.
- the connection port 923 may be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like.
- the communication device 925 is a communication interface configured with, for example, a communication device for connecting to the communication network 931.
- the communication device 925 can be, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
- the communication device 925 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various types of communication.
- the communication device 925 transmits and receives signals and the like to and from the Internet and other communication devices using a predetermined protocol such as TCP/IP, for example.
- the communication network 931 connected to the communication device 925 is a wired or wireless network, such as the Internet, a home LAN, infrared communication, or satellite communication.
- the sensor 935 includes various sensors such as a motion sensor, a sound sensor, a biological sensor, or a position sensor.
- the sensor 935 may include an imaging device.
- Each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed according to the technical level at the time of implementation.
- the embodiments of the present disclosure described above may include, for example, an information processing method executed by the information processing apparatus or system described above, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded. Further, the program may be distributed via a communication line (including wireless communication) such as the Internet.
- (1) An information processing apparatus including: a context information acquisition unit that acquires context information about the user's state obtained by analyzing information including at least one sensing data about the user; and a content extraction unit that extracts one or a plurality of contents from a content group based on the context information.
- (2) The information processing apparatus according to (1), wherein the at least one sensing data is provided by a motion sensor that detects an operation of the user.
- (3) The information processing apparatus according to (1) or (2), wherein the at least one sensing data is provided by a sound sensor that detects sound generated around the user.
- (4) The information processing apparatus according to any one of (1) to (3), wherein the at least one sensing data is provided by a biological sensor that detects biological information of the user.
- (5) The information processing apparatus according to any one of (1) to (4), wherein the at least one sensing data is provided by a position sensor that detects a position of the user.
- (6) The information processing apparatus according to any one of (1) to (5), wherein the information includes user profile information of the user.
- (7) The information processing apparatus according to any one of (1) to (6), further including an output control unit that controls output of the one or more contents to the user.
- (8) The information processing apparatus according to (7), wherein the output control unit controls the output of the one or more contents based on the context information.
- (9) The information processing apparatus according to (8), further including an output unit that outputs the one or more contents.
- (10) The information processing apparatus according to any one of (1) to (9), wherein the content extraction unit calculates a degree of matching between the one or more contents and the context information.
- (11) The information processing apparatus according to (10), further including an output control unit that controls output of the one or more contents to the user such that information indicating the one or more contents is arranged and output according to the degree of matching.
- (12) The information processing apparatus according to any one of (1) to (11), further including a meta information processing unit that associates meta information based on the context information with the one or more contents.
- (13) The information processing apparatus according to any one of (1) to (12), further including a sensor that provides the at least one sensing data.
- (14) An information processing method including: acquiring context information about the user obtained by analyzing information including at least one sensing data about the user; and extracting, by a processor, one or a plurality of contents from a content group based on the context information.
- (15) A program for causing a computer to realize: a function of acquiring context information about the user obtained by analyzing information including at least one sensing data about the user; and a function of extracting one or a plurality of contents from a content group based on the context information.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Physiology (AREA)
- Computer Hardware Design (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Surgery (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Information Transfer Between Computers (AREA)
- User Interface Of Digital Computer (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
1. First embodiment
1-1. System configuration
1-2. Functional configuration of detection device
1-3. Functional configuration of server
1-4. Functional configuration of terminal device
2. Information processing method
2-1. First example
2-2. Second example
2-3. Third example
2-4. Fourth example
3. Second embodiment
3-1. Functional configuration of server
3-2. Information processing method
3-3. Fifth example
4. Hardware configuration
5. Supplement
The first embodiment of the present disclosure will be described below. First, schematic functional configurations of the system and the respective devices according to the first embodiment of the present disclosure will be described with reference to the drawings.
FIG. 1 is a system diagram showing a schematic configuration of a system according to the first embodiment of the present disclosure. Referring to FIG. 1, the system 10 can include the detection device 100, the server 200, and the terminal device 300. The detection device 100, the server 200, and the terminal device 300 can communicate with one another via various wired or wireless networks. Note that the numbers of detection devices 100 and terminal devices 300 included in the system 10 are not limited to the numbers illustrated in FIG. 1, and may be larger or smaller.
The detection device 100 may be, for example, a wearable device worn on a part of the user's body, such as eyewear, wrist wear, or a ring-type terminal. Alternatively, the detection device 100 may be an independent, fixedly installed camera, microphone, or the like. Furthermore, the detection device 100 may be included in a device carried by the user, such as a mobile phone (including a smartphone), a tablet or notebook PC (Personal Computer), a portable media player, or a portable game machine. The detection device 100 may also be included in a device installed around the user, such as a desktop PC or TV, an installed media player, an installed game machine, or an installed telephone. Note that the detection device 100 does not necessarily have to be included in a terminal device.
FIG. 3 is a diagram showing a schematic functional configuration of the server 200 according to the first embodiment of the present disclosure. Referring to FIG. 3, the server 200 can include the reception unit 210, the storage 220, the context information acquisition unit 230, the content extraction unit 240, the output control unit 250, and the transmission unit 260. The context information acquisition unit 230, the content extraction unit 240, and the output control unit 250 are realized in software using, for example, a CPU (Central Processing Unit) or the like. Note that part or all of the functions of the server 200 may be realized by the detection device 100 or the terminal device 300.
The terminal device 300 includes, but is not particularly limited to, a mobile phone (including a smartphone), a tablet, notebook, or desktop PC or TV, a portable or installed media player (including a music player, a video display, and the like), a portable or installed game machine, a wearable computer, or the like. The terminal device 300 receives the content information transmitted from the server 200 and outputs it to the user. Note that the functions of the terminal device 300 may be realized by, for example, the same device as the detection device 100. Further, when the system 10 includes a plurality of detection devices 100, some of them may realize the functions of the terminal device 300.
Next, an information processing method according to the first embodiment of the present disclosure will be described. To outline the general flow of the information processing method according to the first embodiment, the server 200 analyzes information including sensing data regarding the state of the user detected by the detection device 100, and acquires context information indicating the state of the user obtained from the analysis. Furthermore, the server 200 extracts one or a plurality of contents from a content group based on the context information.
The first example will be described more specifically below with reference to FIGS. 6 and 7. FIGS. 6 and 7 are explanatory diagrams for describing the first example. As shown in FIG. 6, the first example assumes a case where the user is watching a soccer broadcast on a TV in the living room at home.
The second example will be described more specifically below with reference to FIG. 8. FIG. 8 is an explanatory diagram for describing the second example. As shown in FIG. 8, the second example assumes a case where the user A is chatting with a friend, the user B, while watching a soccer broadcast on a TV in the living room of the user A's home.
The third example will be described more specifically below with reference to FIGS. 9 and 10. FIGS. 9 and 10 are explanatory diagrams for describing the third example. As shown in FIG. 9, the third example assumes a case where the user is on a train and is looking at the screen of the smartphone 100f while listening to music.
The fourth example will be described more specifically below with reference to FIG. 11. FIG. 11 is an explanatory diagram for describing the fourth example. As shown in FIG. 11, the fourth example assumes a case where the user A is spending a break with friends (friends B, C, and D) in a classroom at school.
The second embodiment separately uses context information indicating the state of the user as meta information of the content corresponding to that context information. This meta information is used, for example, when extracting content as described in the first embodiment. That is, in this embodiment, when content is extracted, the meta information associated with the content (corresponding to past content information) and the context information can be used (for example, the meta information and the context information can be collated or compared). Content better suited to the user's state can therefore be extracted.
A schematic functional configuration of the server 400 according to the second embodiment will be described. FIG. 12 shows the schematic functional configuration of the server 400 according to the second embodiment. As can be seen from FIG. 12, the server 400 according to the second embodiment, like the server 200 according to the first embodiment, can include the reception unit 210, the storage 220, the context information acquisition unit 230, the content extraction unit 240, and the transmission unit 260. Furthermore, the server 400 can also include the meta information processing unit 470. The context information acquisition unit 230, the content extraction unit 240, and the meta information processing unit 470 are realized in software using, for example, a CPU or the like.
FIG. 13 is a sequence diagram showing an information processing method according to the second embodiment of the present disclosure. The information processing method according to the second embodiment will be described with reference to FIG. 13. First, steps S101 to S104 are executed. Since these steps are the same as those shown in FIG. 5 in the first embodiment, description thereof is omitted here.
The fifth example will be described more specifically below with reference to FIG. 14. FIG. 14 is an explanatory diagram for describing the fifth example. As shown in the upper part of FIG. 14, the fifth example assumes a case where the user A is listening to music at an outdoor concert venue.
Next, a hardware configuration of an information processing apparatus according to an embodiment of the present disclosure will be described with reference to FIG. 15. FIG. 15 is a block diagram for describing the hardware configuration of the information processing apparatus. The illustrated information processing apparatus 900 can realize, for example, the detection device 100, the server 200, and the terminal device 300 in the above-described embodiments.
Note that the embodiments of the present disclosure described above may include, for example, an information processing method executed by an information processing apparatus or system as described above, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded. The program may also be distributed via a communication line (including wireless communication) such as the Internet.
(1) An information processing apparatus including: a context information acquisition unit that acquires context information regarding a state of a user, obtained by analyzing information including at least one piece of sensing data regarding the user; and a content extraction unit that extracts one or a plurality of contents from a content group based on the context information.
(2) The information processing apparatus according to (1), wherein the at least one piece of sensing data is provided by a motion sensor that detects a motion of the user.
(3) The information processing apparatus according to (1) or (2), wherein the at least one piece of sensing data is provided by a sound sensor that detects sound generated around the user.
(4) The information processing apparatus according to any one of (1) to (3), wherein the at least one piece of sensing data is provided by a biological sensor that detects biological information of the user.
(5) The information processing apparatus according to any one of (1) to (4), wherein the at least one piece of sensing data is provided by a position sensor that detects a position of the user.
(6) The information processing apparatus according to any one of (1) to (5), wherein the information includes user profile information of the user.
(7) The information processing apparatus according to any one of (1) to (6), further including an output control unit that controls output of the one or plurality of contents to the user.
(8) The information processing apparatus according to (7), wherein the output control unit controls the output of the one or plurality of contents based on the context information.
(9) The information processing apparatus according to (8), further including an output unit that outputs the one or plurality of contents.
(10) The information processing apparatus according to any one of (1) to (9), wherein the content extraction unit calculates a degree of matching between the one or plurality of contents and the context information.
(11) The information processing apparatus according to (10), further including an output control unit that controls output of the one or plurality of contents to the user such that information indicating the one or plurality of contents is arranged and output according to the degree of matching.
(12) The information processing apparatus according to any one of (1) to (11), further including a meta information processing unit that associates meta information based on the context information with the one or plurality of contents.
(13) The information processing apparatus according to any one of (1) to (12), further including a sensor that provides the at least one piece of sensing data.
(14) An information processing method including: acquiring context information regarding a user, obtained by analyzing information including at least one piece of sensing data regarding the user; and extracting, by a processor, one or a plurality of contents from a content group based on the context information.
(15) A program for causing a computer to realize: a function of acquiring context information regarding a user, obtained by analyzing information including at least one piece of sensing data regarding the user; and a function of extracting one or a plurality of contents from a content group based on the context information.
100 detection device
100a, 100g, 100h, 100i, 100j smartphone
100b, 100m, 100r wrist wear
100c imaging device
100d access point
100e, 100f microphone
110 sensing unit
110f, 510 camera
110s pulse sensor
130 transmission unit
200, 400 server
210 reception unit
220 storage
230 context information acquisition unit
240 content extraction unit
250 output control unit
260, 340 transmission unit
300 terminal device
300a, 300b TV
300c projector
300d headphones
330 input unit
350 reception unit
360 output control unit
370 output unit
470 meta information processing unit
520 content server
Claims (15)
- An information processing apparatus including: a context information acquisition unit that acquires context information regarding a state of a user, obtained by analyzing information including at least one piece of sensing data regarding the user; and a content extraction unit that extracts one or a plurality of contents from a content group based on the context information.
- The information processing apparatus according to claim 1, wherein the at least one piece of sensing data is provided by a motion sensor that detects a motion of the user.
- The information processing apparatus according to claim 1, wherein the at least one piece of sensing data is provided by a sound sensor that detects sound generated around the user.
- The information processing apparatus according to claim 1, wherein the at least one piece of sensing data is provided by a biological sensor that detects biological information of the user.
- The information processing apparatus according to claim 1, wherein the at least one piece of sensing data is provided by a position sensor that detects a position of the user.
- The information processing apparatus according to claim 1, wherein the information includes profile information of the user.
- The information processing apparatus according to claim 1, further including an output control unit that controls output of the one or plurality of contents to the user.
- The information processing apparatus according to claim 7, wherein the output control unit controls the output of the one or plurality of contents based on the context information.
- The information processing apparatus according to claim 8, further including an output unit that outputs the one or plurality of contents.
- The information processing apparatus according to claim 1, wherein the content extraction unit calculates a degree of matching between the one or plurality of contents and the context information.
- The information processing apparatus according to claim 10, further including an output control unit that controls output of the one or plurality of contents to the user such that information indicating the one or plurality of contents is arranged and output according to the degree of matching.
- The information processing apparatus according to claim 1, further including a meta information processing unit that associates meta information based on the context information with the one or plurality of contents.
- The information processing apparatus according to claim 1, further including a sensor that provides the at least one piece of sensing data.
- An information processing method including: acquiring context information regarding a state of a user, obtained by analyzing information including at least one piece of sensing data regarding the user; and extracting, by a processor, one or a plurality of contents from a content group based on the context information.
- A program for causing a computer to realize: a function of acquiring context information regarding a state of a user, obtained by analyzing information including at least one piece of sensing data regarding the user; and a function of extracting one or a plurality of contents from a content group based on the context information.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017501873A JPWO2016136104A1 (ja) | 2015-02-23 | 2015-12-17 | 情報処理装置、情報処理方法及びプログラム |
US15/548,331 US20180027090A1 (en) | 2015-02-23 | 2015-12-17 | Information processing device, information processing method, and program |
CN201580076170.0A CN107251019A (zh) | 2015-02-23 | 2015-12-17 | 信息处理装置、信息处理方法和程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015033055 | 2015-02-23 | ||
JP2015-033055 | 2015-02-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016136104A1 true WO2016136104A1 (ja) | 2016-09-01 |
Family
ID=56788204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/085377 WO2016136104A1 (ja) | 2015-02-23 | 2015-12-17 | 情報処理装置、情報処理方法及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20180027090A1 (ja) |
JP (1) | JPWO2016136104A1 (ja) |
CN (1) | CN107251019A (ja) |
WO (1) | WO2016136104A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019087779A1 (ja) * | 2017-10-31 | 2019-05-09 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
JP2019148919A (ja) * | 2018-02-26 | 2019-09-05 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | 情報提供システム及び情報提供方法 |
JP2020035406A (ja) * | 2018-08-31 | 2020-03-05 | 大日本印刷株式会社 | 画像提供システム |
WO2020255767A1 (ja) | 2019-06-20 | 2020-12-24 | ソニー株式会社 | 情報処理システム、情報処理方法、及び記録媒体 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017047584A1 (ja) * | 2015-09-18 | 2017-03-23 | 株式会社 東芝 | 街頭情報処理システム、街頭情報処理システムに適用されるクライアントおよびサーバ、ならびにこれらの方法およびプログラム |
US10176846B1 (en) * | 2017-07-20 | 2019-01-08 | Rovi Guides, Inc. | Systems and methods for determining playback points in media assets |
WO2020250080A1 (en) * | 2019-06-10 | 2020-12-17 | Senselabs Technology Private Limited | System and method for context aware digital media management |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004246535A (ja) * | 2003-02-13 | 2004-09-02 | Sony Corp | 再生方法、再生装置およびコンテンツ配信システム |
JP2006059094A (ja) * | 2004-08-19 | 2006-03-02 | Ntt Docomo Inc | サービス選択支援システム、サービス選択支援方法 |
JP2006155157A (ja) * | 2004-11-29 | 2006-06-15 | Sanyo Electric Co Ltd | 自動選曲装置 |
WO2006075512A1 (ja) * | 2005-01-13 | 2006-07-20 | Matsushita Electric Industrial Co., Ltd. | 情報通知制御装置、情報通知方式、およびプログラム |
JP2006262254A (ja) * | 2005-03-18 | 2006-09-28 | Sony Ericsson Mobilecommunications Japan Inc | 携帯端末装置 |
JP2008299631A (ja) * | 2007-05-31 | 2008-12-11 | Sony Ericsson Mobilecommunications Japan Inc | コンテンツ検索装置、コンテンツ検索方法およびコンテンツ検索プログラム |
JP2009067307A (ja) * | 2007-09-14 | 2009-04-02 | Denso Corp | 自動車用音楽再生システム |
JP2009294790A (ja) * | 2008-06-03 | 2009-12-17 | Denso Corp | 自動車用情報提供システム |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001282847A (ja) * | 2000-04-03 | 2001-10-12 | Nec Corp | 感性適応型情報提示装置及びプログラムを記録した機械読み取り可能な記録媒体 |
JP2005032167A (ja) * | 2003-07-11 | 2005-02-03 | Sony Corp | 情報検索装置、情報検索方法、情報検索システム、クライアント装置およびサーバ装置 |
JP2006146630A (ja) * | 2004-11-22 | 2006-06-08 | Sony Corp | コンテンツ選択再生装置、コンテンツ選択再生方法、コンテンツ配信システムおよびコンテンツ検索システム |
JP2007058842A (ja) * | 2005-07-26 | 2007-03-08 | Sony Corp | 情報処理装置、特徴抽出方法、記録媒体、および、プログラム |
JPWO2007066663A1 (ja) * | 2005-12-05 | 2009-05-21 | パイオニア株式会社 | コンテンツ検索装置、コンテンツ検索システム、コンテンツ検索システム用サーバ装置、コンテンツ検索方法及びコンピュータプログラム並びに検索機能付きコンテンツ出力装置 |
CN100539503C (zh) * | 2005-12-31 | 2009-09-09 | 华为技术有限公司 | 信息发布系统、公共媒体信息发布系统和发布方法 |
JP4367663B2 (ja) * | 2007-04-10 | 2009-11-18 | ソニー株式会社 | 画像処理装置、画像処理方法、プログラム |
US10552384B2 (en) * | 2008-05-12 | 2020-02-04 | Blackberry Limited | Synchronizing media files available from multiple sources |
JP2010152679A (ja) * | 2008-12-25 | 2010-07-08 | Toshiba Corp | 情報提示装置および情報提示方法 |
US20100318571A1 (en) * | 2009-06-16 | 2010-12-16 | Leah Pearlman | Selective Content Accessibility in a Social Network |
US9671683B2 (en) * | 2010-12-01 | 2017-06-06 | Intel Corporation | Multiple light source projection system to project multiple images |
US20130219417A1 (en) * | 2012-02-16 | 2013-08-22 | Comcast Cable Communications, Llc | Automated Personalization |
US9704361B1 (en) * | 2012-08-14 | 2017-07-11 | Amazon Technologies, Inc. | Projecting content within an environment |
US20140107531A1 (en) * | 2012-10-12 | 2014-04-17 | At&T Intellectual Property I, Lp | Inference of mental state using sensory data obtained from wearable sensors |
KR20140092634A (ko) * | 2013-01-16 | 2014-07-24 | 삼성전자주식회사 | 전자장치와 그 제어방법 |
US9191914B2 (en) * | 2013-03-15 | 2015-11-17 | Comcast Cable Communications, Llc | Activating devices based on user location |
US20140281975A1 (en) * | 2013-03-15 | 2014-09-18 | Glen J. Anderson | System for adaptive selection and presentation of context-based media in communications |
US9225522B2 (en) * | 2013-12-27 | 2015-12-29 | Linkedin Corporation | Techniques for populating a content stream on a mobile device |
US9712587B1 (en) * | 2014-12-01 | 2017-07-18 | Google Inc. | Identifying and rendering content relevant to a user's current mental state and context |
-
2015
- 2015-12-17 US US15/548,331 patent/US20180027090A1/en not_active Abandoned
- 2015-12-17 CN CN201580076170.0A patent/CN107251019A/zh active Pending
- 2015-12-17 WO PCT/JP2015/085377 patent/WO2016136104A1/ja active Application Filing
- 2015-12-17 JP JP2017501873A patent/JPWO2016136104A1/ja active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004246535A (ja) * | 2003-02-13 | 2004-09-02 | Sony Corp | 再生方法、再生装置およびコンテンツ配信システム |
JP2006059094A (ja) * | 2004-08-19 | 2006-03-02 | Ntt Docomo Inc | サービス選択支援システム、サービス選択支援方法 |
JP2006155157A (ja) * | 2004-11-29 | 2006-06-15 | Sanyo Electric Co Ltd | 自動選曲装置 |
WO2006075512A1 (ja) * | 2005-01-13 | 2006-07-20 | Matsushita Electric Industrial Co., Ltd. | 情報通知制御装置、情報通知方式、およびプログラム |
JP2006262254A (ja) * | 2005-03-18 | 2006-09-28 | Sony Ericsson Mobilecommunications Japan Inc | 携帯端末装置 |
JP2008299631A (ja) * | 2007-05-31 | 2008-12-11 | Sony Ericsson Mobilecommunications Japan Inc | コンテンツ検索装置、コンテンツ検索方法およびコンテンツ検索プログラム |
JP2009067307A (ja) * | 2007-09-14 | 2009-04-02 | Denso Corp | 自動車用音楽再生システム |
JP2009294790A (ja) * | 2008-06-03 | 2009-12-17 | Denso Corp | 自動車用情報提供システム |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019087779A1 (ja) * | 2017-10-31 | 2019-05-09 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
EP3575978A4 (en) * | 2017-10-31 | 2020-04-01 | Sony Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS AND PROGRAM |
JPWO2019087779A1 (ja) * | 2017-10-31 | 2020-09-24 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
JP7327161B2 (ja) | 2017-10-31 | 2023-08-16 | ソニーグループ株式会社 | 情報処理装置、情報処理方法、およびプログラム |
JP2019148919A (ja) * | 2018-02-26 | 2019-09-05 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | 情報提供システム及び情報提供方法 |
JP7154016B2 (ja) | 2018-02-26 | 2022-10-17 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | 情報提供システム及び情報提供方法 |
JP2020035406A (ja) * | 2018-08-31 | 2020-03-05 | 大日本印刷株式会社 | 画像提供システム |
JP7148883B2 (ja) | 2018-08-31 | 2022-10-06 | 大日本印刷株式会社 | 画像提供システム |
WO2020255767A1 (ja) | 2019-06-20 | 2020-12-24 | ソニー株式会社 | 情報処理システム、情報処理方法、及び記録媒体 |
KR20220019683A (ko) | 2019-06-20 | 2022-02-17 | 소니그룹주식회사 | 정보 처리 시스템, 정보 처리 방법 및 기록 매체 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2016136104A1 (ja) | 2017-11-30 |
CN107251019A (zh) | 2017-10-13 |
US20180027090A1 (en) | 2018-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016136104A1 (ja) | 情報処理装置、情報処理方法及びプログラム | |
JP6369462B2 (ja) | クライアント装置、制御方法、システム、およびプログラム | |
KR102229039B1 (ko) | 오디오 활동 추적 및 요약들 | |
CN110780707B (zh) | 信息处理设备、信息处理方法与计算机可读介质 | |
JP6729571B2 (ja) | 情報処理装置、情報処理方法及びプログラム | |
US9467673B2 (en) | Method, system, and computer-readable memory for rhythm visualization | |
JP6760271B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
WO2014181380A1 (ja) | 情報処理装置およびアプリケーション実行方法 | |
WO2017130486A1 (ja) | 情報処理装置、情報処理方法およびプログラム | |
KR102071576B1 (ko) | 콘텐트 재생 방법 및 이를 위한 단말 | |
US10088901B2 (en) | Display device and operating method thereof | |
US11151602B2 (en) | Apparatus, systems and methods for acquiring commentary about a media content event | |
CN109168062A (zh) | 视频播放的展示方法、装置、终端设备及存储介质 | |
JPWO2017064891A1 (ja) | 情報処理システム、情報処理方法、および記憶媒体 | |
CN108763475B (zh) | 一种录制方法、录制装置及终端设备 | |
JP2024107029A (ja) | 情報処理プログラム、情報処理方法、及び情報処理システム | |
US20200301398A1 (en) | Information processing device, information processing method, and program | |
US11593426B2 (en) | Information processing apparatus and information processing method | |
CN110291768A (zh) | 信息处理装置、信息处理方法和信息处理系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15883394 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017501873 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15548331 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15883394 Country of ref document: EP Kind code of ref document: A1 |